- On January 1, 2020 this library will no longer support Python 2 on the latest released version. - Previously released library versions will continue to be available. For more information please + As of January 1, 2020 this library no longer supports Python 2 on the latest released version. + Library versions released prior to that date will continue to be available. For more information please visit Python 2 support on Google Cloud.
{% block body %} {% endblock %} diff --git a/docs/advanced-session-pool-topics.rst b/docs/advanced-session-pool-topics.rst index 1b21fdcc9b..ea64c98a10 100644 --- a/docs/advanced-session-pool-topics.rst +++ b/docs/advanced-session-pool-topics.rst @@ -6,7 +6,7 @@ Custom Session Pool Implementations You can supply your own pool implementation, which must satisfy the contract laid out in -:class:`~google.cloud.spanner.pool.AbstractSessionPool`: +:class:`~google.cloud.spanner_v1.pool.AbstractSessionPool`: .. code-block:: python @@ -35,11 +35,11 @@ Lowering latency for read / query operations Some applications may need to minimize latency for read operations, including particularly the overhead of making an API request to create or refresh a -session. :class:`~google.cloud.spanner.pool.PingingPool` is designed for such +session. :class:`~google.cloud.spanner_v1.pool.PingingPool` is designed for such applications, which need to configure a background thread to do the work of keeping the sessions fresh. -Create an instance of :class:`~google.cloud.spanner.pool.PingingPool`: +Create an instance of :class:`~google.cloud.spanner_v1.pool.PingingPool`: .. code-block:: python @@ -74,12 +74,12 @@ Lowering latency for mixed read-write operations Some applications may need to minimize latency for read write operations, including particularly the overhead of making an API request to create or refresh a session or to begin a session's transaction. -:class:`~google.cloud.spanner.pool.TransactionPingingPool` is designed for +:class:`~google.cloud.spanner_v1.pool.TransactionPingingPool` is designed for such applications, which need to configure a background thread to do the work of keeping the sessions fresh and starting their transactions after use. Create an instance of -:class:`~google.cloud.spanner.pool.TransactionPingingPool`: +:class:`~google.cloud.spanner_v1.pool.TransactionPingingPool`: .. code-block:: python diff --git a/docs/api-reference.rst b/docs/api-reference.rst deleted file mode 100644 index c767b23afa..0000000000 --- a/docs/api-reference.rst +++ /dev/null @@ -1,33 +0,0 @@ -API Reference -============= - -The following classes and methods constitute the Spanner client. -Most likely, you will be interacting almost exclusively with these: - -.. toctree:: - :maxdepth: 1 - - client-api - instance-api - database-api - session-api - keyset-api - snapshot-api - batch-api - transaction-api - streamed-api - - -The classes and methods above depend on the following, lower-level -classes and methods. Documentation for these is provided for completion, -and some advanced use cases may wish to interact with these directly: - -.. toctree:: - :maxdepth: 1 - - gapic/v1/api - gapic/v1/types - gapic/v1/admin_database_api - gapic/v1/admin_database_types - gapic/v1/admin_instance_api - gapic/v1/admin_instance_types diff --git a/docs/batch-usage.rst b/docs/batch-usage.rst index 419ca106e6..0da1086779 100644 --- a/docs/batch-usage.rst +++ b/docs/batch-usage.rst @@ -1,36 +1,64 @@ Batching Modifications ###################### -A :class:`~google.cloud.spanner.batch.Batch` represents a set of data +A :class:`~google.cloud.spanner_v1.batch.Batch` represents a set of data modification operations to be performed on tables in a database. Use of a ``Batch`` does not require creating an explicit -:class:`~google.cloud.spanner.snapshot.Snapshot` or -:class:`~google.cloud.spanner.transaction.Transaction`. 
Until
-:meth:`~google.cloud.spanner.batch.Batch.commit` is called on a ``Batch``,
+:class:`~google.cloud.spanner_v1.snapshot.Snapshot` or
+:class:`~google.cloud.spanner_v1.transaction.Transaction`. Until
+:meth:`~google.cloud.spanner_v1.batch.Batch.commit` is called on a ``Batch``,
 no changes are propagated to the back-end.
 
-Starting a Batch
-----------------
+Use a Batch via BatchCheckout
+-----------------------------
 
-Construct a :class:`~google.cloud.spanner.batch.Batch` object from a :class:`~google.cloud.spanner.database.Database` object:
+:meth:`Database.batch` creates a :class:`~google.cloud.spanner_v1.database.BatchCheckout`
+instance to use as a context manager to handle creating and committing a
+:class:`~google.cloud.spanner_v1.batch.Batch`. The
+:class:`BatchCheckout` will automatically call
+:meth:`~google.cloud.spanner_v1.batch.Batch.commit` if the ``with`` block exits
+without raising an exception.
 
 .. code:: python
 
-    from google.cloud import spanner
+    from google.cloud import spanner
+    from google.cloud.spanner import KeySet
 
     client = spanner.Client()
     instance = client.instance(INSTANCE_NAME)
     database = instance.database(DATABASE_NAME)
 
-    batch = database.batch()
+    to_delete = KeySet(keys=[
+        ('bharney@example.com',),
+        ('nonesuch@example.com',),
+    ])
+
+    with database.batch() as batch:
+
+        batch.insert(
+            'citizens', columns=['email', 'first_name', 'last_name', 'age'],
+            values=[
+                ['phred@exammple.com', 'Phred', 'Phlyntstone', 32],
+                ['bharney@example.com', 'Bharney', 'Rhubble', 31],
+            ])
+
+        batch.update(
+            'citizens', columns=['email', 'age'],
+            values=[
+                ['phred@exammple.com', 33],
+                ['bharney@example.com', 32],
+            ])
+
+        ...
+
+        batch.delete('citizens', to_delete)
 
 Inserting records using a Batch
 -------------------------------
 
-:meth:`Batch.insert` adds one or more new records to a table. Fails if
-any of the records already exists.
+:meth:`Batch.insert` adds one or more new records to a table. This fails if
+any of the records already exist.
 
 .. code:: python
 
@@ -53,8 +81,8 @@ any of the records already exists.
 Update records using a Batch
 -------------------------------
 
-:meth:`Batch.update` updates one or more existing records in a table. Fails
-if any of the records does not already exist.
+:meth:`Batch.update` updates one or more existing records in a table. This fails
+if any of the records do not already exist.
 
 .. code:: python
 
@@ -127,8 +155,8 @@ column values are set to null.
 Delete records using a Batch
 ----------------------------
 
-:meth:`Batch.delete` removes one or more records from a table. Non-existent
-rows do not cause errors.
+:meth:`Batch.delete` removes one or more records from a table. Attempting to delete
+rows that do not exist will not cause errors.
 
 .. code:: python
 
@@ -151,50 +179,13 @@ After describing the modifications to be made to table data via the
 the back-end by calling :meth:`Batch.commit`, which makes the ``Commit``
 API call.
 
-.. code:: python
-
-    batch.commit()
-
-
-Use a Batch as a Context Manager
---------------------------------
-
-Rather than calling :meth:`Batch.commit` manually, you can use the
-:class:`Batch` instance as a context manager, and have it called automatically
-if the ``with`` block exits without raising an exception.
+You do not need to call this yourself as
+:class:`~google.cloud.spanner_v1.database.BatchCheckout` will call
+this method automatically upon exiting the ``with`` block.
 
 .. 
code:: python - from google.cloud.spanner import KeySet - - client = spanner.Client() - instance = client.instance(INSTANCE_NAME) - database = instance.database(DATABASE_NAME) - - to_delete = KeySet(keys=[ - ('bharney@example.com',) - ('nonesuch@example.com',) - ]) - - with database.batch() as batch: - - batch.insert( - 'citizens', columns=['email', 'first_name', 'last_name', 'age'], - values=[ - ['phred@exammple.com', 'Phred', 'Phlyntstone', 32], - ['bharney@example.com', 'Bharney', 'Rhubble', 31], - ]) - - batch.update( - 'citizens', columns=['email', 'age'], - values=[ - ['phred@exammple.com', 33], - ['bharney@example.com', 32], - ]) - - ... - - batch.delete('citizens', to_delete) + batch.commit() Next Step diff --git a/docs/client-usage.rst b/docs/client-usage.rst index 801c9cb135..7ba3390e59 100644 --- a/docs/client-usage.rst +++ b/docs/client-usage.rst @@ -1,5 +1,5 @@ -Spanner Client -============== +Spanner Client Usage +==================== .. _spanner-client: @@ -16,17 +16,6 @@ and creating other objects: from google.cloud import spanner client = spanner.Client() -Long-lived Defaults -------------------- - -When creating a :class:`~google.cloud.spanner_v1.client.Client`, the -``user_agent`` and ``timeout_seconds`` arguments have sensible -defaults -(:data:`~google.cloud.spanner_v1.client.DEFAULT_USER_AGENT` and -:data:`~google.cloud.spanner_v1.client.DEFAULT_TIMEOUT_SECONDS`). -However, you may over-ride them and these will be used throughout all API -requests made with the ``client`` you create. - Configuration ------------- @@ -61,18 +50,29 @@ Configuration Be sure to use the **Project ID**, not the **Project Number**. - -Warnings about Multiprocessing +Using a Cloud Spanner Emulator ------------------------------ -.. warning:: - When using multiprocessing, the application may hang if a - :class:`Client ` instance is created - before :class:`multiprocessing.Pool` or :class:`multiprocessing.Process` - invokes :func:`os.fork`. The issue is under investigation, but may be only - happening on Macintosh and not Linux. See `GRPC/GRPC#12455 - `_ for - more information. +There are two ways to use the client with a Cloud Spanner emulator: programmatically or via an environment variable. + +To programmatically use an emulator, you must specify the project, the endpoint of the emulator, and use anonymous credentials: + +.. code:: python + + from google.cloud import spanner + from google.auth.credentials import AnonymousCredentials + + client = spanner.Client( + project='my-project', + client_options={"api_endpoint": "0.0.0.0:9010"}, + credentials=AnonymousCredentials() + ) + +To use an emulator via an environment variable, set the `SPANNER_EMULATOR_HOST` environment variable to the emulator endpoint: + +.. code:: + + export SPANNER_EMULATOR_HOST=0.0.0.0:9010 Next Step --------- diff --git a/docs/conf.py b/docs/conf.py index 4fffc063c8..78e49ed55c 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -1,4 +1,17 @@ # -*- coding: utf-8 -*- +# Copyright 2024 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. # # google-cloud-spanner documentation build configuration file # @@ -20,12 +33,16 @@ # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath("..")) +# For plugins that can not read conf.py. +# See also: https://github.com/docascode/sphinx-docfx-yaml/issues/85 +sys.path.insert(0, os.path.abspath(".")) + __version__ = "" # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. -needs_sphinx = "1.6.3" +needs_sphinx = "1.5.5" # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom @@ -35,6 +52,7 @@ "sphinx.ext.autosummary", "sphinx.ext.intersphinx", "sphinx.ext.coverage", + "sphinx.ext.doctest", "sphinx.ext.napoleon", "sphinx.ext.todo", "sphinx.ext.viewcode", @@ -43,17 +61,13 @@ # autodoc/autosummary flags autoclass_content = "both" -autodoc_default_flags = ["members"] +autodoc_default_options = {"members": True} autosummary_generate = True # Add any paths that contain templates here, relative to this directory. templates_path = ["_templates"] -# Allow markdown includes (so releases.md can include CHANGLEOG.md) -# http://www.sphinx-doc.org/en/master/markdown.html -source_parsers = {".md": "recommonmark.parser.CommonMarkParser"} - # The suffix(es) of source filenames. # You can specify multiple suffix as a list of string: # source_suffix = ['.rst', '.md'] @@ -62,13 +76,13 @@ # The encoding of source files. # source_encoding = 'utf-8-sig' -# The master toctree document. -master_doc = "index" +# The root toctree document. +root_doc = "index" # General information about the project. -project = u"google-cloud-spanner" -copyright = u"2019, Google" -author = u"Google APIs" +project = "google-cloud-spanner" +copyright = "2019, Google" +author = "Google APIs" # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the @@ -94,7 +108,13 @@ # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. -exclude_patterns = ["_build"] +exclude_patterns = [ + "_build", + "**/.nox/**/*", + "samples/AUTHORING_GUIDE.md", + "samples/CONTRIBUTING.md", + "samples/snippets/README.rst", +] # The reST default role (used for this markup: `text`) to use for all # documents. @@ -260,9 +280,9 @@ # author, documentclass [howto, manual, or own class]). latex_documents = [ ( - master_doc, + root_doc, "google-cloud-spanner.tex", - u"google-cloud-spanner Documentation", + "google-cloud-spanner Documentation", author, "manual", ) @@ -295,9 +315,9 @@ # (source start file, name, description, authors, manual section). man_pages = [ ( - master_doc, + root_doc, "google-cloud-spanner", - u"google-cloud-spanner Documentation", + "google-cloud-spanner Documentation", [author], 1, ) @@ -314,9 +334,9 @@ # dir menu entry, description, category) texinfo_documents = [ ( - master_doc, + root_doc, "google-cloud-spanner", - u"google-cloud-spanner Documentation", + "google-cloud-spanner Documentation", author, "google-cloud-spanner", "google-cloud-spanner Library", @@ -339,10 +359,15 @@ # Example configuration for intersphinx: refer to the Python standard library. 
intersphinx_mapping = { - "python": ("http://python.readthedocs.org/en/latest/", None), - "google-auth": ("https://google-auth.readthedocs.io/en/stable", None), - "google.api_core": ("https://googleapis.dev/python/google-api-core/latest/", None), - "grpc": ("https://grpc.io/grpc/python/", None), + "python": ("https://python.readthedocs.org/en/latest/", None), + "google-auth": ("https://googleapis.dev/python/google-auth/latest/", None), + "google.api_core": ( + "https://googleapis.dev/python/google-api-core/latest/", + None, + ), + "grpc": ("https://grpc.github.io/grpc/python/", None), + "proto-plus": ("https://proto-plus-python.readthedocs.io/en/latest/", None), + "protobuf": ("https://googleapis.dev/python/protobuf/latest/", None), } diff --git a/docs/database-usage.rst b/docs/database-usage.rst index 8989501a7d..afcfa06cb2 100644 --- a/docs/database-usage.rst +++ b/docs/database-usage.rst @@ -1,7 +1,7 @@ -Database Admin -============== +Database Admin Usage +==================== -After creating a :class:`~google.cloud.spanner.instance.Instance`, you can +After creating an :class:`~google.cloud.spanner_v1.instance.Instance`, you can interact with individual databases for that instance. @@ -9,30 +9,31 @@ List Databases -------------- To iterate over all existing databases for an instance, use its -:meth:`~google.cloud.spanner.instance.Instance.list_databases` method: +:meth:`~google.cloud.spanner_v1.instance.Instance.list_databases` method: .. code:: python for database in instance.list_databases(): # `database` is a `Database` object. -This method yields :class:`~.spanner_admin_database_v1.types.Database` +This method yields :class:`~google.cloud.spanner_v1.database.Database` objects. Database Factory ---------------- -To create a :class:`~google.cloud.spanner.database.Database` object: +To create a :class:`~google.cloud.spanner_v1.database.Database` object: .. code:: python database = instance.database(database_id, ddl_statements) -- ``ddl_statements`` is a string containing DDL for the new database. +- ``ddl_statements`` is a list of strings containing DDL statements for the new database. -You can also use :meth:`Instance.database` to create a local wrapper for -a database that has already been created: +You can also use the :meth:`~google.cloud.spanner_v1.instance.Instance.database` method +on an :class:`~google.cloud.spanner_v1.instance.Instance` object to create a local wrapper +for a database that has already been created: .. code:: python @@ -43,7 +44,7 @@ Create a new Database --------------------- After creating the database object, use its -:meth:`~google.cloud.spanner.database.Database.create` method to +:meth:`~google.cloud.spanner_v1.database.Database.create` method to trigger its creation on the server: .. code:: python @@ -52,8 +53,8 @@ trigger its creation on the server: .. note:: - Creating an instance triggers a "long-running operation" and - returns an :class:`~concurrent.futures.Future`-like object. Use + Creating a database triggers a "long-running operation" and + returns a :class:`~concurrent.futures.Future`-like object. Use the :meth:`~concurrent.futures.Future.result` method to wait for and inspect the result. @@ -62,21 +63,21 @@ Update an existing Database --------------------------- After creating the database object, you can apply additional DDL statements -via its :meth:`~google.cloud.spanner.database.Database.update_ddl` method: +via its :meth:`~google.cloud.spanner_v1.database.Database.update_ddl` method: .. 
code:: python operation = database.update_ddl(ddl_statements, operation_id) -- ``ddl_statements`` is a string containing DDL to be applied to - the database. +- ``ddl_statements`` is a list of strings containing DDL statements to be + applied to the database. - ``operation_id`` is a string ID for the long-running operation. .. note:: - Update an instance triggers a "long-running operation" and - returns a :class:`google.cloud.spanner.database.Operation` + Updating a database triggers a "long-running operation" and + returns an :class:`~google.cloud.spanner_v1.database.Operation` object. See :ref:`check-on-current-database-operation` for polling to find out if the operation is completed. @@ -85,7 +86,7 @@ Drop a Database --------------- Drop a database using its -:meth:`~google.cloud.spanner.database.Database.drop` method: +:meth:`~google.cloud.spanner_v1.database.Database.drop` method: .. code:: python @@ -97,14 +98,15 @@ Drop a database using its Check on Current Database Operation ----------------------------------- -The :meth:`~google.cloud.spanner.database.Database.create` and -:meth:`~google.cloud.spanner.database.Database.update` methods of instance -object trigger long-running operations on the server, and return instances +The :meth:`~google.cloud.spanner_v1.database.Database.create` and +:meth:`~google.cloud.spanner_v1.database.Database.update_ddl` methods of the +:class:`~google.cloud.spanner_v1.database.Database` object trigger +long-running operations on the server, and return operations conforming to the :class:`~.concurrent.futures.Future` class. .. code:: python - >>> operation = instance.create() + >>> operation = database.create() >>> operation.result() @@ -116,7 +118,7 @@ Use a Snapshot to Read / Query the Database A snapshot represents a read-only point-in-time view of the database. -Calling :meth:`~google.cloud.spanner.database.Database.snapshot` with +Calling :meth:`~google.cloud.spanner_v1.database.Database.snapshot` with no arguments creates a snapshot with strong concurrency: .. code:: python @@ -124,16 +126,17 @@ no arguments creates a snapshot with strong concurrency: with database.snapshot() as snapshot: do_something_with(snapshot) -See :class:`~google.cloud.spanner.snapshot.Snapshot` for the other options +See :class:`~google.cloud.spanner_v1.snapshot.Snapshot` for the other options which can be passed. .. note:: - :meth:`~google.cloud.spanner.database.Database.snapshot` returns an + :meth:`~google.cloud.spanner_v1.database.Database.snapshot` returns an object intended to be used as a Python context manager (i.e., as the - target of a ``with`` statement). Use the instance, and any result - sets returned by its ``read`` or ``execute_sql`` methods, only inside - the block created by the ``with`` statement. + target of a ``with`` statement). Perform all iterations within the + context of the ``with database.snapshot()`` block. + + See :doc:`snapshot-usage` for more complete examples of snapshot usage. @@ -151,7 +154,7 @@ on the rows of tables in the database. .. note:: - :meth:`~google.cloud.spanner.database.Database.batch` returns an + :meth:`~google.cloud.spanner_v1.database.Database.batch` returns an object intended to be used as a Python context manager (i.e., as the target of a ``with`` statement). It applies any changes made inside the block of its ``with`` statement when exiting the block, unless an @@ -187,15 +190,15 @@ transaction as a required argument: .. 
note:: - :meth:`~google.cloud.spanner.database.Database.run_in_transaction` + :meth:`~google.cloud.spanner_v1.database.Database.run_in_transaction` commits the transaction automatically if the "unit of work" function returns without raising an exception. .. note:: - :meth:`~google.cloud.spanner.database.Database.run_in_transaction` - retries the "unit of work" function if the read / query operatoins - or the commit are aborted due to concurrent updates + :meth:`~google.cloud.spanner_v1.database.Database.run_in_transaction` + retries the "unit of work" function if the read / query operations + or the commit are aborted due to concurrent updates. See :doc:`transaction-usage` for more complete examples of transaction usage. @@ -203,10 +206,10 @@ Configuring a session pool for a database ----------------------------------------- Under the covers, the ``snapshot``, ``batch``, and ``run_in_transaction`` -methods use a pool of :class:`~google.cloud.spanner.session.Session` objects +methods use a pool of :class:`~google.cloud.spanner_v1.session.Session` objects to manage their communication with the back-end. You can configure one of the pools manually to control the number of sessions, timeouts, etc., -and then passing it to the :class:`~google.cloud.spanner.database.Database` +and then pass it to the :class:`~google.cloud.spanner_v1.database.Database` constructor: .. code-block:: python @@ -221,12 +224,12 @@ constructor: pool = spanner.FixedSizePool(size=10, default_timeout=5) database = instance.database(DATABASE_NAME, pool=pool) -Note that creating a database with a pool may presume that its database -already exists, as it may need to pre-create sessions (rather than creating -them on demand, as the default implementation does). +Note that creating a database with a pool will require the database to +already exist if the pool implementation needs to pre-create sessions +(rather than creating them on demand, as the default implementation does). You can supply your own pool implementation, which must satisfy the -contract laid out in :class:`~google.cloud.spanner.pool.AbstractSessionPool`: +contract laid out in :class:`~google.cloud.spanner_v1.pool.AbstractSessionPool`: .. code-block:: python diff --git a/docs/gapic/v1/admin_database_api.rst b/docs/gapic/v1/admin_database_api.rst deleted file mode 100644 index c63f242e85..0000000000 --- a/docs/gapic/v1/admin_database_api.rst +++ /dev/null @@ -1,6 +0,0 @@ -Spanner Admin Database Client API -================================= - -.. automodule:: google.cloud.spanner_admin_database_v1 - :members: - :inherited-members: diff --git a/docs/gapic/v1/admin_database_types.rst b/docs/gapic/v1/admin_database_types.rst deleted file mode 100644 index fa9aaa73b1..0000000000 --- a/docs/gapic/v1/admin_database_types.rst +++ /dev/null @@ -1,6 +0,0 @@ -Spanner Admin Database Client Types -=================================== - -.. automodule:: google.cloud.spanner_admin_database_v1.types - :members: - :noindex: diff --git a/docs/gapic/v1/admin_instance_api.rst b/docs/gapic/v1/admin_instance_api.rst deleted file mode 100644 index c8c320a6cf..0000000000 --- a/docs/gapic/v1/admin_instance_api.rst +++ /dev/null @@ -1,6 +0,0 @@ -Spanner Admin Instance Client API -================================= - -.. 
automodule:: google.cloud.spanner_admin_instance_v1 - :members: - :inherited-members: diff --git a/docs/gapic/v1/admin_instance_types.rst b/docs/gapic/v1/admin_instance_types.rst deleted file mode 100644 index f8f3afa5ff..0000000000 --- a/docs/gapic/v1/admin_instance_types.rst +++ /dev/null @@ -1,6 +0,0 @@ -Spanner Admin Instance Client Types -=================================== - -.. automodule:: google.cloud.spanner_admin_instance_v1.types - :members: - :noindex: diff --git a/docs/gapic/v1/api.rst b/docs/gapic/v1/api.rst deleted file mode 100644 index 79e4835f22..0000000000 --- a/docs/gapic/v1/api.rst +++ /dev/null @@ -1,6 +0,0 @@ -Spanner Client API -================== - -.. automodule:: google.cloud.spanner_v1 - :members: - :inherited-members: diff --git a/docs/gapic/v1/transactions.rst b/docs/gapic/v1/transactions.rst deleted file mode 100644 index d34af43b4a..0000000000 --- a/docs/gapic/v1/transactions.rst +++ /dev/null @@ -1,241 +0,0 @@ -.. - This page is pulled from the TransactionOption type, where this entire - kaboodle is auto-generated. Sphinx does not particularly appreciate - entire narrative documentation, complete with headers, in an arbitrary - class docstring, and complains about this, so I (lukesneeringer@) - manually copied it over here. - - This should probably be updated when the Spanner code is re-generated. - This will be easy to remember because the source that needs to be copied - will be dropped in transaction_pb2.py and Sphinx will complain loudly - about it. - - Internal Google ticket: b/65243734 - -:orphan: - -.. _spanner-txn: - -Transactions -============ - -Each session can have at most one active transaction at a time. After -the active transaction is completed, the session can immediately be -re-used for the next transaction. It is not necessary to create a new -session for each transaction. - -Transaction Modes -================= - -Cloud Spanner supports two transaction modes: - -1. Locking read-write. This type of transaction is the only way to write - data into Cloud Spanner. These transactions rely on pessimistic - locking and, if necessary, two-phase commit. Locking read-write - transactions may abort, requiring the application to retry. - -2. Snapshot read-only. This transaction type provides guaranteed - consistency across several reads, but does not allow writes. Snapshot - read-only transactions can be configured to read at timestamps in the - past. Snapshot read-only transactions do not need to be committed. - -For transactions that only read, snapshot read-only transactions provide -simpler semantics and are almost always faster. In particular, read-only -transactions do not take locks, so they do not conflict with read-write -transactions. As a consequence of not taking locks, they also do not -abort, so retry loops are not needed. - -Transactions may only read/write data in a single database. They may, -however, read/write data in different tables within that database. - -Locking Read-Write Transactions -------------------------------- - -Locking transactions may be used to atomically read-modify-write data -anywhere in a database. This type of transaction is externally -consistent. - -Clients should attempt to minimize the amount of time a transaction is -active. Faster transactions commit with higher probability and cause -less contention. 
Cloud Spanner attempts to keep read locks active as -long as the transaction continues to do reads, and the transaction has -not been terminated by [Commit][google.spanner.v1.Spanner.Commit] or -[Rollback][google.spanner.v1.Spanner.Rollback]. Long periods of -inactivity at the client may cause Cloud Spanner to release a -transaction's locks and abort it. - -Reads performed within a transaction acquire locks on the data being -read. Writes can only be done at commit time, after all reads have been -completed. Conceptually, a read-write transaction consists of zero or -more reads or SQL queries followed by -[Commit][google.spanner.v1.Spanner.Commit]. At any time before -[Commit][google.spanner.v1.Spanner.Commit], the client can send a -[Rollback][google.spanner.v1.Spanner.Rollback] request to abort the -transaction. - -Semantics -~~~~~~~~~ - -Cloud Spanner can commit the transaction if all read locks it acquired -are still valid at commit time, and it is able to acquire write locks -for all writes. Cloud Spanner can abort the transaction for any reason. -If a commit attempt returns ``ABORTED``, Cloud Spanner guarantees that -the transaction has not modified any user data in Cloud Spanner. - -Unless the transaction commits, Cloud Spanner makes no guarantees about -how long the transaction's locks were held for. It is an error to use -Cloud Spanner locks for any sort of mutual exclusion other than between -Cloud Spanner transactions themselves. - -Retrying Aborted Transactions -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -When a transaction aborts, the application can choose to retry the whole -transaction again. To maximize the chances of successfully committing -the retry, the client should execute the retry in the same session as -the original attempt. The original session's lock priority increases -with each consecutive abort, meaning that each attempt has a slightly -better chance of success than the previous. - -Under some circumstances (e.g., many transactions attempting to modify -the same row(s)), a transaction can abort many times in a short period -before successfully committing. Thus, it is not a good idea to cap the -number of retries a transaction can attempt; instead, it is better to -limit the total amount of wall time spent retrying. - -Idle Transactions -~~~~~~~~~~~~~~~~~ - -A transaction is considered idle if it has no outstanding reads or SQL -queries and has not started a read or SQL query within the last 10 -seconds. Idle transactions can be aborted by Cloud Spanner so that they -don't hold on to locks indefinitely. In that case, the commit will fail -with error ``ABORTED``. - -If this behavior is undesirable, periodically executing a simple SQL -query in the transaction (e.g., ``SELECT 1``) prevents the transaction -from becoming idle. - -Snapshot Read-Only Transactions -------------------------------- - -Snapshot read-only transactions provides a simpler method than locking -read-write transactions for doing several consistent reads. However, -this type of transaction does not support writes. - -Snapshot transactions do not take locks. Instead, they work by choosing -a Cloud Spanner timestamp, then executing all reads at that timestamp. -Since they do not acquire locks, they do not block concurrent read-write -transactions. - -Unlike locking read-write transactions, snapshot read-only transactions -never abort. 
They can fail if the chosen read timestamp is garbage -collected; however, the default garbage collection policy is generous -enough that most applications do not need to worry about this in -practice. - -Snapshot read-only transactions do not need to call -[Commit][google.spanner.v1.Spanner.Commit] or -[Rollback][google.spanner.v1.Spanner.Rollback] (and in fact are not -permitted to do so). - -To execute a snapshot transaction, the client specifies a timestamp -bound, which tells Cloud Spanner how to choose a read timestamp. - -The types of timestamp bound are: - -- Strong (the default). -- Bounded staleness. -- Exact staleness. - -If the Cloud Spanner database to be read is geographically distributed, -stale read-only transactions can execute more quickly than strong or -read-write transaction, because they are able to execute far from the -leader replica. - -Each type of timestamp bound is discussed in detail below. - -Strong -~~~~~~ - -Strong reads are guaranteed to see the effects of all transactions that -have committed before the start of the read. Furthermore, all rows -yielded by a single read are consistent with each other -- if any part -of the read observes a transaction, all parts of the read see the -transaction. - -Strong reads are not repeatable: two consecutive strong read-only -transactions might return inconsistent results if there are concurrent -writes. If consistency across reads is required, the reads should be -executed within a transaction or at an exact read timestamp. - -See -[TransactionOptions.ReadOnly.strong][google.spanner.v1.TransactionOptions.ReadOnly.strong]. - -Exact Staleness -~~~~~~~~~~~~~~~ - -These timestamp bounds execute reads at a user-specified timestamp. -Reads at a timestamp are guaranteed to see a consistent prefix of the -global transaction history: they observe modifications done by all -transactions with a commit timestamp <= the read timestamp, and observe -none of the modifications done by transactions with a larger commit -timestamp. They will block until all conflicting transactions that may -be assigned commit timestamps <= the read timestamp have finished. - -The timestamp can either be expressed as an absolute Cloud Spanner -commit timestamp or a staleness relative to the current time. - -These modes do not require a "negotiation phase" to pick a timestamp. As -a result, they execute slightly faster than the equivalent boundedly -stale concurrency modes. On the other hand, boundedly stale reads -usually return fresher results. - -See -[TransactionOptions.ReadOnly.read\_timestamp][google.spanner.v1.TransactionOptions.ReadOnly.read\_timestamp] -and -[TransactionOptions.ReadOnly.exact\_staleness][google.spanner.v1.TransactionOptions.ReadOnly.exact\_staleness]. - -Bounded Staleness -~~~~~~~~~~~~~~~~~ - -Bounded staleness modes allow Cloud Spanner to pick the read timestamp, -subject to a user-provided staleness bound. Cloud Spanner chooses the -newest timestamp within the staleness bound that allows execution of the -reads at the closest available replica without blocking. - -All rows yielded are consistent with each other -- if any part of the -read observes a transaction, all parts of the read see the transaction. -Boundedly stale reads are not repeatable: two stale reads, even if they -use the same staleness bound, can execute at different timestamps and -thus return inconsistent results. - -Boundedly stale reads execute in two phases: the first phase negotiates -a timestamp among all replicas needed to serve the read. 
In the second -phase, reads are executed at the negotiated timestamp. - -As a result of the two phase execution, bounded staleness reads are -usually a little slower than comparable exact staleness reads. However, -they are typically able to return fresher results, and are more likely -to execute at the closest replica. - -Because the timestamp negotiation requires up-front knowledge of which -rows will be read, it can only be used with single-use read-only -transactions. - -See -[TransactionOptions.ReadOnly.max\_staleness][google.spanner.v1.TransactionOptions.ReadOnly.max\_staleness] -and -[TransactionOptions.ReadOnly.min\_read\_timestamp][google.spanner.v1.TransactionOptions.ReadOnly.min\_read\_timestamp]. - -Old Read Timestamps and Garbage Collection -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Cloud Spanner continuously garbage collects deleted and overwritten data -in the background to reclaim storage space. This process is known as -"version GC". By default, version GC reclaims versions after they are -one hour old. Because of this, Cloud Spanner cannot perform reads at -read timestamps more than one hour in the past. This restriction also -applies to in-progress reads and/or SQL queries whose timestamp become -too old while executing. Reads and SQL queries with too-old read -timestamps fail with the error ``FAILED_PRECONDITION``. diff --git a/docs/gapic/v1/types.rst b/docs/gapic/v1/types.rst deleted file mode 100644 index 54424febf3..0000000000 --- a/docs/gapic/v1/types.rst +++ /dev/null @@ -1,6 +0,0 @@ -Spanner Client Types -=================================== - -.. automodule:: google.cloud.spanner_v1.types - :members: - :noindex: diff --git a/docs/index.rst b/docs/index.rst index 729f42d0e0..0de0483409 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -1,26 +1,51 @@ .. include:: README.rst +.. include:: multiprocessing.rst + Usage Documentation ------------------- .. toctree:: - :maxdepth: 1 - :titlesonly: + :maxdepth: 2 client-usage - instance-usage - database-usage + table-usage batch-usage snapshot-usage transaction-usage + database-usage + instance-usage + API Documentation ----------------- .. toctree:: :maxdepth: 1 :titlesonly: - api-reference advanced-session-pool-topics + opentelemetry-tracing + + spanner_v1/client + spanner_v1/instance + spanner_v1/database + spanner_v1/table + spanner_v1/session + spanner_v1/keyset + spanner_v1/snapshot + spanner_v1/batch + spanner_v1/transaction + spanner_v1/streamed + + spanner_v1/services_ + spanner_v1/types_ + spanner_admin_database_v1/services_ + spanner_admin_database_v1/types_ + spanner_admin_database_v1/database_admin + spanner_admin_instance_v1/services_ + spanner_admin_instance_v1/types_ + spanner_admin_instance_v1/instance_admin + + Changelog --------- @@ -31,3 +56,8 @@ For a list of all ``google-cloud-spanner`` releases: :maxdepth: 2 changelog + +.. toctree:: + :hidden: + + summary_overview.md diff --git a/docs/instance-usage.rst b/docs/instance-usage.rst index 909e36b93f..b45b69acc6 100644 --- a/docs/instance-usage.rst +++ b/docs/instance-usage.rst @@ -1,7 +1,7 @@ -Instance Admin -============== +Instance Admin Usage +==================== -After creating a :class:`~google.cloud.spanner.client.Client`, you can +After creating a :class:`~google.cloud.spanner_v1.client.Client`, you can interact with individual instances for a project. Instance Configurations @@ -12,7 +12,7 @@ specifying the location and other parameters for a set of instances. These configurations are defined by the server, and cannot be changed. 
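For example, configurations can be listed and inspected, though never modified; a minimal sketch, assuming a ``client`` built as shown on the previous pages:

.. code:: python

    for config in client.list_instance_configs():
        # Each `config` is server-defined and read-only.
        print(config.name, config.display_name)
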
To iterate over all instance configurations available to your project, use the
-:meth:`~google.cloud.spanner.client.Client.list_instance_configs`
+:meth:`~google.cloud.spanner_v1.client.Client.list_instance_configs`
 method of the client:
 
 .. code:: python
 
@@ -22,7 +22,7 @@ method of the client:
 
 
 To fetch a single instance configuration, use the
-:meth:`~google.cloud.spanner.client.Client.get_instance_configuration`
+:meth:`~google.cloud.spanner_v1.client.Client.get_instance_configuration`
 method of the client:
 
 .. code:: python
 
@@ -37,7 +37,7 @@ List Instances
 --------------
 
 If you want a comprehensive list of all existing instances, iterate over the
-:meth:`~google.cloud.spanner.client.Client.list_instances` method of
+:meth:`~google.cloud.spanner_v1.client.Client.list_instances` method of
 the client:
 
 .. code:: python
 
@@ -52,7 +52,7 @@ objects.
 Instance Factory
 ----------------
 
-To create a :class:`~google.cloud.spanner.instance.Instance` object:
+To create a :class:`~google.cloud.spanner_v1.instance.Instance` object:
 
 .. code:: python
 
@@ -65,7 +65,7 @@ To create a :class:`~google.cloud.spanner.instance.Instance` object:
 
 - ``configuration_name`` is the name of the instance configuration to which the
   instance will be bound. It must be one of the names configured for your
   project, discoverable via
-  :meth:`~google.cloud.spanner.client.Client.list_instance_configs`.
+  :meth:`~google.cloud.spanner_v1.client.Client.list_instance_configs`.
 
 - ``node_count`` is a postitive integral count of the number of nodes used
   by the instance. More nodes allows for higher performance, but at a higher
 
@@ -87,7 +87,7 @@ Create a new Instance
 ---------------------
 
 After creating the instance object, use its
-:meth:`~google.cloud.spanner.instance.Instance.create` method to
+:meth:`~google.cloud.spanner_v1.instance.Instance.create` method to
 trigger its creation on the server:
 
 .. code:: python
 
@@ -98,7 +98,7 @@ trigger its creation on the server:
 
 .. note::
 
     Creating an instance triggers a "long-running operation" and
-    returns an :class:`google.cloud.spanner.instance.Operation`
+    returns an :class:`google.cloud.spanner_v1.instance.Operation`
     object. See :ref:`check-on-current-instance-operation` for polling
     to find out if the operation is completed.
 
@@ -107,7 +107,7 @@ Refresh metadata for an existing Instance
 -----------------------------------------
 
 After creating the instance object, reload its server-side configuration
-using its :meth:`~google.cloud.spanner.instance.Instance.reload` method:
+using its :meth:`~google.cloud.spanner_v1.instance.Instance.reload` method:
 
 .. code:: python
 
@@ -121,7 +121,7 @@ Update an existing Instance
 ---------------------------
 
 After creating the instance object, you can update its metadata via
-its :meth:`~google.cloud.spanner.instance.Instance.update` method:
+its :meth:`~google.cloud.spanner_v1.instance.Instance.update` method:
 
 .. code:: python
 
@@ -131,7 +131,7 @@ its :meth:`~google.cloud.spanner.instance.Instance.update` method:
 
 .. note::
 
     Update an instance triggers a "long-running operation" and
-    returns a :class:`google.cloud.spanner.instance.Operation`
+    returns a :class:`google.cloud.spanner_v1.instance.Operation`
     object. See :ref:`check-on-current-instance-operation` for polling
     to find out if the operation is completed.
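A minimal sketch of the update flow described above (the display name and node count are illustrative assumptions):

.. code:: python

    instance.display_name = 'Production instance'  # assumed value
    instance.node_count = 3                        # assumed value

    operation = instance.update()
    operation.result()  # block until the long-running operation completes
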
@@ -140,7 +140,7 @@ Delete an existing Instance
 ---------------------------
 
 Delete an instance using its
-:meth:`~google.cloud.spanner.instance.Instance.delete` method:
+:meth:`~google.cloud.spanner_v1.instance.Instance.delete` method:
 
 .. code:: python
 
@@ -152,10 +152,10 @@ Delete an instance using its
 Resolve Current Instance Operation
 ----------------------------------
 
-The :meth:`~google.cloud.spanner.instance.Instance.create` and
-:meth:`~google.cloud.spanner.instance.Instance.update` methods of instance
+The :meth:`~google.cloud.spanner_v1.instance.Instance.create` and
+:meth:`~google.cloud.spanner_v1.instance.Instance.update` methods of instance
 object trigger long-running operations on the server, and return instances
-of the :class:`~google.cloud.spanner.instance.Operation` class.
+of the :class:`~google.cloud.spanner_v1.instance.Operation` class.
 
 If you want to block on the completion of those operations, use the
 ``result`` method on the returned objects:
 
@@ -172,8 +172,8 @@ Next Step
 ---------
 
 Now we go down the hierarchy from
-:class:`~google.cloud.spanner.instance.Instance` to a
-:class:`~google.cloud.spanner.database.Database`.
+:class:`~google.cloud.spanner_v1.instance.Instance` to a
+:class:`~google.cloud.spanner_v1.database.Database`.
 
 Next, learn about the :doc:`database-usage`.
 
diff --git a/docs/multiprocessing.rst b/docs/multiprocessing.rst
new file mode 100644
index 0000000000..536d17b2ea
--- /dev/null
+++ b/docs/multiprocessing.rst
@@ -0,0 +1,7 @@
+.. note::
+
+   Because this client uses the :mod:`grpc` library, it is safe to
+   share instances across threads. In multiprocessing scenarios, the best
+   practice is to create client instances *after* the invocation of
+   :func:`os.fork` by :class:`multiprocessing.pool.Pool` or
+   :class:`multiprocessing.Process`.
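A minimal sketch of that guidance (the pool size and worker logic are assumptions):

.. code:: python

    import multiprocessing

    from google.cloud import spanner


    def worker(instance_id):
        # Create the client *inside* the worker, i.e. after os.fork().
        client = spanner.Client()
        instance = client.instance(instance_id)
        # ... interact with databases on this instance ...


    if __name__ == '__main__':
        with multiprocessing.Pool(processes=4) as pool:
            pool.map(worker, ['test-instance-1', 'test-instance-2'])
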
diff --git a/docs/opentelemetry-tracing.rst b/docs/opentelemetry-tracing.rst
new file mode 100644
index 0000000000..c581d2cb87
--- /dev/null
+++ b/docs/opentelemetry-tracing.rst
@@ -0,0 +1,96 @@
+Tracing with OpenTelemetry
+==========================
+
+This library uses `OpenTelemetry `_ to automatically generate traces providing insight on calls to Cloud Spanner.
+For information on the benefits and utility of tracing, see the `Cloud Trace docs `_.
+
+To take advantage of these traces, we first need to install OpenTelemetry:
+
+.. code-block:: sh
+
+    pip install opentelemetry-api opentelemetry-sdk
+    pip install opentelemetry-exporter-gcp-trace
+
+We also need to tell OpenTelemetry which exporter to use. To export Spanner traces to `Cloud Tracing `_, add the following lines to your application:
+
+.. code:: python
+
+    from google.cloud import spanner
+    from opentelemetry import trace
+    from opentelemetry.sdk.trace import TracerProvider
+    from opentelemetry.sdk.trace.sampling import TraceIdRatioBased
+    from opentelemetry.exporter.cloud_trace import CloudTraceSpanExporter
+    # BatchSpanProcessor exports spans to Cloud Trace
+    # in a separate thread, so it does not block the main thread
+    from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+    # Create and export one trace every 1000 requests
+    sampler = TraceIdRatioBased(1/1000)
+    tracer_provider = TracerProvider(sampler=sampler)
+    tracer_provider.add_span_processor(
+        # Initialize the cloud tracing exporter
+        BatchSpanProcessor(CloudTraceSpanExporter())
+    )
+    observability_options = dict(
+        tracer_provider=tracer_provider,
+
+        # extended_tracing defaults to True for legacy reasons, to avoid
+        # breaking changes. You can override it with the environment
+        # variable SPANNER_ENABLE_EXTENDED_TRACING=false.
+        enable_extended_tracing=False,
+
+        # End-to-end tracing defaults to False. Set it to True to also
+        # receive spans from the Spanner server.
+        enable_end_to_end_tracing=True,
+    )
+    spanner_client = spanner.Client(project_id, observability_options=observability_options)
+
+
+To get more fine-grained traces from gRPC, you can enable gRPC instrumentation as follows:
+
+.. code-block:: sh
+
+    pip install opentelemetry-instrumentation opentelemetry-instrumentation-grpc
+
+and then add the following lines to your Python code:
+
+.. code:: python
+
+    from opentelemetry.instrumentation.grpc import GrpcInstrumentorClient
+    grpc_client_instrumentor = GrpcInstrumentorClient()
+    grpc_client_instrumentor.instrument()
+
+
+Generated Spanner traces should now be available on `Cloud Trace `_.
+
+Tracing is most effective when many libraries are instrumented to provide insight over the entire lifespan of a request.
+For a list of libraries that can be instrumented, see the `OpenTelemetry Integrations` section of the `OpenTelemetry Python docs `_.
+
+Annotating spans with SQL
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+By default, your spans will be annotated with SQL statements where appropriate, but those annotations can leak PII (Personally Identifiable Information).
+Due to legacy behavior, this cannot simply be turned off by default; however, you can disable it globally by setting
+
+    SPANNER_ENABLE_EXTENDED_TRACING=false
+
+or per client by setting ``observability_options.enable_extended_tracing=false`` when creating each Spanner client.
+
+End-to-end tracing
+~~~~~~~~~~~~~~~~~~
+
+In addition to client-side tracing, you can opt in to end-to-end tracing. End-to-end tracing helps you understand and debug latency issues that are specific to Spanner. Refer to the `tracing overview <https://cloud.google.com/spanner/docs/tracing-overview>`_ for more information.
+
+To configure end-to-end tracing:
+
+1. Opt in to end-to-end tracing. You can opt in by either:
+
+   * setting the environment variable ``SPANNER_ENABLE_END_TO_END_TRACING=true`` before your application is started, or
+   * setting ``observability_options.enable_end_to_end_tracing=true`` in code when creating each Spanner client.
+
+2. Set the trace context propagation in OpenTelemetry:
+
+.. code:: python
+
+    from opentelemetry.propagate import set_global_textmap
+    from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator
+    set_global_textmap(TraceContextTextMapPropagator())
\ No newline at end of file
diff --git a/docs/snapshot-usage.rst b/docs/snapshot-usage.rst
index 4c5a5b2420..0f00686a54 100644
--- a/docs/snapshot-usage.rst
+++ b/docs/snapshot-usage.rst
@@ -1,8 +1,8 @@
 Read-only Transactions via Snapshots
 ####################################
 
-A :class:`~google.cloud.spanner.snapshot.Snapshot` represents a read-only
-transaction: when multiple read operations are peformed via a Snapshot,
+A :class:`~google.cloud.spanner_v1.snapshot.Snapshot` represents a read-only
+transaction: when multiple read operations are performed via a Snapshot,
 the results are consistent as of a particular point in time.
 
@@ -15,7 +15,8 @@ transactions are visible:
 
 .. code:: python
 
-    snapshot = database.snapshot()
+    with database.snapshot() as snapshot:
+        ...
 
 You can also specify a weaker bound, which can either be to perform all
 reads as of a given timestamp:
 
@@ -23,9 +24,10 @@ reads as of a given timestamp:
 
 .. 
code:: python import datetime - from pytz import UTC - TIMESTAMP = datetime.datetime.utcnow().replace(tzinfo=UTC) - snapshot = database.snapshot(read_timestamp=TIMESTAMP) + TIMESTAMP = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc) + + with database.snapshot(read_timestamp=TIMESTAMP) as snapshot: + ... or as of a given duration in the past: @@ -33,7 +35,9 @@ or as of a given duration in the past: import datetime DURATION = datetime.timedelta(seconds=5) - snapshot = database.snapshot(exact_staleness=DURATION) + + with database.snapshot(exact_staleness=DURATION) as snapshot: + ... Single Use and Multiple Use Snapshots ------------------------------------- @@ -48,41 +52,43 @@ reused. .. code:: python - snapshot = database.snapshot(multi_use=True) + with database.snapshot(multi_use=True) as snapshot: + ... -:meth:`~.spanner_v1.snapshot.Snapshot.begin` can only be used on a +:meth:`~google.cloud.spanner_v1.snapshot.Snapshot.begin` can only be used on a snapshot with ``multi_use=True``. In which case it is also necessary to call if you need to have multiple pending operations. Read Table Data --------------- -Read data for selected rows from a table in the database. Calls -the ``Read`` API, which returns all rows specified in ``key_set``, or else -fails if the result set is too large, +To read data for selected rows from a table in the database, call +:meth:`~google.cloud.spanner_v1.snapshot.Snapshot.read` which will return +all rows specified in ``keyset``, or fail if the result set is too large, .. code:: python with database.snapshot() as snapshot: result = snapshot.read( table='table-name', columns=['first_name', 'last_name', 'age'], - key_set=['phred@example.com', 'bharney@example.com']) + keyset=spanner.KeySet([['phred@example.com'], ['bharney@example.com']])) - for row in result.rows: + for row in result: print(row) .. note:: - Perform all iteration within the context of the ``with database.snapshot()`` + Perform all iterations within the context of the ``with database.snapshot()`` block. Execute a SQL Select Statement ------------------------------ -Read data from a query against tables in the database. Calls -the ``ExecuteSql`` API, which returns all rows matching the query, or else -fails if the result set is too large, +To read data from tables in the database using a query, call +:meth:`~google.cloud.spanner_v1.snapshot.Snapshot.execute_sql` +which will return all rows matching the query, or fail if the +result set is too large, .. code:: python @@ -93,7 +99,7 @@ fails if the result set is too large, 'WHERE p.employee_id == e.employee_id') result = snapshot.execute_sql(QUERY) - for row in list(result): + for row in result: print(row) .. note:: diff --git a/docs/spanner_admin_database_v1/database_admin.rst b/docs/spanner_admin_database_v1/database_admin.rst new file mode 100644 index 0000000000..bd6aab00e4 --- /dev/null +++ b/docs/spanner_admin_database_v1/database_admin.rst @@ -0,0 +1,10 @@ +DatabaseAdmin +------------------------------- + +.. automodule:: google.cloud.spanner_admin_database_v1.services.database_admin + :members: + :inherited-members: + +.. 
automodule:: google.cloud.spanner_admin_database_v1.services.database_admin.pagers + :members: + :inherited-members: diff --git a/docs/spanner_admin_database_v1/services_.rst b/docs/spanner_admin_database_v1/services_.rst new file mode 100644 index 0000000000..55e57d8dc0 --- /dev/null +++ b/docs/spanner_admin_database_v1/services_.rst @@ -0,0 +1,6 @@ +Services for Google Cloud Spanner Admin Database v1 API +======================================================= +.. toctree:: + :maxdepth: 2 + + database_admin diff --git a/docs/spanner_admin_database_v1/types_.rst b/docs/spanner_admin_database_v1/types_.rst new file mode 100644 index 0000000000..fe6c27778b --- /dev/null +++ b/docs/spanner_admin_database_v1/types_.rst @@ -0,0 +1,6 @@ +Types for Google Cloud Spanner Admin Database v1 API +==================================================== + +.. automodule:: google.cloud.spanner_admin_database_v1.types + :members: + :show-inheritance: diff --git a/docs/spanner_admin_instance_v1/instance_admin.rst b/docs/spanner_admin_instance_v1/instance_admin.rst new file mode 100644 index 0000000000..fe820b3fad --- /dev/null +++ b/docs/spanner_admin_instance_v1/instance_admin.rst @@ -0,0 +1,10 @@ +InstanceAdmin +------------------------------- + +.. automodule:: google.cloud.spanner_admin_instance_v1.services.instance_admin + :members: + :inherited-members: + +.. automodule:: google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers + :members: + :inherited-members: diff --git a/docs/spanner_admin_instance_v1/services_.rst b/docs/spanner_admin_instance_v1/services_.rst new file mode 100644 index 0000000000..407d44cc34 --- /dev/null +++ b/docs/spanner_admin_instance_v1/services_.rst @@ -0,0 +1,6 @@ +Services for Google Cloud Spanner Admin Instance v1 API +======================================================= +.. toctree:: + :maxdepth: 2 + + instance_admin diff --git a/docs/spanner_admin_instance_v1/types_.rst b/docs/spanner_admin_instance_v1/types_.rst new file mode 100644 index 0000000000..250cf6bf9b --- /dev/null +++ b/docs/spanner_admin_instance_v1/types_.rst @@ -0,0 +1,6 @@ +Types for Google Cloud Spanner Admin Instance v1 API +==================================================== + +.. automodule:: google.cloud.spanner_admin_instance_v1.types + :members: + :show-inheritance: diff --git a/docs/batch-api.rst b/docs/spanner_v1/batch.rst similarity index 100% rename from docs/batch-api.rst rename to docs/spanner_v1/batch.rst diff --git a/docs/client-api.rst b/docs/spanner_v1/client.rst similarity index 100% rename from docs/client-api.rst rename to docs/spanner_v1/client.rst diff --git a/docs/database-api.rst b/docs/spanner_v1/database.rst similarity index 100% rename from docs/database-api.rst rename to docs/spanner_v1/database.rst diff --git a/docs/instance-api.rst b/docs/spanner_v1/instance.rst similarity index 100% rename from docs/instance-api.rst rename to docs/spanner_v1/instance.rst diff --git a/docs/keyset-api.rst b/docs/spanner_v1/keyset.rst similarity index 100% rename from docs/keyset-api.rst rename to docs/spanner_v1/keyset.rst diff --git a/docs/spanner_v1/services_.rst b/docs/spanner_v1/services_.rst new file mode 100644 index 0000000000..3bbbb55f79 --- /dev/null +++ b/docs/spanner_v1/services_.rst @@ -0,0 +1,6 @@ +Services for Google Cloud Spanner v1 API +======================================== +.. 
toctree::
+   :maxdepth: 2
+
+   spanner
diff --git a/docs/session-api.rst b/docs/spanner_v1/session.rst
similarity index 100%
rename from docs/session-api.rst
rename to docs/spanner_v1/session.rst
diff --git a/docs/snapshot-api.rst b/docs/spanner_v1/snapshot.rst
similarity index 100%
rename from docs/snapshot-api.rst
rename to docs/spanner_v1/snapshot.rst
diff --git a/docs/spanner_v1/spanner.rst b/docs/spanner_v1/spanner.rst
new file mode 100644
index 0000000000..b51f4447e4
--- /dev/null
+++ b/docs/spanner_v1/spanner.rst
@@ -0,0 +1,10 @@
+Spanner
+-------------------------
+
+.. automodule:: google.cloud.spanner_v1.services.spanner
+    :members:
+    :inherited-members:
+
+.. automodule:: google.cloud.spanner_v1.services.spanner.pagers
+    :members:
+    :inherited-members:
diff --git a/docs/streamed-api.rst b/docs/spanner_v1/streamed.rst
similarity index 100%
rename from docs/streamed-api.rst
rename to docs/spanner_v1/streamed.rst
diff --git a/docs/spanner_v1/table.rst b/docs/spanner_v1/table.rst
new file mode 100644
index 0000000000..86b81dc86e
--- /dev/null
+++ b/docs/spanner_v1/table.rst
@@ -0,0 +1,6 @@
+Table API
+=========
+
+.. automodule:: google.cloud.spanner_v1.table
+    :members:
+    :show-inheritance:
diff --git a/docs/transaction-api.rst b/docs/spanner_v1/transaction.rst
similarity index 100%
rename from docs/transaction-api.rst
rename to docs/spanner_v1/transaction.rst
diff --git a/docs/spanner_v1/types_.rst b/docs/spanner_v1/types_.rst
new file mode 100644
index 0000000000..c7ff7e6c71
--- /dev/null
+++ b/docs/spanner_v1/types_.rst
@@ -0,0 +1,6 @@
+Types for Google Cloud Spanner v1 API
+=====================================
+
+.. automodule:: google.cloud.spanner_v1.types
+    :members:
+    :show-inheritance:
diff --git a/docs/summary_overview.md b/docs/summary_overview.md
new file mode 100644
index 0000000000..ffaf71df07
--- /dev/null
+++ b/docs/summary_overview.md
@@ -0,0 +1,22 @@
+[
+This is a templated file. Adding content to this file may result in it being
+reverted. Instead, if you want to place additional content, create an
+"overview_content.md" file in `docs/` directory. The Sphinx tool will
+pick up on the content and merge the content.
+]: #
+
+# Cloud Spanner API
+
+Overview of the APIs available for Cloud Spanner API.
+
+## All entries
+
+Classes, methods and properties & attributes for
+Cloud Spanner API.
+
+[classes](https://cloud.google.com/python/docs/reference/spanner/latest/summary_class.html)
+
+[methods](https://cloud.google.com/python/docs/reference/spanner/latest/summary_method.html)
+
+[properties and
+attributes](https://cloud.google.com/python/docs/reference/spanner/latest/summary_property.html)
diff --git a/docs/table-usage.rst b/docs/table-usage.rst
new file mode 100644
index 0000000000..01459b5f8e
--- /dev/null
+++ b/docs/table-usage.rst
@@ -0,0 +1,47 @@
+Table Admin Usage
+=================
+
+After creating a :class:`~google.cloud.spanner_v1.database.Database`, you can
+interact with individual tables for that database.
+
+
+List Tables
+-----------
+
+To iterate over all existing tables for a database, use its
+:meth:`~google.cloud.spanner_v1.database.Database.list_tables` method:
+
+.. code:: python
+
+    for table in database.list_tables():
+        # `table` is a `Table` object.
+
+This method yields :class:`~google.cloud.spanner_v1.table.Table` objects.
+
+
+Table Factory
+-------------
+
+A :class:`~google.cloud.spanner_v1.table.Table` object can be created with the
+:meth:`~google.cloud.spanner_v1.database.Database.table` factory method:
+
+.. code:: python
+
+    table = database.table("my_table_id")
+    if table.exists():
+        print("Table with ID 'my_table_id' exists.")
+    else:
+        print("Table with ID 'my_table_id' does not exist.")
+
+
+Getting the Table Schema
+------------------------
+
+Use the :attr:`~google.cloud.spanner_v1.table.Table.schema` property to inspect
+the columns of a table as a list of
+:class:`~google.cloud.spanner_v1.types.StructType.Field` objects.
+
+.. code:: python
+
+    for field in table.schema:
+        # `field` is a `Field` object.
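For example, each :class:`Field` exposes the column name and type; a short sketch (the attribute name ``type_``, with trailing underscore, is assumed from the library's proto-plus naming):

.. code:: python

    table = database.table("my_table_id")
    for field in table.schema:
        # `name` is the column name; `type_` is assumed to be its `Type`.
        print(field.name, field.type_)
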
+.. code:: python
+
+    table = database.table("my_table")
+    if table.exists():
+        print("Table with ID 'my_table' exists.")
+    else:
+        print("Table with ID 'my_table' does not exist.")
+
+
+Getting the Table Schema
+------------------------
+
+Use the :attr:`~google.cloud.spanner_v1.table.Table.schema` property to inspect
+the columns of a table as a list of
+:class:`~google.cloud.spanner_v1.types.StructType.Field` objects.
+
+.. code:: python
+
+    for field in table.schema:
+        # `field` is a `Field` object.
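Putting the pieces of the new Table Admin surface together, a minimal
end-to-end sketch (assuming an existing instance ``test-instance``, database
``test-db``, and table ``my_table``; all identifiers are illustrative):

.. code:: python

    from google.cloud import spanner

    client = spanner.Client()
    instance = client.instance("test-instance")
    database = instance.database("test-db")

    # Enumerate every table in the database.
    for table in database.list_tables():
        print(table.table_id)

    # Look up a single table and, if it exists, inspect its schema.
    table = database.table("my_table")
    if table.exists():
        for field in table.schema:
            # `type_` is the generated (proto-plus) name of the field's type.
            print(field.name, field.type_)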
diff --git a/docs/transaction-usage.rst b/docs/transaction-usage.rst
index e475894939..78026bf5a4 100644
--- a/docs/transaction-usage.rst
+++ b/docs/transaction-usage.rst
@@ -1,11 +1,12 @@
 Read-write Transactions
 #######################
 
-A :class:`~google.cloud.spanner.transaction.Transaction` represents a
+A :class:`~google.cloud.spanner_v1.transaction.Transaction` represents a
 transaction: when the transaction commits, it will send any accumulated
 mutations to the server.
 
-To understand more about how transactions work, visit :ref:`spanner-txn`.
+To understand more about how transactions work, visit
+`Transaction `_.
 To learn more about how to use them in the Python client, continue reading.
@@ -90,8 +91,8 @@ any of the records already exists.
 Update records using a Transaction
 ----------------------------------
 
-:meth:`Transaction.update` updates one or more existing records in a table. Fails
-if any of the records does not already exist.
+:meth:`Transaction.update` updates one or more existing records in a table.
+Fails if any of the records does not already exist.
 
 .. code:: python
 
@@ -178,9 +179,9 @@ Using :meth:`~Database.run_in_transaction`
 Rather than calling :meth:`~Transaction.commit` or
 :meth:`~Transaction.rollback` manually, you should use
 :meth:`~Database.run_in_transaction` to run the
-function that you need. The transaction's :meth:`~Transaction.commit` method
+function that you need. The transaction's :meth:`~Transaction.commit` method
 will be called automatically if the ``with`` block exits without raising an
-exception. The function will automatically be retried for
+exception. The function will automatically be retried for
 :class:`~google.api_core.exceptions.Aborted` errors, but will raise on
 :class:`~google.api_core.exceptions.GoogleAPICallError` and
 :meth:`~Transaction.rollback` will be called on all others.
@@ -188,25 +189,30 @@ exception. The function will automatically be retried for
 .. code:: python
 
     def _unit_of_work(transaction):
-        transaction.insert(
-            'citizens', columns=['email', 'first_name', 'last_name', 'age'],
+        transaction.insert(
+            'citizens',
+            columns=['email', 'first_name', 'last_name', 'age'],
             values=[
                 ['phred@exammple.com', 'Phred', 'Phlyntstone', 32],
                 ['bharney@example.com', 'Bharney', 'Rhubble', 31],
-            ])
+            ]
+        )
 
         transaction.update(
-            'citizens', columns=['email', 'age'],
+            'citizens',
+            columns=['email', 'age'],
             values=[
                 ['phred@exammple.com', 33],
                 ['bharney@example.com', 32],
-            ])
+            ]
+        )
 
         ...
 
-        transaction.delete('citizens',
-            keyset['bharney@example.com', 'nonesuch@example.com'])
+        transaction.delete(
+            'citizens',
+            keyset=['bharney@example.com', 'nonesuch@example.com']
+        )
 
     db.run_in_transaction(_unit_of_work)
@@ -242,7 +248,7 @@ If an exception is raised inside the ``with`` block, the transaction's
     ...
 
        transaction.delete('citizens',
-           keyset['bharney@example.com', 'nonesuch@example.com'])
+           keyset=['bharney@example.com', 'nonesuch@example.com'])
 
 
 Begin a Transaction
diff --git a/examples/grpc_instrumentation_enabled.py b/examples/grpc_instrumentation_enabled.py
new file mode 100644
index 0000000000..c8bccd0a9d
--- /dev/null
+++ b/examples/grpc_instrumentation_enabled.py
@@ -0,0 +1,73 @@
+# -*- coding: utf-8 -*-
+# Copyright 2024 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License
+
+import os
+import time
+
+import google.cloud.spanner as spanner
+from opentelemetry.exporter.cloud_trace import CloudTraceSpanExporter
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import BatchSpanProcessor
+from opentelemetry.sdk.trace.sampling import ALWAYS_ON
+from opentelemetry import trace
+
+# Enable the gRPC instrumentation if you'd like more introspection.
+from opentelemetry.instrumentation.grpc import GrpcInstrumentorClient
+
+grpc_client_instrumentor = GrpcInstrumentorClient()
+grpc_client_instrumentor.instrument()
+
+
+def main():
+    # Setup common variables that'll be used between Spanner and traces.
+    project_id = os.environ.get('SPANNER_PROJECT_ID', 'test-project')
+
+    # Setup OpenTelemetry, trace and Cloud Trace exporter.
+    tracer_provider = TracerProvider(sampler=ALWAYS_ON)
+    trace_exporter = CloudTraceSpanExporter(project_id=project_id)
+    tracer_provider.add_span_processor(BatchSpanProcessor(trace_exporter))
+    trace.set_tracer_provider(tracer_provider)
+    # Retrieve a tracer from the global tracer provider.
+    tracer = tracer_provider.get_tracer('MyApp')
+
+    # Setup the Cloud Spanner Client.
+    spanner_client = spanner.Client(project_id)
+
+    instance = spanner_client.instance('test-instance')
+    database = instance.database('test-db')
+
+    # Now run our queries
+    with tracer.start_as_current_span('QueryInformationSchema'):
+        with database.snapshot() as snapshot:
+            with tracer.start_as_current_span('InformationSchema'):
+                info_schema = snapshot.execute_sql(
+                    'SELECT * FROM INFORMATION_SCHEMA.TABLES')
+                for row in info_schema:
+                    print(row)
+
+        with tracer.start_as_current_span('ServerTimeQuery'):
+            with database.snapshot() as snapshot:
+                # Purposefully issue a bad SQL statement to examine exceptions
+                # that get recorded and an ERROR span status.
+                try:
+                    data = snapshot.execute_sql('SELECT CURRENT_TIMESTAMPx()')
+                    for row in data:
+                        print(row)
+                except Exception:
+                    pass
+
+
+if __name__ == '__main__':
+    main()
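Both tracing examples export spans to Cloud Trace. To smoke-test the
instrumentation without a Google Cloud project, a minimal variant (a sketch
assuming only the ``opentelemetry-sdk`` package is installed) can print spans
to stdout instead:

.. code:: python

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

    # Print each span to stdout as soon as it ends, instead of
    # batching and exporting to Cloud Trace.
    tracer_provider = TracerProvider()
    tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(tracer_provider)

    tracer = tracer_provider.get_tracer('MyApp')
    with tracer.start_as_current_span('SmokeTest'):
        pass  # the span prints when this block exits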
diff --git a/examples/trace.py b/examples/trace.py
new file mode 100644
index 0000000000..5b826ca5ad
--- /dev/null
+++ b/examples/trace.py
@@ -0,0 +1,104 @@
+# -*- coding: utf-8 -*-
+# Copyright 2024 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License
+
+import os
+import time
+
+import google.cloud.spanner as spanner
+from opentelemetry.exporter.cloud_trace import CloudTraceSpanExporter
+from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import BatchSpanProcessor
+from opentelemetry.sdk.trace.sampling import ALWAYS_ON
+from opentelemetry import trace
+from opentelemetry.propagate import set_global_textmap
+from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator
+
+# Setup common variables that'll be used between Spanner and traces.
+project_id = os.environ.get('SPANNER_PROJECT_ID', 'test-project')
+
+def spanner_with_cloud_trace():
+    # [START spanner_opentelemetry_traces_cloudtrace_usage]
+    # Setup OpenTelemetry, trace and Cloud Trace exporter.
+    tracer_provider = TracerProvider(sampler=ALWAYS_ON)
+    trace_exporter = CloudTraceSpanExporter(project_id=project_id)
+    tracer_provider.add_span_processor(BatchSpanProcessor(trace_exporter))
+
+    # Setup the Cloud Spanner Client.
+    spanner_client = spanner.Client(
+        project_id,
+        observability_options=dict(tracer_provider=tracer_provider, enable_extended_tracing=True, enable_end_to_end_tracing=True),
+    )
+
+    # [END spanner_opentelemetry_traces_cloudtrace_usage]
+    return spanner_client
+
+def spanner_with_otlp():
+    # [START spanner_opentelemetry_traces_otlp_usage]
+    # Setup OpenTelemetry, trace and OTLP exporter.
+    tracer_provider = TracerProvider(sampler=ALWAYS_ON)
+    otlp_exporter = OTLPSpanExporter(endpoint="http://localhost:4317")
+    tracer_provider.add_span_processor(BatchSpanProcessor(otlp_exporter))
+
+    # Setup the Cloud Spanner Client.
+    spanner_client = spanner.Client(
+        project_id,
+        observability_options=dict(tracer_provider=tracer_provider, enable_extended_tracing=True, enable_end_to_end_tracing=True),
+    )
+    # [END spanner_opentelemetry_traces_otlp_usage]
+    return spanner_client
+
+
+def main():
+    # Setup OpenTelemetry, trace and Cloud Trace exporter.
+    tracer_provider = TracerProvider(sampler=ALWAYS_ON)
+    trace_exporter = CloudTraceSpanExporter(project_id=project_id)
+    tracer_provider.add_span_processor(BatchSpanProcessor(trace_exporter))
+
+    # Setup the Cloud Spanner Client.
+    # Change to "spanner_client = spanner_with_otlp()" to use the OTLP exporter.
+    spanner_client = spanner_with_cloud_trace()
+    instance = spanner_client.instance('test-instance')
+    database = instance.database('test-db')
+
+    # Set W3C Trace Context as the global propagator for end to end tracing.
+    set_global_textmap(TraceContextTextMapPropagator())
+
+    # Retrieve a tracer from our custom tracer provider.
+    tracer = tracer_provider.get_tracer('MyApp')
+
+    # Now run our queries
+    with tracer.start_as_current_span('QueryInformationSchema'):
+        with database.snapshot() as snapshot:
+            with tracer.start_as_current_span('InformationSchema'):
+                info_schema = snapshot.execute_sql(
+                    'SELECT * FROM INFORMATION_SCHEMA.TABLES')
+                for row in info_schema:
+                    print(row)
+
+        with tracer.start_as_current_span('ServerTimeQuery'):
+            with database.snapshot() as snapshot:
+                # Purposefully issue a bad SQL statement to examine exceptions
+                # that get recorded and an ERROR span status.
+                try:
+                    data = snapshot.execute_sql('SELECT CURRENT_TIMESTAMPx()')
+                    for row in data:
+                        print(row)
+                except Exception as e:
+                    print(e)
+
+
+if __name__ == '__main__':
+    main()
diff --git a/google/__init__.py b/google/__init__.py
deleted file mode 100644
index 2f4b4738ae..0000000000
--- a/google/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-try:
-    import pkg_resources
-
-    pkg_resources.declare_namespace(__name__)
-except ImportError:
-    import pkgutil
-
-    __path__ = pkgutil.extend_path(__path__, __name__)
diff --git a/google/cloud/__init__.py b/google/cloud/__init__.py
deleted file mode 100644
index 2f4b4738ae..0000000000
--- a/google/cloud/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-try:
-    import pkg_resources
-
-    pkg_resources.declare_namespace(__name__)
-except ImportError:
-    import pkgutil
-
-    __path__ = pkgutil.extend_path(__path__, __name__)
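The two ``__init__.py`` files removed above existed only to declare ``google``
and ``google.cloud`` as ``pkg_resources``-style namespace packages; presumably
(the diff does not say) they can be dropped because modern packaging treats
these as implicit namespace packages under PEP 420, which need no
``__init__.py`` at all. A quick way to confirm the namespace still resolves
after the removal (a sketch):

.. code:: python

    import google.cloud.spanner_v1 as spanner_v1

    # With PEP 420 implicit namespaces, `google` and `google.cloud` have no
    # __init__.py on disk, yet importing the subpackage still works.
    print(spanner_v1.__name__)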
-"""Cloud Spanner API package.""" - from __future__ import absolute_import from google.cloud.spanner_v1 import __version__ -from google.cloud.spanner_v1 import AbstractSessionPool -from google.cloud.spanner_v1 import BurstyPool +from google.cloud.spanner_v1 import param_types from google.cloud.spanner_v1 import Client -from google.cloud.spanner_v1 import COMMIT_TIMESTAMP -from google.cloud.spanner_v1 import enums -from google.cloud.spanner_v1 import FixedSizePool from google.cloud.spanner_v1 import KeyRange from google.cloud.spanner_v1 import KeySet -from google.cloud.spanner_v1 import param_types +from google.cloud.spanner_v1 import AbstractSessionPool +from google.cloud.spanner_v1 import BurstyPool +from google.cloud.spanner_v1 import FixedSizePool from google.cloud.spanner_v1 import PingingPool from google.cloud.spanner_v1 import TransactionPingingPool -from google.cloud.spanner_v1 import types +from google.cloud.spanner_v1 import COMMIT_TIMESTAMP __all__ = ( + # google.cloud.spanner "__version__", - "AbstractSessionPool", - "BurstyPool", + "param_types", + # google.cloud.spanner_v1.client "Client", - "COMMIT_TIMESTAMP", - "enums", - "FixedSizePool", + # google.cloud.spanner_v1.keyset "KeyRange", "KeySet", - "param_types", + # google.cloud.spanner_v1.pool + "AbstractSessionPool", + "BurstyPool", + "FixedSizePool", "PingingPool", "TransactionPingingPool", - "types", + # local + "COMMIT_TIMESTAMP", ) diff --git a/google/cloud/spanner_admin_database_v1/__init__.py b/google/cloud/spanner_admin_database_v1/__init__.py index 3a5b42403c..d7fddf0236 100644 --- a/google/cloud/spanner_admin_database_v1/__init__.py +++ b/google/cloud/spanner_admin_database_v1/__init__.py @@ -1,29 +1,150 @@ # -*- coding: utf-8 -*- -# -# Copyright 2018 Google LLC +# Copyright 2025 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # -# https://www.apache.org/licenses/LICENSE-2.0 +# http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
+# +from google.cloud.spanner_admin_database_v1 import gapic_version as package_version -from __future__ import absolute_import - -from google.cloud.spanner_admin_database_v1 import types -from google.cloud.spanner_admin_database_v1.gapic import database_admin_client -from google.cloud.spanner_admin_database_v1.gapic import enums +__version__ = package_version.__version__ -class DatabaseAdminClient(database_admin_client.DatabaseAdminClient): - __doc__ = database_admin_client.DatabaseAdminClient.__doc__ - enums = enums +from .services.database_admin import DatabaseAdminClient +from .services.database_admin import DatabaseAdminAsyncClient +from .types.backup import Backup +from .types.backup import BackupInfo +from .types.backup import BackupInstancePartition +from .types.backup import CopyBackupEncryptionConfig +from .types.backup import CopyBackupMetadata +from .types.backup import CopyBackupRequest +from .types.backup import CreateBackupEncryptionConfig +from .types.backup import CreateBackupMetadata +from .types.backup import CreateBackupRequest +from .types.backup import DeleteBackupRequest +from .types.backup import FullBackupSpec +from .types.backup import GetBackupRequest +from .types.backup import IncrementalBackupSpec +from .types.backup import ListBackupOperationsRequest +from .types.backup import ListBackupOperationsResponse +from .types.backup import ListBackupsRequest +from .types.backup import ListBackupsResponse +from .types.backup import UpdateBackupRequest +from .types.backup_schedule import BackupSchedule +from .types.backup_schedule import BackupScheduleSpec +from .types.backup_schedule import CreateBackupScheduleRequest +from .types.backup_schedule import CrontabSpec +from .types.backup_schedule import DeleteBackupScheduleRequest +from .types.backup_schedule import GetBackupScheduleRequest +from .types.backup_schedule import ListBackupSchedulesRequest +from .types.backup_schedule import ListBackupSchedulesResponse +from .types.backup_schedule import UpdateBackupScheduleRequest +from .types.common import EncryptionConfig +from .types.common import EncryptionInfo +from .types.common import OperationProgress +from .types.common import DatabaseDialect +from .types.spanner_database_admin import AddSplitPointsRequest +from .types.spanner_database_admin import AddSplitPointsResponse +from .types.spanner_database_admin import CreateDatabaseMetadata +from .types.spanner_database_admin import CreateDatabaseRequest +from .types.spanner_database_admin import Database +from .types.spanner_database_admin import DatabaseRole +from .types.spanner_database_admin import DdlStatementActionInfo +from .types.spanner_database_admin import DropDatabaseRequest +from .types.spanner_database_admin import GetDatabaseDdlRequest +from .types.spanner_database_admin import GetDatabaseDdlResponse +from .types.spanner_database_admin import GetDatabaseRequest +from .types.spanner_database_admin import InternalUpdateGraphOperationRequest +from .types.spanner_database_admin import InternalUpdateGraphOperationResponse +from .types.spanner_database_admin import ListDatabaseOperationsRequest +from .types.spanner_database_admin import ListDatabaseOperationsResponse +from .types.spanner_database_admin import ListDatabaseRolesRequest +from .types.spanner_database_admin import ListDatabaseRolesResponse +from .types.spanner_database_admin import ListDatabasesRequest +from .types.spanner_database_admin import ListDatabasesResponse +from .types.spanner_database_admin import OptimizeRestoredDatabaseMetadata +from 
.types.spanner_database_admin import RestoreDatabaseEncryptionConfig +from .types.spanner_database_admin import RestoreDatabaseMetadata +from .types.spanner_database_admin import RestoreDatabaseRequest +from .types.spanner_database_admin import RestoreInfo +from .types.spanner_database_admin import SplitPoints +from .types.spanner_database_admin import UpdateDatabaseDdlMetadata +from .types.spanner_database_admin import UpdateDatabaseDdlRequest +from .types.spanner_database_admin import UpdateDatabaseMetadata +from .types.spanner_database_admin import UpdateDatabaseRequest +from .types.spanner_database_admin import RestoreSourceType -__all__ = ("enums", "types", "DatabaseAdminClient") +__all__ = ( + "DatabaseAdminAsyncClient", + "AddSplitPointsRequest", + "AddSplitPointsResponse", + "Backup", + "BackupInfo", + "BackupInstancePartition", + "BackupSchedule", + "BackupScheduleSpec", + "CopyBackupEncryptionConfig", + "CopyBackupMetadata", + "CopyBackupRequest", + "CreateBackupEncryptionConfig", + "CreateBackupMetadata", + "CreateBackupRequest", + "CreateBackupScheduleRequest", + "CreateDatabaseMetadata", + "CreateDatabaseRequest", + "CrontabSpec", + "Database", + "DatabaseAdminClient", + "DatabaseDialect", + "DatabaseRole", + "DdlStatementActionInfo", + "DeleteBackupRequest", + "DeleteBackupScheduleRequest", + "DropDatabaseRequest", + "EncryptionConfig", + "EncryptionInfo", + "FullBackupSpec", + "GetBackupRequest", + "GetBackupScheduleRequest", + "GetDatabaseDdlRequest", + "GetDatabaseDdlResponse", + "GetDatabaseRequest", + "IncrementalBackupSpec", + "InternalUpdateGraphOperationRequest", + "InternalUpdateGraphOperationResponse", + "ListBackupOperationsRequest", + "ListBackupOperationsResponse", + "ListBackupSchedulesRequest", + "ListBackupSchedulesResponse", + "ListBackupsRequest", + "ListBackupsResponse", + "ListDatabaseOperationsRequest", + "ListDatabaseOperationsResponse", + "ListDatabaseRolesRequest", + "ListDatabaseRolesResponse", + "ListDatabasesRequest", + "ListDatabasesResponse", + "OperationProgress", + "OptimizeRestoredDatabaseMetadata", + "RestoreDatabaseEncryptionConfig", + "RestoreDatabaseMetadata", + "RestoreDatabaseRequest", + "RestoreInfo", + "RestoreSourceType", + "SplitPoints", + "UpdateBackupRequest", + "UpdateBackupScheduleRequest", + "UpdateDatabaseDdlMetadata", + "UpdateDatabaseDdlRequest", + "UpdateDatabaseMetadata", + "UpdateDatabaseRequest", +) diff --git a/google/cloud/spanner_admin_database_v1/gapic/database_admin_client.py b/google/cloud/spanner_admin_database_v1/gapic/database_admin_client.py deleted file mode 100644 index f41559acc1..0000000000 --- a/google/cloud/spanner_admin_database_v1/gapic/database_admin_client.py +++ /dev/null @@ -1,1923 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
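The hand-written GAPIC client whose deletion begins here is superseded by the
generated ``DatabaseAdminClient`` exported above. A minimal sketch of the new
request-object calling convention (the resource names are placeholders):

.. code:: python

    from google.cloud.spanner_admin_database_v1 import (
        DatabaseAdminClient,
        ListDatabasesRequest,
    )

    client = DatabaseAdminClient()
    request = ListDatabasesRequest(
        parent="projects/my-project/instances/my-instance",
    )

    # list_databases returns a pager that transparently fetches further pages.
    for database in client.list_databases(request=request):
        print(database.name)

The generated package also exports an async variant,
``DatabaseAdminAsyncClient``, with the same surface.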
- -"""Accesses the google.spanner.admin.database.v1 DatabaseAdmin API.""" - -import functools -import pkg_resources -import warnings - -from google.oauth2 import service_account -import google.api_core.client_options -import google.api_core.gapic_v1.client_info -import google.api_core.gapic_v1.config -import google.api_core.gapic_v1.method -import google.api_core.gapic_v1.routing_header -import google.api_core.grpc_helpers -import google.api_core.operation -import google.api_core.operations_v1 -import google.api_core.page_iterator -import google.api_core.path_template -import google.api_core.protobuf_helpers -import grpc - -from google.cloud.spanner_admin_database_v1.gapic import database_admin_client_config -from google.cloud.spanner_admin_database_v1.gapic import enums -from google.cloud.spanner_admin_database_v1.gapic.transports import ( - database_admin_grpc_transport, -) -from google.cloud.spanner_admin_database_v1.proto import backup_pb2 -from google.cloud.spanner_admin_database_v1.proto import spanner_database_admin_pb2 -from google.cloud.spanner_admin_database_v1.proto import spanner_database_admin_pb2_grpc -from google.iam.v1 import iam_policy_pb2 -from google.iam.v1 import options_pb2 -from google.iam.v1 import policy_pb2 -from google.longrunning import operations_pb2 -from google.protobuf import empty_pb2 -from google.protobuf import field_mask_pb2 - - -_GAPIC_LIBRARY_VERSION = pkg_resources.get_distribution("google-cloud-spanner").version - - -class DatabaseAdminClient(object): - """ - Cloud Spanner Database Admin API - - The Cloud Spanner Database Admin API can be used to create, drop, and - list databases. It also enables updating the schema of pre-existing - databases. It can be also used to create, delete and list backups for a - database and to restore from an existing backup. - """ - - SERVICE_ADDRESS = "spanner.googleapis.com:443" - """The default address of the service.""" - - # The name of the interface for this client. This is the key used to - # find the method configuration in the client_config dictionary. - _INTERFACE_NAME = "google.spanner.admin.database.v1.DatabaseAdmin" - - @classmethod - def from_service_account_file(cls, filename, *args, **kwargs): - """Creates an instance of this client using the provided credentials - file. - - Args: - filename (str): The path to the service account private key json - file. - args: Additional arguments to pass to the constructor. - kwargs: Additional arguments to pass to the constructor. - - Returns: - DatabaseAdminClient: The constructed client. 
- """ - credentials = service_account.Credentials.from_service_account_file(filename) - kwargs["credentials"] = credentials - return cls(*args, **kwargs) - - from_service_account_json = from_service_account_file - - @classmethod - def backup_path(cls, project, instance, backup): - """Return a fully-qualified backup string.""" - return google.api_core.path_template.expand( - "projects/{project}/instances/{instance}/backups/{backup}", - project=project, - instance=instance, - backup=backup, - ) - - @classmethod - def database_path(cls, project, instance, database): - """Return a fully-qualified database string.""" - return google.api_core.path_template.expand( - "projects/{project}/instances/{instance}/databases/{database}", - project=project, - instance=instance, - database=database, - ) - - @classmethod - def instance_path(cls, project, instance): - """Return a fully-qualified instance string.""" - return google.api_core.path_template.expand( - "projects/{project}/instances/{instance}", - project=project, - instance=instance, - ) - - def __init__( - self, - transport=None, - channel=None, - credentials=None, - client_config=None, - client_info=None, - client_options=None, - ): - """Constructor. - - Args: - transport (Union[~.DatabaseAdminGrpcTransport, - Callable[[~.Credentials, type], ~.DatabaseAdminGrpcTransport]): A transport - instance, responsible for actually making the API calls. - The default transport uses the gRPC protocol. - This argument may also be a callable which returns a - transport instance. Callables will be sent the credentials - as the first argument and the default transport class as - the second argument. - channel (grpc.Channel): DEPRECATED. A ``Channel`` instance - through which to make calls. This argument is mutually exclusive - with ``credentials``; providing both will raise an exception. - credentials (google.auth.credentials.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If none - are specified, the client will attempt to ascertain the - credentials from the environment. - This argument is mutually exclusive with providing a - transport instance to ``transport``; doing so will raise - an exception. - client_config (dict): DEPRECATED. A dictionary of call options for - each method. If not specified, the default configuration is used. - client_info (google.api_core.gapic_v1.client_info.ClientInfo): - The client info used to send a user-agent string along with - API requests. If ``None``, then default info will be used. - Generally, you only need to set this if you're developing - your own client library. - client_options (Union[dict, google.api_core.client_options.ClientOptions]): - Client options used to set user options on the client. API Endpoint - should be set through client_options. - """ - # Raise deprecation warnings for things we want to go away. 
- if client_config is not None: - warnings.warn( - "The `client_config` argument is deprecated.", - PendingDeprecationWarning, - stacklevel=2, - ) - else: - client_config = database_admin_client_config.config - - if channel: - warnings.warn( - "The `channel` argument is deprecated; use " "`transport` instead.", - PendingDeprecationWarning, - stacklevel=2, - ) - - api_endpoint = self.SERVICE_ADDRESS - if client_options: - if type(client_options) == dict: - client_options = google.api_core.client_options.from_dict( - client_options - ) - if client_options.api_endpoint: - api_endpoint = client_options.api_endpoint - - # Instantiate the transport. - # The transport is responsible for handling serialization and - # deserialization and actually sending data to the service. - if transport: - if callable(transport): - self.transport = transport( - credentials=credentials, - default_class=database_admin_grpc_transport.DatabaseAdminGrpcTransport, - address=api_endpoint, - ) - else: - if credentials: - raise ValueError( - "Received both a transport instance and " - "credentials; these are mutually exclusive." - ) - self.transport = transport - else: - self.transport = database_admin_grpc_transport.DatabaseAdminGrpcTransport( - address=api_endpoint, channel=channel, credentials=credentials - ) - - if client_info is None: - client_info = google.api_core.gapic_v1.client_info.ClientInfo( - gapic_version=_GAPIC_LIBRARY_VERSION - ) - else: - client_info.gapic_version = _GAPIC_LIBRARY_VERSION - self._client_info = client_info - - # Parse out the default settings for retry and timeout for each RPC - # from the client configuration. - # (Ordinarily, these are the defaults specified in the `*_config.py` - # file next to this one.) - self._method_configs = google.api_core.gapic_v1.config.parse_method_configs( - client_config["interfaces"][self._INTERFACE_NAME] - ) - - # Save a dictionary of cached API call functions. - # These are the actual callables which invoke the proper - # transport methods, wrapped with `wrap_method` to add retry, - # timeout, and the like. - self._inner_api_calls = {} - - # Service calls - def create_database( - self, - parent, - create_statement, - extra_statements=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Creates a new Cloud Spanner database and starts to prepare it for - serving. The returned ``long-running operation`` will have a name of the - format ``/operations/`` and can be used to - track preparation of the database. The ``metadata`` field type is - ``CreateDatabaseMetadata``. The ``response`` field type is ``Database``, - if successful. - - Example: - >>> from google.cloud import spanner_admin_database_v1 - >>> - >>> client = spanner_admin_database_v1.DatabaseAdminClient() - >>> - >>> parent = client.instance_path('[PROJECT]', '[INSTANCE]') - >>> - >>> # TODO: Initialize `create_statement`: - >>> create_statement = '' - >>> - >>> response = client.create_database(parent, create_statement) - >>> - >>> def callback(operation_future): - ... # Handle result. - ... result = operation_future.result() - >>> - >>> response.add_done_callback(callback) - >>> - >>> # Handle metadata. - >>> metadata = response.metadata() - - Args: - parent (str): Required. The name of the instance that will serve the new database. - Values are of the form ``projects//instances/``. - create_statement (str): Required. A ``CREATE DATABASE`` statement, which specifies the ID of the - new database. 
The database ID must conform to the regular expression - ``[a-z][a-z0-9_\-]*[a-z0-9]`` and be between 2 and 30 characters in - length. If the database ID is a reserved word or if it contains a - hyphen, the database ID must be enclosed in backticks (`````). - extra_statements (list[str]): Optional. A list of DDL statements to run inside the newly created - database. Statements can create tables, indexes, etc. These - statements execute atomically with the creation of the database: - if there is an error in any statement, the database is not created. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.api_core.operation.Operation` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "create_database" not in self._inner_api_calls: - self._inner_api_calls[ - "create_database" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.create_database, - default_retry=self._method_configs["CreateDatabase"].retry, - default_timeout=self._method_configs["CreateDatabase"].timeout, - client_info=self._client_info, - ) - - request = spanner_database_admin_pb2.CreateDatabaseRequest( - parent=parent, - create_statement=create_statement, - extra_statements=extra_statements, - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("parent", parent)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - operation = self._inner_api_calls["create_database"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - return google.api_core.operation.from_gapic( - operation, - self.transport._operations_client, - spanner_database_admin_pb2.Database, - metadata_type=spanner_database_admin_pb2.CreateDatabaseMetadata, - ) - - def get_database( - self, - name, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Gets the state of a Cloud Spanner database. - - Example: - >>> from google.cloud import spanner_admin_database_v1 - >>> - >>> client = spanner_admin_database_v1.DatabaseAdminClient() - >>> - >>> name = client.database_path('[PROJECT]', '[INSTANCE]', '[DATABASE]') - >>> - >>> response = client.get_database(name) - - Args: - name (str): Required. The name of the requested database. Values are of the form - ``projects//instances//databases/``. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. 
- metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_admin_database_v1.types.Database` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "get_database" not in self._inner_api_calls: - self._inner_api_calls[ - "get_database" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.get_database, - default_retry=self._method_configs["GetDatabase"].retry, - default_timeout=self._method_configs["GetDatabase"].timeout, - client_info=self._client_info, - ) - - request = spanner_database_admin_pb2.GetDatabaseRequest(name=name) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("name", name)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["get_database"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def update_database_ddl( - self, - database, - statements, - operation_id=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Updates the schema of a Cloud Spanner database by - creating/altering/dropping tables, columns, indexes, etc. The returned - ``long-running operation`` will have a name of the format - ``/operations/`` and can be used to track - execution of the schema change(s). The ``metadata`` field type is - ``UpdateDatabaseDdlMetadata``. The operation has no response. - - Example: - >>> from google.cloud import spanner_admin_database_v1 - >>> - >>> client = spanner_admin_database_v1.DatabaseAdminClient() - >>> - >>> database = client.database_path('[PROJECT]', '[INSTANCE]', '[DATABASE]') - >>> - >>> # TODO: Initialize `statements`: - >>> statements = [] - >>> - >>> response = client.update_database_ddl(database, statements) - >>> - >>> def callback(operation_future): - ... # Handle result. - ... result = operation_future.result() - >>> - >>> response.add_done_callback(callback) - >>> - >>> # Handle metadata. - >>> metadata = response.metadata() - - Args: - database (str): Required. The database to update. - statements (list[str]): Required. DDL statements to be applied to the database. - operation_id (str): If empty, the new update request is assigned an automatically-generated - operation ID. Otherwise, ``operation_id`` is used to construct the name - of the resulting ``Operation``. - - Specifying an explicit operation ID simplifies determining whether the - statements were executed in the event that the ``UpdateDatabaseDdl`` - call is replayed, or the return value is otherwise lost: the - ``database`` and ``operation_id`` fields can be combined to form the - ``name`` of the resulting ``longrunning.Operation``: - ``/operations/``. - - ``operation_id`` should be unique within the database, and must be a - valid identifier: ``[a-z][a-z0-9_]*``. Note that automatically-generated - operation IDs always begin with an underscore. If the named operation - already exists, ``UpdateDatabaseDdl`` returns ``ALREADY_EXISTS``. 
- retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.api_core.operation.Operation` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "update_database_ddl" not in self._inner_api_calls: - self._inner_api_calls[ - "update_database_ddl" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.update_database_ddl, - default_retry=self._method_configs["UpdateDatabaseDdl"].retry, - default_timeout=self._method_configs["UpdateDatabaseDdl"].timeout, - client_info=self._client_info, - ) - - request = spanner_database_admin_pb2.UpdateDatabaseDdlRequest( - database=database, statements=statements, operation_id=operation_id - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("database", database)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - operation = self._inner_api_calls["update_database_ddl"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - return google.api_core.operation.from_gapic( - operation, - self.transport._operations_client, - empty_pb2.Empty, - metadata_type=spanner_database_admin_pb2.UpdateDatabaseDdlMetadata, - ) - - def drop_database( - self, - database, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Drops (aka deletes) a Cloud Spanner database. Completed backups for the - database will be retained according to their ``expire_time``. - - Example: - >>> from google.cloud import spanner_admin_database_v1 - >>> - >>> client = spanner_admin_database_v1.DatabaseAdminClient() - >>> - >>> database = client.database_path('[PROJECT]', '[INSTANCE]', '[DATABASE]') - >>> - >>> client.drop_database(database) - - Args: - database (str): Required. The database to be dropped. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. 
- if "drop_database" not in self._inner_api_calls: - self._inner_api_calls[ - "drop_database" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.drop_database, - default_retry=self._method_configs["DropDatabase"].retry, - default_timeout=self._method_configs["DropDatabase"].timeout, - client_info=self._client_info, - ) - - request = spanner_database_admin_pb2.DropDatabaseRequest(database=database) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("database", database)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - self._inner_api_calls["drop_database"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def get_database_ddl( - self, - database, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Returns the schema of a Cloud Spanner database as a list of formatted - DDL statements. This method does not show pending schema updates, those - may be queried using the ``Operations`` API. - - Example: - >>> from google.cloud import spanner_admin_database_v1 - >>> - >>> client = spanner_admin_database_v1.DatabaseAdminClient() - >>> - >>> database = client.database_path('[PROJECT]', '[INSTANCE]', '[DATABASE]') - >>> - >>> response = client.get_database_ddl(database) - - Args: - database (str): Required. The database whose schema we wish to get. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_admin_database_v1.types.GetDatabaseDdlResponse` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "get_database_ddl" not in self._inner_api_calls: - self._inner_api_calls[ - "get_database_ddl" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.get_database_ddl, - default_retry=self._method_configs["GetDatabaseDdl"].retry, - default_timeout=self._method_configs["GetDatabaseDdl"].timeout, - client_info=self._client_info, - ) - - request = spanner_database_admin_pb2.GetDatabaseDdlRequest(database=database) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("database", database)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["get_database_ddl"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def set_iam_policy( - self, - resource, - policy, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Sets the access control policy on a database or backup resource. 
- Replaces any existing policy. - - Authorization requires ``spanner.databases.setIamPolicy`` permission on - ``resource``. For backups, authorization requires - ``spanner.backups.setIamPolicy`` permission on ``resource``. - - Example: - >>> from google.cloud import spanner_admin_database_v1 - >>> - >>> client = spanner_admin_database_v1.DatabaseAdminClient() - >>> - >>> # TODO: Initialize `resource`: - >>> resource = '' - >>> - >>> # TODO: Initialize `policy`: - >>> policy = {} - >>> - >>> response = client.set_iam_policy(resource, policy) - - Args: - resource (str): REQUIRED: The resource for which the policy is being specified. - See the operation documentation for the appropriate value for this field. - policy (Union[dict, ~google.cloud.spanner_admin_database_v1.types.Policy]): REQUIRED: The complete policy to be applied to the ``resource``. The - size of the policy is limited to a few 10s of KB. An empty policy is a - valid policy but certain Cloud Platform services (such as Projects) - might reject them. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_admin_database_v1.types.Policy` - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_admin_database_v1.types.Policy` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "set_iam_policy" not in self._inner_api_calls: - self._inner_api_calls[ - "set_iam_policy" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.set_iam_policy, - default_retry=self._method_configs["SetIamPolicy"].retry, - default_timeout=self._method_configs["SetIamPolicy"].timeout, - client_info=self._client_info, - ) - - request = iam_policy_pb2.SetIamPolicyRequest(resource=resource, policy=policy) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("resource", resource)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["set_iam_policy"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def get_iam_policy( - self, - resource, - options_=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Gets the access control policy for a database or backup resource. - Returns an empty policy if a database or backup exists but does not have - a policy set. - - Authorization requires ``spanner.databases.getIamPolicy`` permission on - ``resource``. For backups, authorization requires - ``spanner.backups.getIamPolicy`` permission on ``resource``. 
- - Example: - >>> from google.cloud import spanner_admin_database_v1 - >>> - >>> client = spanner_admin_database_v1.DatabaseAdminClient() - >>> - >>> # TODO: Initialize `resource`: - >>> resource = '' - >>> - >>> response = client.get_iam_policy(resource) - - Args: - resource (str): REQUIRED: The resource for which the policy is being requested. - See the operation documentation for the appropriate value for this field. - options_ (Union[dict, ~google.cloud.spanner_admin_database_v1.types.GetPolicyOptions]): OPTIONAL: A ``GetPolicyOptions`` object for specifying options to - ``GetIamPolicy``. This field is only used by Cloud IAM. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_admin_database_v1.types.GetPolicyOptions` - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_admin_database_v1.types.Policy` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "get_iam_policy" not in self._inner_api_calls: - self._inner_api_calls[ - "get_iam_policy" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.get_iam_policy, - default_retry=self._method_configs["GetIamPolicy"].retry, - default_timeout=self._method_configs["GetIamPolicy"].timeout, - client_info=self._client_info, - ) - - request = iam_policy_pb2.GetIamPolicyRequest( - resource=resource, options=options_ - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("resource", resource)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["get_iam_policy"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def test_iam_permissions( - self, - resource, - permissions, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Returns permissions that the caller has on the specified database or - backup resource. - - Attempting this RPC on a non-existent Cloud Spanner database will result - in a NOT\_FOUND error if the user has ``spanner.databases.list`` - permission on the containing Cloud Spanner instance. Otherwise returns - an empty set of permissions. Calling this method on a backup that does - not exist will result in a NOT\_FOUND error if the user has - ``spanner.backups.list`` permission on the containing instance. 
- - Example: - >>> from google.cloud import spanner_admin_database_v1 - >>> - >>> client = spanner_admin_database_v1.DatabaseAdminClient() - >>> - >>> # TODO: Initialize `resource`: - >>> resource = '' - >>> - >>> # TODO: Initialize `permissions`: - >>> permissions = [] - >>> - >>> response = client.test_iam_permissions(resource, permissions) - - Args: - resource (str): REQUIRED: The resource for which the policy detail is being requested. - See the operation documentation for the appropriate value for this field. - permissions (list[str]): The set of permissions to check for the ``resource``. Permissions with - wildcards (such as '*' or 'storage.*') are not allowed. For more - information see `IAM - Overview `__. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_admin_database_v1.types.TestIamPermissionsResponse` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "test_iam_permissions" not in self._inner_api_calls: - self._inner_api_calls[ - "test_iam_permissions" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.test_iam_permissions, - default_retry=self._method_configs["TestIamPermissions"].retry, - default_timeout=self._method_configs["TestIamPermissions"].timeout, - client_info=self._client_info, - ) - - request = iam_policy_pb2.TestIamPermissionsRequest( - resource=resource, permissions=permissions - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("resource", resource)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["test_iam_permissions"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def create_backup( - self, - parent, - backup_id, - backup, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Starts creating a new Cloud Spanner Backup. The returned backup - ``long-running operation`` will have a name of the format - ``projects//instances//backups//operations/`` - and can be used to track creation of the backup. The ``metadata`` field - type is ``CreateBackupMetadata``. The ``response`` field type is - ``Backup``, if successful. Cancelling the returned operation will stop - the creation and delete the backup. There can be only one pending backup - creation per database. Backup creation of different databases can run - concurrently. 
- - Example: - >>> from google.cloud import spanner_admin_database_v1 - >>> - >>> client = spanner_admin_database_v1.DatabaseAdminClient() - >>> - >>> parent = client.instance_path('[PROJECT]', '[INSTANCE]') - >>> - >>> # TODO: Initialize `backup_id`: - >>> backup_id = '' - >>> - >>> # TODO: Initialize `backup`: - >>> backup = {} - >>> - >>> response = client.create_backup(parent, backup_id, backup) - >>> - >>> def callback(operation_future): - ... # Handle result. - ... result = operation_future.result() - >>> - >>> response.add_done_callback(callback) - >>> - >>> # Handle metadata. - >>> metadata = response.metadata() - - Args: - parent (str): Required. The name of the instance in which the backup will be created. - This must be the same instance that contains the database the backup - will be created from. The backup will be stored in the location(s) - specified in the instance configuration of this instance. Values are of - the form ``projects//instances/``. - backup_id (str): Required. The id of the backup to be created. The ``backup_id`` appended - to ``parent`` forms the full backup name of the form - ``projects//instances//backups/``. - backup (Union[dict, ~google.cloud.spanner_admin_database_v1.types.Backup]): Required. The backup to create. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_admin_database_v1.types.Backup` - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.api_core.operation.Operation` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "create_backup" not in self._inner_api_calls: - self._inner_api_calls[ - "create_backup" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.create_backup, - default_retry=self._method_configs["CreateBackup"].retry, - default_timeout=self._method_configs["CreateBackup"].timeout, - client_info=self._client_info, - ) - - request = backup_pb2.CreateBackupRequest( - parent=parent, backup_id=backup_id, backup=backup - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("parent", parent)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - operation = self._inner_api_calls["create_backup"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - return google.api_core.operation.from_gapic( - operation, - self.transport._operations_client, - backup_pb2.Backup, - metadata_type=backup_pb2.CreateBackupMetadata, - ) - - def get_backup( - self, - name, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Gets metadata on a pending or completed ``Backup``. 
- - Example: - >>> from google.cloud import spanner_admin_database_v1 - >>> - >>> client = spanner_admin_database_v1.DatabaseAdminClient() - >>> - >>> name = client.backup_path('[PROJECT]', '[INSTANCE]', '[BACKUP]') - >>> - >>> response = client.get_backup(name) - - Args: - name (str): Required. Name of the backup. Values are of the form - ``projects//instances//backups/``. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_admin_database_v1.types.Backup` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "get_backup" not in self._inner_api_calls: - self._inner_api_calls[ - "get_backup" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.get_backup, - default_retry=self._method_configs["GetBackup"].retry, - default_timeout=self._method_configs["GetBackup"].timeout, - client_info=self._client_info, - ) - - request = backup_pb2.GetBackupRequest(name=name) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("name", name)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["get_backup"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def update_backup( - self, - backup, - update_mask, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Updates a pending or completed ``Backup``. - - Example: - >>> from google.cloud import spanner_admin_database_v1 - >>> - >>> client = spanner_admin_database_v1.DatabaseAdminClient() - >>> - >>> # TODO: Initialize `backup`: - >>> backup = {} - >>> - >>> # TODO: Initialize `update_mask`: - >>> update_mask = {} - >>> - >>> response = client.update_backup(backup, update_mask) - - Args: - backup (Union[dict, ~google.cloud.spanner_admin_database_v1.types.Backup]): Required. The backup to update. ``backup.name``, and the fields to be - updated as specified by ``update_mask`` are required. Other fields are - ignored. Update is only supported for the following fields: - - - ``backup.expire_time``. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_admin_database_v1.types.Backup` - update_mask (Union[dict, ~google.cloud.spanner_admin_database_v1.types.FieldMask]): Required. A mask specifying which fields (e.g. ``expire_time``) in the - Backup resource should be updated. This mask is relative to the Backup - resource, not to the request message. The field mask must always be - specified; this prevents any future fields from being erased - accidentally by clients that do not know about them. 
- - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_admin_database_v1.types.FieldMask` - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_admin_database_v1.types.Backup` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "update_backup" not in self._inner_api_calls: - self._inner_api_calls[ - "update_backup" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.update_backup, - default_retry=self._method_configs["UpdateBackup"].retry, - default_timeout=self._method_configs["UpdateBackup"].timeout, - client_info=self._client_info, - ) - - request = backup_pb2.UpdateBackupRequest(backup=backup, update_mask=update_mask) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("backup.name", backup.name)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["update_backup"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def delete_backup( - self, - name, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Deletes a pending or completed ``Backup``. - - Example: - >>> from google.cloud import spanner_admin_database_v1 - >>> - >>> client = spanner_admin_database_v1.DatabaseAdminClient() - >>> - >>> name = client.backup_path('[PROJECT]', '[INSTANCE]', '[BACKUP]') - >>> - >>> client.delete_backup(name) - - Args: - name (str): Required. Name of the backup to delete. Values are of the form - ``projects//instances//backups/``. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. 
- if "delete_backup" not in self._inner_api_calls: - self._inner_api_calls[ - "delete_backup" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.delete_backup, - default_retry=self._method_configs["DeleteBackup"].retry, - default_timeout=self._method_configs["DeleteBackup"].timeout, - client_info=self._client_info, - ) - - request = backup_pb2.DeleteBackupRequest(name=name) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("name", name)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - self._inner_api_calls["delete_backup"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def list_backups( - self, - parent, - filter_=None, - page_size=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Lists completed and pending backups. Backups returned are ordered by - ``create_time`` in descending order, starting from the most recent - ``create_time``. - - Example: - >>> from google.cloud import spanner_admin_database_v1 - >>> - >>> client = spanner_admin_database_v1.DatabaseAdminClient() - >>> - >>> parent = client.instance_path('[PROJECT]', '[INSTANCE]') - >>> - >>> # Iterate over all results - >>> for element in client.list_backups(parent): - ... # process element - ... pass - >>> - >>> - >>> # Alternatively: - >>> - >>> # Iterate over results one page at a time - >>> for page in client.list_backups(parent).pages: - ... for element in page: - ... # process element - ... pass - - Args: - parent (str): Required. The instance to list backups from. Values are of the form - ``projects//instances/``. - filter_ (str): An expression that filters the list of returned backups. - - A filter expression consists of a field name, a comparison operator, and - a value for filtering. The value must be a string, a number, or a - boolean. The comparison operator must be one of: ``<``, ``>``, ``<=``, - ``>=``, ``!=``, ``=``, or ``:``. Colon ``:`` is the contains operator. - Filter rules are not case sensitive. - - The following fields in the ``Backup`` are eligible for filtering: - - - ``name`` - - ``database`` - - ``state`` - - ``create_time`` (and values are of the format YYYY-MM-DDTHH:MM:SSZ) - - ``expire_time`` (and values are of the format YYYY-MM-DDTHH:MM:SSZ) - - ``size_bytes`` - - You can combine multiple expressions by enclosing each expression in - parentheses. By default, expressions are combined with AND logic, but - you can specify AND, OR, and NOT logic explicitly. - - Here are a few examples: - - - ``name:Howl`` - The backup's name contains the string "howl". - - ``database:prod`` - The database's name contains the string "prod". - - ``state:CREATING`` - The backup is pending creation. - - ``state:READY`` - The backup is fully created and ready for use. - - ``(name:howl) AND (create_time < \"2018-03-28T14:50:00Z\")`` - The - backup name contains the string "howl" and ``create_time`` of the - backup is before 2018-03-28T14:50:00Z. - - ``expire_time < \"2018-03-28T14:50:00Z\"`` - The backup - ``expire_time`` is before 2018-03-28T14:50:00Z. - - ``size_bytes > 10000000000`` - The backup's size is greater than 10GB - page_size (int): The maximum number of resources contained in the - underlying API response. If page streaming is performed per- - resource, this parameter does not affect the return value. 
If page - streaming is performed per-page, this determines the maximum number - of resources in a page. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.api_core.page_iterator.PageIterator` instance. - An iterable of :class:`~google.cloud.spanner_admin_database_v1.types.Backup` instances. - You can also iterate over the pages of the response - using its `pages` property. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "list_backups" not in self._inner_api_calls: - self._inner_api_calls[ - "list_backups" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.list_backups, - default_retry=self._method_configs["ListBackups"].retry, - default_timeout=self._method_configs["ListBackups"].timeout, - client_info=self._client_info, - ) - - request = backup_pb2.ListBackupsRequest( - parent=parent, filter=filter_, page_size=page_size - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("parent", parent)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - iterator = google.api_core.page_iterator.GRPCIterator( - client=None, - method=functools.partial( - self._inner_api_calls["list_backups"], - retry=retry, - timeout=timeout, - metadata=metadata, - ), - request=request, - items_field="backups", - request_token_field="page_token", - response_token_field="next_page_token", - ) - return iterator - - def restore_database( - self, - parent, - database_id, - backup=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Create a new database by restoring from a completed backup. The new - database must be in the same project and in an instance with the same - instance configuration as the instance containing the backup. The - returned database ``long-running operation`` has a name of the format - ``projects//instances//databases//operations/``, - and can be used to track the progress of the operation, and to cancel - it. The ``metadata`` field type is ``RestoreDatabaseMetadata``. The - ``response`` type is ``Database``, if successful. Cancelling the - returned operation will stop the restore and delete the database. There - can be only one database being restored into an instance at a time. Once - the restore operation completes, a new restore operation can be - initiated, without waiting for the optimize operation associated with - the first restore to complete. 
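Combined with the ``list_backups`` filter grammar shown earlier, restoring from the newest usable backup is a short flow. The sketch below uses placeholder ids, not values from this changeset:

.. code-block:: python

    from google.cloud import spanner_admin_database_v1

    client = spanner_admin_database_v1.DatabaseAdminClient()
    parent = client.instance_path('my-project', 'my-instance')

    # Backups are returned ordered by create_time descending, so the first
    # READY backup of the source database is also the most recent one.
    backups = client.list_backups(
        parent, filter_='(state:READY) AND (database:my-database)'
    )
    latest = next(iter(backups), None)

    if latest is not None:
        # Waits for the restore itself; the post-restore optimize step
        # continues in the background (database may be READY_OPTIMIZING).
        operation = client.restore_database(
            parent, 'my-restored-db', backup=latest.name
        )
        database = operation.result(timeout=3600)
        print('Restored database:', database.name)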
- - Example: - >>> from google.cloud import spanner_admin_database_v1 - >>> - >>> client = spanner_admin_database_v1.DatabaseAdminClient() - >>> - >>> parent = client.instance_path('[PROJECT]', '[INSTANCE]') - >>> - >>> # TODO: Initialize `database_id`: - >>> database_id = '' - >>> - >>> response = client.restore_database(parent, database_id) - >>> - >>> def callback(operation_future): - ... # Handle result. - ... result = operation_future.result() - >>> - >>> response.add_done_callback(callback) - >>> - >>> # Handle metadata. - >>> metadata = response.metadata() - - Args: - parent (str): Required. The name of the instance in which to create the restored - database. This instance must be in the same project and have the same - instance configuration as the instance containing the source backup. - Values are of the form ``projects//instances/``. - database_id (str): Required. The id of the database to create and restore to. This database - must not already exist. The ``database_id`` appended to ``parent`` forms - the full database name of the form - ``projects//instances//databases/``. - backup (str): Name of the backup from which to restore. Values are of the form - ``projects//instances//backups/``. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.api_core.operation.Operation` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "restore_database" not in self._inner_api_calls: - self._inner_api_calls[ - "restore_database" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.restore_database, - default_retry=self._method_configs["RestoreDatabase"].retry, - default_timeout=self._method_configs["RestoreDatabase"].timeout, - client_info=self._client_info, - ) - - # Sanity check: We have some fields which are mutually exclusive; - # raise ValueError if more than one is sent. 
- google.api_core.protobuf_helpers.check_oneof(backup=backup) - - request = spanner_database_admin_pb2.RestoreDatabaseRequest( - parent=parent, database_id=database_id, backup=backup - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("parent", parent)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - operation = self._inner_api_calls["restore_database"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - return google.api_core.operation.from_gapic( - operation, - self.transport._operations_client, - spanner_database_admin_pb2.Database, - metadata_type=spanner_database_admin_pb2.RestoreDatabaseMetadata, - ) - - def list_database_operations( - self, - parent, - filter_=None, - page_size=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Lists database ``longrunning-operations``. A database operation has a - name of the form - ``projects//instances//databases//operations/``. - The long-running operation ``metadata`` field type ``metadata.type_url`` - describes the type of the metadata. Operations returned include those - that have completed/failed/canceled within the last 7 days, and pending - operations. - - Example: - >>> from google.cloud import spanner_admin_database_v1 - >>> - >>> client = spanner_admin_database_v1.DatabaseAdminClient() - >>> - >>> parent = client.instance_path('[PROJECT]', '[INSTANCE]') - >>> - >>> # Iterate over all results - >>> for element in client.list_database_operations(parent): - ... # process element - ... pass - >>> - >>> - >>> # Alternatively: - >>> - >>> # Iterate over results one page at a time - >>> for page in client.list_database_operations(parent).pages: - ... for element in page: - ... # process element - ... pass - - Args: - parent (str): Required. The instance of the database operations. Values are of the - form ``projects//instances/``. - filter_ (str): An expression that filters the list of returned operations. - - A filter expression consists of a field name, a comparison operator, and - a value for filtering. The value must be a string, a number, or a - boolean. The comparison operator must be one of: ``<``, ``>``, ``<=``, - ``>=``, ``!=``, ``=``, or ``:``. Colon ``:`` is the contains operator. - Filter rules are not case sensitive. - - The following fields in the ``Operation`` are eligible for filtering: - - - ``name`` - The name of the long-running operation - - ``done`` - False if the operation is in progress, else true. - - ``metadata.@type`` - the type of metadata. For example, the type - string for ``RestoreDatabaseMetadata`` is - ``type.googleapis.com/google.spanner.admin.database.v1.RestoreDatabaseMetadata``. - - ``metadata.`` - any field in metadata.value. - - ``error`` - Error associated with the long-running operation. - - ``response.@type`` - the type of response. - - ``response.`` - any field in response.value. - - You can combine multiple expressions by enclosing each expression in - parentheses. By default, expressions are combined with AND logic. - However, you can specify AND, OR, and NOT logic explicitly. - - Here are a few examples: - - - ``done:true`` - The operation is complete. 
- - ``(metadata.@type=type.googleapis.com/google.spanner.admin.database.v1.RestoreDatabaseMetadata) AND`` - ``(metadata.source_type:BACKUP) AND`` - ``(metadata.backup_info.backup:backup_howl) AND`` - ``(metadata.name:restored_howl) AND`` - ``(metadata.progress.start_time < \"2018-03-28T14:50:00Z\") AND`` - ``(error:*)`` - Return operations where: - - - The operation's metadata type is ``RestoreDatabaseMetadata``. - - The database is restored from a backup. - - The backup name contains "backup\_howl". - - The restored database's name contains "restored\_howl". - - The operation started before 2018-03-28T14:50:00Z. - - The operation resulted in an error. - page_size (int): The maximum number of resources contained in the - underlying API response. If page streaming is performed per- - resource, this parameter does not affect the return value. If page - streaming is performed per-page, this determines the maximum number - of resources in a page. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.api_core.page_iterator.PageIterator` instance. - An iterable of :class:`~google.cloud.spanner_admin_database_v1.types.Operation` instances. - You can also iterate over the pages of the response - using its `pages` property. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "list_database_operations" not in self._inner_api_calls: - self._inner_api_calls[ - "list_database_operations" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.list_database_operations, - default_retry=self._method_configs["ListDatabaseOperations"].retry, - default_timeout=self._method_configs["ListDatabaseOperations"].timeout, - client_info=self._client_info, - ) - - request = spanner_database_admin_pb2.ListDatabaseOperationsRequest( - parent=parent, filter=filter_, page_size=page_size - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("parent", parent)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - iterator = google.api_core.page_iterator.GRPCIterator( - client=None, - method=functools.partial( - self._inner_api_calls["list_database_operations"], - retry=retry, - timeout=timeout, - metadata=metadata, - ), - request=request, - items_field="operations", - request_token_field="page_token", - response_token_field="next_page_token", - ) - return iterator - - def list_backup_operations( - self, - parent, - filter_=None, - page_size=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Lists the backup ``long-running operations`` in the given instance. 
A - backup operation has a name of the form - ``projects//instances//backups//operations/``. - The long-running operation ``metadata`` field type ``metadata.type_url`` - describes the type of the metadata. Operations returned include those - that have completed/failed/canceled within the last 7 days, and pending - operations. Operations returned are ordered by - ``operation.metadata.value.progress.start_time`` in descending order - starting from the most recently started operation. - - Example: - >>> from google.cloud import spanner_admin_database_v1 - >>> - >>> client = spanner_admin_database_v1.DatabaseAdminClient() - >>> - >>> parent = client.instance_path('[PROJECT]', '[INSTANCE]') - >>> - >>> # Iterate over all results - >>> for element in client.list_backup_operations(parent): - ... # process element - ... pass - >>> - >>> - >>> # Alternatively: - >>> - >>> # Iterate over results one page at a time - >>> for page in client.list_backup_operations(parent).pages: - ... for element in page: - ... # process element - ... pass - - Args: - parent (str): Required. The instance of the backup operations. Values are of the form - ``projects//instances/``. - filter_ (str): An expression that filters the list of returned backup operations. - - A filter expression consists of a field name, a comparison operator, and - a value for filtering. The value must be a string, a number, or a - boolean. The comparison operator must be one of: ``<``, ``>``, ``<=``, - ``>=``, ``!=``, ``=``, or ``:``. Colon ``:`` is the contains operator. - Filter rules are not case sensitive. - - The following fields in the ``operation`` are eligible for filtering: - - - ``name`` - The name of the long-running operation - - ``done`` - False if the operation is in progress, else true. - - ``metadata.@type`` - the type of metadata. For example, the type - string for ``CreateBackupMetadata`` is - ``type.googleapis.com/google.spanner.admin.database.v1.CreateBackupMetadata``. - - ``metadata.`` - any field in metadata.value. - - ``error`` - Error associated with the long-running operation. - - ``response.@type`` - the type of response. - - ``response.`` - any field in response.value. - - You can combine multiple expressions by enclosing each expression in - parentheses. By default, expressions are combined with AND logic, but - you can specify AND, OR, and NOT logic explicitly. - - Here are a few examples: - - - ``done:true`` - The operation is complete. - - ``metadata.database:prod`` - The database the backup was taken from - has a name containing the string "prod". - - ``(metadata.@type=type.googleapis.com/google.spanner.admin.database.v1.CreateBackupMetadata) AND`` - ``(metadata.name:howl) AND`` - ``(metadata.progress.start_time < \"2018-03-28T14:50:00Z\") AND`` - ``(error:*)`` - Returns operations where: - - - The operation's metadata type is ``CreateBackupMetadata``. - - The backup name contains the string "howl". - - The operation started before 2018-03-28T14:50:00Z. - - The operation resulted in an error. - page_size (int): The maximum number of resources contained in the - underlying API response. If page streaming is performed per- - resource, this parameter does not affect the return value. If page - streaming is performed per-page, this determines the maximum number - of resources in a page. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. 
- timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.api_core.page_iterator.PageIterator` instance. - An iterable of :class:`~google.cloud.spanner_admin_database_v1.types.Operation` instances. - You can also iterate over the pages of the response - using its `pages` property. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "list_backup_operations" not in self._inner_api_calls: - self._inner_api_calls[ - "list_backup_operations" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.list_backup_operations, - default_retry=self._method_configs["ListBackupOperations"].retry, - default_timeout=self._method_configs["ListBackupOperations"].timeout, - client_info=self._client_info, - ) - - request = backup_pb2.ListBackupOperationsRequest( - parent=parent, filter=filter_, page_size=page_size - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("parent", parent)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - iterator = google.api_core.page_iterator.GRPCIterator( - client=None, - method=functools.partial( - self._inner_api_calls["list_backup_operations"], - retry=retry, - timeout=timeout, - metadata=metadata, - ), - request=request, - items_field="operations", - request_token_field="page_token", - response_token_field="next_page_token", - ) - return iterator - - def list_databases( - self, - parent, - page_size=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Lists Cloud Spanner databases. - - Example: - >>> from google.cloud import spanner_admin_database_v1 - >>> - >>> client = spanner_admin_database_v1.DatabaseAdminClient() - >>> - >>> parent = client.instance_path('[PROJECT]', '[INSTANCE]') - >>> - >>> # Iterate over all results - >>> for element in client.list_databases(parent): - ... # process element - ... pass - >>> - >>> - >>> # Alternatively: - >>> - >>> # Iterate over results one page at a time - >>> for page in client.list_databases(parent).pages: - ... for element in page: - ... # process element - ... pass - - Args: - parent (str): Required. The instance whose databases should be listed. Values are of - the form ``projects//instances/``. - page_size (int): The maximum number of resources contained in the - underlying API response. If page streaming is performed per- - resource, this parameter does not affect the return value. If page - streaming is performed per-page, this determines the maximum number - of resources in a page. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. 
Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.api_core.page_iterator.PageIterator` instance. - An iterable of :class:`~google.cloud.spanner_admin_database_v1.types.Database` instances. - You can also iterate over the pages of the response - using its `pages` property. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "list_databases" not in self._inner_api_calls: - self._inner_api_calls[ - "list_databases" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.list_databases, - default_retry=self._method_configs["ListDatabases"].retry, - default_timeout=self._method_configs["ListDatabases"].timeout, - client_info=self._client_info, - ) - - request = spanner_database_admin_pb2.ListDatabasesRequest( - parent=parent, page_size=page_size - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("parent", parent)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - iterator = google.api_core.page_iterator.GRPCIterator( - client=None, - method=functools.partial( - self._inner_api_calls["list_databases"], - retry=retry, - timeout=timeout, - metadata=metadata, - ), - request=request, - items_field="databases", - request_token_field="page_token", - response_token_field="next_page_token", - ) - return iterator diff --git a/google/cloud/spanner_admin_database_v1/gapic/database_admin_client_config.py b/google/cloud/spanner_admin_database_v1/gapic/database_admin_client_config.py deleted file mode 100644 index d6f830eeee..0000000000 --- a/google/cloud/spanner_admin_database_v1/gapic/database_admin_client_config.py +++ /dev/null @@ -1,108 +0,0 @@ -config = { - "interfaces": { - "google.spanner.admin.database.v1.DatabaseAdmin": { - "retry_codes": { - "idempotent": ["DEADLINE_EXCEEDED", "UNAVAILABLE"], - "non_idempotent": [], - }, - "retry_params": { - "default": { - "initial_retry_delay_millis": 1000, - "retry_delay_multiplier": 1.3, - "max_retry_delay_millis": 32000, - "initial_rpc_timeout_millis": 60000, - "rpc_timeout_multiplier": 1.0, - "max_rpc_timeout_millis": 60000, - "total_timeout_millis": 600000, - } - }, - "methods": { - "CreateDatabase": { - "timeout_millis": 3600000, - "retry_codes_name": "non_idempotent", - "retry_params_name": "default", - }, - "GetDatabase": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "UpdateDatabaseDdl": { - "timeout_millis": 3600000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "DropDatabase": { - "timeout_millis": 3600000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "GetDatabaseDdl": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "SetIamPolicy": { - "timeout_millis": 30000, - "retry_codes_name": "non_idempotent", - "retry_params_name": "default", - }, - "GetIamPolicy": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - 
"retry_params_name": "default", - }, - "TestIamPermissions": { - "timeout_millis": 30000, - "retry_codes_name": "non_idempotent", - "retry_params_name": "default", - }, - "CreateBackup": { - "timeout_millis": 30000, - "retry_codes_name": "non_idempotent", - "retry_params_name": "default", - }, - "GetBackup": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "UpdateBackup": { - "timeout_millis": 30000, - "retry_codes_name": "non_idempotent", - "retry_params_name": "default", - }, - "DeleteBackup": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "ListBackups": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "RestoreDatabase": { - "timeout_millis": 30000, - "retry_codes_name": "non_idempotent", - "retry_params_name": "default", - }, - "ListDatabaseOperations": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "ListBackupOperations": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "ListDatabases": { - "timeout_millis": 60000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - }, - } - } -} diff --git a/google/cloud/spanner_admin_database_v1/gapic/enums.py b/google/cloud/spanner_admin_database_v1/gapic/enums.py deleted file mode 100644 index d972ddfc57..0000000000 --- a/google/cloud/spanner_admin_database_v1/gapic/enums.py +++ /dev/null @@ -1,74 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Wrappers for protocol buffer enum types.""" - -import enum - - -class RestoreSourceType(enum.IntEnum): - """ - Indicates the type of the restore source. - - Attributes: - TYPE_UNSPECIFIED (int): No restore associated. - BACKUP (int): A backup was used as the source of the restore. - """ - - TYPE_UNSPECIFIED = 0 - BACKUP = 1 - - -class Backup(object): - class State(enum.IntEnum): - """ - Indicates the current state of the backup. - - Attributes: - STATE_UNSPECIFIED (int): Not specified. - CREATING (int): The pending backup is still being created. Operations on the backup may - fail with ``FAILED_PRECONDITION`` in this state. - READY (int): The backup is complete and ready for use. - """ - - STATE_UNSPECIFIED = 0 - CREATING = 1 - READY = 2 - - -class Database(object): - class State(enum.IntEnum): - """ - Indicates the current state of the database. - - Attributes: - STATE_UNSPECIFIED (int): Not specified. - CREATING (int): The database is still being created. Operations on the database may fail - with ``FAILED_PRECONDITION`` in this state. - READY (int): The database is fully created and ready for use. - READY_OPTIMIZING (int): The database is fully created and ready for use, but is still being - optimized for performance and cannot handle full load. 
- - In this state, the database still references the backup it was restored - from, preventing the backup from being deleted. When optimizations are - complete, the full performance of the database will be restored, and the - database will transition to ``READY`` state. - """ - - STATE_UNSPECIFIED = 0 - CREATING = 1 - READY = 2 - READY_OPTIMIZING = 3 diff --git a/google/cloud/spanner_admin_database_v1/gapic/transports/database_admin_grpc_transport.py b/google/cloud/spanner_admin_database_v1/gapic/transports/database_admin_grpc_transport.py deleted file mode 100644 index 2fb41caab2..0000000000 --- a/google/cloud/spanner_admin_database_v1/gapic/transports/database_admin_grpc_transport.py +++ /dev/null @@ -1,410 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import google.api_core.grpc_helpers -import google.api_core.operations_v1 - -from google.cloud.spanner_admin_database_v1.proto import spanner_database_admin_pb2_grpc - - -class DatabaseAdminGrpcTransport(object): - """gRPC transport class providing stubs for - google.spanner.admin.database.v1 DatabaseAdmin API. - - The transport provides access to the raw gRPC stubs, - which can be used to take advantage of advanced - features of gRPC. - """ - - # The scopes needed to make gRPC calls to all of the methods defined - # in this service. - _OAUTH_SCOPES = ( - "https://www.googleapis.com/auth/cloud-platform", - "https://www.googleapis.com/auth/spanner.admin", - ) - - def __init__( - self, channel=None, credentials=None, address="spanner.googleapis.com:443" - ): - """Instantiate the transport class. - - Args: - channel (grpc.Channel): A ``Channel`` instance through - which to make calls. This argument is mutually exclusive - with ``credentials``; providing both will raise an exception. - credentials (google.auth.credentials.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If none - are specified, the client will attempt to ascertain the - credentials from the environment. - address (str): The address where the service is hosted. - """ - # If both `channel` and `credentials` are specified, raise an - # exception (channels come with credentials baked in already). - if channel is not None and credentials is not None: - raise ValueError( - "The `channel` and `credentials` arguments are mutually " "exclusive." - ) - - # Create the channel. - if channel is None: - channel = self.create_channel( - address=address, - credentials=credentials, - options={ - "grpc.max_send_message_length": -1, - "grpc.max_receive_message_length": -1, - }.items(), - ) - - self._channel = channel - - # gRPC uses objects called "stubs" that are bound to the - # channel and provide a basic method for each RPC.
- self._stubs = { - "database_admin_stub": spanner_database_admin_pb2_grpc.DatabaseAdminStub( - channel - ) - } - - # Because this API includes a method that returns a - # long-running operation (proto: google.longrunning.Operation), - # instantiate an LRO client. - self._operations_client = google.api_core.operations_v1.OperationsClient( - channel - ) - - @classmethod - def create_channel( - cls, address="spanner.googleapis.com:443", credentials=None, **kwargs - ): - """Create and return a gRPC channel object. - - Args: - address (str): The host for the channel to use. - credentials (~.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If - none are specified, the client will attempt to ascertain - the credentials from the environment. - kwargs (dict): Keyword arguments, which are passed to the - channel creation. - - Returns: - grpc.Channel: A gRPC channel object. - """ - return google.api_core.grpc_helpers.create_channel( - address, credentials=credentials, scopes=cls._OAUTH_SCOPES, **kwargs - ) - - @property - def channel(self): - """The gRPC channel used by the transport. - - Returns: - grpc.Channel: A gRPC channel object. - """ - return self._channel - - @property - def create_database(self): - """Return the gRPC stub for :meth:`DatabaseAdminClient.create_database`. - - Creates a new Cloud Spanner database and starts to prepare it for - serving. The returned ``long-running operation`` will have a name of the - format ``/operations/`` and can be used to - track preparation of the database. The ``metadata`` field type is - ``CreateDatabaseMetadata``. The ``response`` field type is ``Database``, - if successful. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["database_admin_stub"].CreateDatabase - - @property - def get_database(self): - """Return the gRPC stub for :meth:`DatabaseAdminClient.get_database`. - - Gets the state of a Cloud Spanner database. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["database_admin_stub"].GetDatabase - - @property - def update_database_ddl(self): - """Return the gRPC stub for :meth:`DatabaseAdminClient.update_database_ddl`. - - Updates the schema of a Cloud Spanner database by - creating/altering/dropping tables, columns, indexes, etc. The returned - ``long-running operation`` will have a name of the format - ``/operations/`` and can be used to track - execution of the schema change(s). The ``metadata`` field type is - ``UpdateDatabaseDdlMetadata``. The operation has no response. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["database_admin_stub"].UpdateDatabaseDdl - - @property - def drop_database(self): - """Return the gRPC stub for :meth:`DatabaseAdminClient.drop_database`. - - Drops (aka deletes) a Cloud Spanner database. Completed backups for the - database will be retained according to their ``expire_time``. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. 
- """ - return self._stubs["database_admin_stub"].DropDatabase - - @property - def get_database_ddl(self): - """Return the gRPC stub for :meth:`DatabaseAdminClient.get_database_ddl`. - - Returns the schema of a Cloud Spanner database as a list of formatted - DDL statements. This method does not show pending schema updates, those - may be queried using the ``Operations`` API. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["database_admin_stub"].GetDatabaseDdl - - @property - def set_iam_policy(self): - """Return the gRPC stub for :meth:`DatabaseAdminClient.set_iam_policy`. - - Sets the access control policy on a database or backup resource. - Replaces any existing policy. - - Authorization requires ``spanner.databases.setIamPolicy`` permission on - ``resource``. For backups, authorization requires - ``spanner.backups.setIamPolicy`` permission on ``resource``. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["database_admin_stub"].SetIamPolicy - - @property - def get_iam_policy(self): - """Return the gRPC stub for :meth:`DatabaseAdminClient.get_iam_policy`. - - Gets the access control policy for a database or backup resource. - Returns an empty policy if a database or backup exists but does not have - a policy set. - - Authorization requires ``spanner.databases.getIamPolicy`` permission on - ``resource``. For backups, authorization requires - ``spanner.backups.getIamPolicy`` permission on ``resource``. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["database_admin_stub"].GetIamPolicy - - @property - def test_iam_permissions(self): - """Return the gRPC stub for :meth:`DatabaseAdminClient.test_iam_permissions`. - - Returns permissions that the caller has on the specified database or - backup resource. - - Attempting this RPC on a non-existent Cloud Spanner database will result - in a NOT\_FOUND error if the user has ``spanner.databases.list`` - permission on the containing Cloud Spanner instance. Otherwise returns - an empty set of permissions. Calling this method on a backup that does - not exist will result in a NOT\_FOUND error if the user has - ``spanner.backups.list`` permission on the containing instance. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["database_admin_stub"].TestIamPermissions - - @property - def create_backup(self): - """Return the gRPC stub for :meth:`DatabaseAdminClient.create_backup`. - - Starts creating a new Cloud Spanner Backup. The returned backup - ``long-running operation`` will have a name of the format - ``projects//instances//backups//operations/`` - and can be used to track creation of the backup. The ``metadata`` field - type is ``CreateBackupMetadata``. The ``response`` field type is - ``Backup``, if successful. Cancelling the returned operation will stop - the creation and delete the backup. There can be only one pending backup - creation per database. Backup creation of different databases can run - concurrently. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. 
- """ - return self._stubs["database_admin_stub"].CreateBackup - - @property - def get_backup(self): - """Return the gRPC stub for :meth:`DatabaseAdminClient.get_backup`. - - Gets metadata on a pending or completed ``Backup``. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["database_admin_stub"].GetBackup - - @property - def update_backup(self): - """Return the gRPC stub for :meth:`DatabaseAdminClient.update_backup`. - - Updates a pending or completed ``Backup``. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["database_admin_stub"].UpdateBackup - - @property - def delete_backup(self): - """Return the gRPC stub for :meth:`DatabaseAdminClient.delete_backup`. - - Deletes a pending or completed ``Backup``. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["database_admin_stub"].DeleteBackup - - @property - def list_backups(self): - """Return the gRPC stub for :meth:`DatabaseAdminClient.list_backups`. - - Lists completed and pending backups. Backups returned are ordered by - ``create_time`` in descending order, starting from the most recent - ``create_time``. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["database_admin_stub"].ListBackups - - @property - def restore_database(self): - """Return the gRPC stub for :meth:`DatabaseAdminClient.restore_database`. - - Create a new database by restoring from a completed backup. The new - database must be in the same project and in an instance with the same - instance configuration as the instance containing the backup. The - returned database ``long-running operation`` has a name of the format - ``projects//instances//databases//operations/``, - and can be used to track the progress of the operation, and to cancel - it. The ``metadata`` field type is ``RestoreDatabaseMetadata``. The - ``response`` type is ``Database``, if successful. Cancelling the - returned operation will stop the restore and delete the database. There - can be only one database being restored into an instance at a time. Once - the restore operation completes, a new restore operation can be - initiated, without waiting for the optimize operation associated with - the first restore to complete. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["database_admin_stub"].RestoreDatabase - - @property - def list_database_operations(self): - """Return the gRPC stub for :meth:`DatabaseAdminClient.list_database_operations`. - - Lists database ``longrunning-operations``. A database operation has a - name of the form - ``projects//instances//databases//operations/``. - The long-running operation ``metadata`` field type ``metadata.type_url`` - describes the type of the metadata. Operations returned include those - that have completed/failed/canceled within the last 7 days, and pending - operations. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. 
- """ - return self._stubs["database_admin_stub"].ListDatabaseOperations - - @property - def list_backup_operations(self): - """Return the gRPC stub for :meth:`DatabaseAdminClient.list_backup_operations`. - - Lists the backup ``long-running operations`` in the given instance. A - backup operation has a name of the form - ``projects//instances//backups//operations/``. - The long-running operation ``metadata`` field type ``metadata.type_url`` - describes the type of the metadata. Operations returned include those - that have completed/failed/canceled within the last 7 days, and pending - operations. Operations returned are ordered by - ``operation.metadata.value.progress.start_time`` in descending order - starting from the most recently started operation. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["database_admin_stub"].ListBackupOperations - - @property - def list_databases(self): - """Return the gRPC stub for :meth:`DatabaseAdminClient.list_databases`. - - Lists Cloud Spanner databases. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["database_admin_stub"].ListDatabases diff --git a/google/cloud/spanner_admin_database_v1/gapic_metadata.json b/google/cloud/spanner_admin_database_v1/gapic_metadata.json new file mode 100644 index 0000000000..027a4f612b --- /dev/null +++ b/google/cloud/spanner_admin_database_v1/gapic_metadata.json @@ -0,0 +1,433 @@ + { + "comment": "This file maps proto services/RPCs to the corresponding library clients/methods", + "language": "python", + "libraryPackage": "google.cloud.spanner_admin_database_v1", + "protoPackage": "google.spanner.admin.database.v1", + "schema": "1.0", + "services": { + "DatabaseAdmin": { + "clients": { + "grpc": { + "libraryClient": "DatabaseAdminClient", + "rpcs": { + "AddSplitPoints": { + "methods": [ + "add_split_points" + ] + }, + "CopyBackup": { + "methods": [ + "copy_backup" + ] + }, + "CreateBackup": { + "methods": [ + "create_backup" + ] + }, + "CreateBackupSchedule": { + "methods": [ + "create_backup_schedule" + ] + }, + "CreateDatabase": { + "methods": [ + "create_database" + ] + }, + "DeleteBackup": { + "methods": [ + "delete_backup" + ] + }, + "DeleteBackupSchedule": { + "methods": [ + "delete_backup_schedule" + ] + }, + "DropDatabase": { + "methods": [ + "drop_database" + ] + }, + "GetBackup": { + "methods": [ + "get_backup" + ] + }, + "GetBackupSchedule": { + "methods": [ + "get_backup_schedule" + ] + }, + "GetDatabase": { + "methods": [ + "get_database" + ] + }, + "GetDatabaseDdl": { + "methods": [ + "get_database_ddl" + ] + }, + "GetIamPolicy": { + "methods": [ + "get_iam_policy" + ] + }, + "InternalUpdateGraphOperation": { + "methods": [ + "internal_update_graph_operation" + ] + }, + "ListBackupOperations": { + "methods": [ + "list_backup_operations" + ] + }, + "ListBackupSchedules": { + "methods": [ + "list_backup_schedules" + ] + }, + "ListBackups": { + "methods": [ + "list_backups" + ] + }, + "ListDatabaseOperations": { + "methods": [ + "list_database_operations" + ] + }, + "ListDatabaseRoles": { + "methods": [ + "list_database_roles" + ] + }, + "ListDatabases": { + "methods": [ + "list_databases" + ] + }, + "RestoreDatabase": { + "methods": [ + "restore_database" + ] + }, + "SetIamPolicy": { + "methods": [ + "set_iam_policy" + ] + }, + "TestIamPermissions": { + "methods": [ + 
"test_iam_permissions" + ] + }, + "UpdateBackup": { + "methods": [ + "update_backup" + ] + }, + "UpdateBackupSchedule": { + "methods": [ + "update_backup_schedule" + ] + }, + "UpdateDatabase": { + "methods": [ + "update_database" + ] + }, + "UpdateDatabaseDdl": { + "methods": [ + "update_database_ddl" + ] + } + } + }, + "grpc-async": { + "libraryClient": "DatabaseAdminAsyncClient", + "rpcs": { + "AddSplitPoints": { + "methods": [ + "add_split_points" + ] + }, + "CopyBackup": { + "methods": [ + "copy_backup" + ] + }, + "CreateBackup": { + "methods": [ + "create_backup" + ] + }, + "CreateBackupSchedule": { + "methods": [ + "create_backup_schedule" + ] + }, + "CreateDatabase": { + "methods": [ + "create_database" + ] + }, + "DeleteBackup": { + "methods": [ + "delete_backup" + ] + }, + "DeleteBackupSchedule": { + "methods": [ + "delete_backup_schedule" + ] + }, + "DropDatabase": { + "methods": [ + "drop_database" + ] + }, + "GetBackup": { + "methods": [ + "get_backup" + ] + }, + "GetBackupSchedule": { + "methods": [ + "get_backup_schedule" + ] + }, + "GetDatabase": { + "methods": [ + "get_database" + ] + }, + "GetDatabaseDdl": { + "methods": [ + "get_database_ddl" + ] + }, + "GetIamPolicy": { + "methods": [ + "get_iam_policy" + ] + }, + "InternalUpdateGraphOperation": { + "methods": [ + "internal_update_graph_operation" + ] + }, + "ListBackupOperations": { + "methods": [ + "list_backup_operations" + ] + }, + "ListBackupSchedules": { + "methods": [ + "list_backup_schedules" + ] + }, + "ListBackups": { + "methods": [ + "list_backups" + ] + }, + "ListDatabaseOperations": { + "methods": [ + "list_database_operations" + ] + }, + "ListDatabaseRoles": { + "methods": [ + "list_database_roles" + ] + }, + "ListDatabases": { + "methods": [ + "list_databases" + ] + }, + "RestoreDatabase": { + "methods": [ + "restore_database" + ] + }, + "SetIamPolicy": { + "methods": [ + "set_iam_policy" + ] + }, + "TestIamPermissions": { + "methods": [ + "test_iam_permissions" + ] + }, + "UpdateBackup": { + "methods": [ + "update_backup" + ] + }, + "UpdateBackupSchedule": { + "methods": [ + "update_backup_schedule" + ] + }, + "UpdateDatabase": { + "methods": [ + "update_database" + ] + }, + "UpdateDatabaseDdl": { + "methods": [ + "update_database_ddl" + ] + } + } + }, + "rest": { + "libraryClient": "DatabaseAdminClient", + "rpcs": { + "AddSplitPoints": { + "methods": [ + "add_split_points" + ] + }, + "CopyBackup": { + "methods": [ + "copy_backup" + ] + }, + "CreateBackup": { + "methods": [ + "create_backup" + ] + }, + "CreateBackupSchedule": { + "methods": [ + "create_backup_schedule" + ] + }, + "CreateDatabase": { + "methods": [ + "create_database" + ] + }, + "DeleteBackup": { + "methods": [ + "delete_backup" + ] + }, + "DeleteBackupSchedule": { + "methods": [ + "delete_backup_schedule" + ] + }, + "DropDatabase": { + "methods": [ + "drop_database" + ] + }, + "GetBackup": { + "methods": [ + "get_backup" + ] + }, + "GetBackupSchedule": { + "methods": [ + "get_backup_schedule" + ] + }, + "GetDatabase": { + "methods": [ + "get_database" + ] + }, + "GetDatabaseDdl": { + "methods": [ + "get_database_ddl" + ] + }, + "GetIamPolicy": { + "methods": [ + "get_iam_policy" + ] + }, + "InternalUpdateGraphOperation": { + "methods": [ + "internal_update_graph_operation" + ] + }, + "ListBackupOperations": { + "methods": [ + "list_backup_operations" + ] + }, + "ListBackupSchedules": { + "methods": [ + "list_backup_schedules" + ] + }, + "ListBackups": { + "methods": [ + "list_backups" + ] + }, + "ListDatabaseOperations": { + "methods": [ 
+ "list_database_operations" + ] + }, + "ListDatabaseRoles": { + "methods": [ + "list_database_roles" + ] + }, + "ListDatabases": { + "methods": [ + "list_databases" + ] + }, + "RestoreDatabase": { + "methods": [ + "restore_database" + ] + }, + "SetIamPolicy": { + "methods": [ + "set_iam_policy" + ] + }, + "TestIamPermissions": { + "methods": [ + "test_iam_permissions" + ] + }, + "UpdateBackup": { + "methods": [ + "update_backup" + ] + }, + "UpdateBackupSchedule": { + "methods": [ + "update_backup_schedule" + ] + }, + "UpdateDatabase": { + "methods": [ + "update_database" + ] + }, + "UpdateDatabaseDdl": { + "methods": [ + "update_database_ddl" + ] + } + } + } + } + } + } +} diff --git a/google/cloud/spanner_admin_database_v1/gapic_version.py b/google/cloud/spanner_admin_database_v1/gapic_version.py new file mode 100644 index 0000000000..fa3f4c040d --- /dev/null +++ b/google/cloud/spanner_admin_database_v1/gapic_version.py @@ -0,0 +1,16 @@ +# -*- coding: utf-8 -*- +# Copyright 2022 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +__version__ = "3.58.0" # {x-release-please-version} diff --git a/google/cloud/spanner_admin_database_v1/proto/backup.proto b/google/cloud/spanner_admin_database_v1/proto/backup.proto deleted file mode 100644 index d9b6fd74cd..0000000000 --- a/google/cloud/spanner_admin_database_v1/proto/backup.proto +++ /dev/null @@ -1,363 +0,0 @@ -// Copyright 2020 Google LLC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -syntax = "proto3"; - -package google.spanner.admin.database.v1; - -import "google/api/field_behavior.proto"; -import "google/api/resource.proto"; -import "google/longrunning/operations.proto"; -import "google/protobuf/field_mask.proto"; -import "google/protobuf/timestamp.proto"; -import "google/spanner/admin/database/v1/common.proto"; -import "google/api/annotations.proto"; - -option csharp_namespace = "Google.Cloud.Spanner.Admin.Database.V1"; -option go_package = "google.golang.org/genproto/googleapis/spanner/admin/database/v1;database"; -option java_multiple_files = true; -option java_outer_classname = "BackupProto"; -option java_package = "com.google.spanner.admin.database.v1"; -option php_namespace = "Google\\Cloud\\Spanner\\Admin\\Database\\V1"; - -// A backup of a Cloud Spanner database. -message Backup { - option (google.api.resource) = { - type: "spanner.googleapis.com/Backup" - pattern: "projects/{project}/instances/{instance}/backups/{backup}" - }; - - // Indicates the current state of the backup. - enum State { - // Not specified. 
-    STATE_UNSPECIFIED = 0;
-
-    // The pending backup is still being created. Operations on the
-    // backup may fail with `FAILED_PRECONDITION` in this state.
-    CREATING = 1;
-
-    // The backup is complete and ready for use.
-    READY = 2;
-  }
-
-  // Required for the [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] operation.
-  // Name of the database from which this backup was
-  // created. This needs to be in the same instance as the backup.
-  // Values are of the form
-  // `projects/<project>/instances/<instance>/databases/<database>`.
-  string database = 2;
-
-  // Required for the [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup]
-  // operation. The expiration time of the backup, with microseconds
-  // granularity that must be at least 6 hours and at most 366 days
-  // from the time the CreateBackup request is processed. Once the `expire_time`
-  // has passed, the backup is eligible to be automatically deleted by Cloud
-  // Spanner to free the resources used by the backup.
-  google.protobuf.Timestamp expire_time = 3;
-
-  // Output only for the [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] operation.
-  // Required for the [UpdateBackup][google.spanner.admin.database.v1.DatabaseAdmin.UpdateBackup] operation.
-  //
-  // A globally unique identifier for the backup which cannot be
-  // changed. Values are of the form
-  // `projects/<project>/instances/<instance>/backups/[a-z][a-z0-9_\-]*[a-z0-9]`
-  // The final segment of the name must be between 2 and 60 characters
-  // in length.
-  //
-  // The backup is stored in the location(s) specified in the instance
-  // configuration of the instance containing the backup, identified
-  // by the prefix of the backup name of the form
-  // `projects/<project>/instances/<instance>`.
-  string name = 1;
-
-  // Output only. The backup will contain an externally consistent
-  // copy of the database at the timestamp specified by
-  // `create_time`. `create_time` is approximately the time the
-  // [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] request is received.
-  google.protobuf.Timestamp create_time = 4 [(google.api.field_behavior) = OUTPUT_ONLY];
-
-  // Output only. Size of the backup in bytes.
-  int64 size_bytes = 5 [(google.api.field_behavior) = OUTPUT_ONLY];
-
-  // Output only. The current state of the backup.
-  State state = 6 [(google.api.field_behavior) = OUTPUT_ONLY];
-
-  // Output only. The names of the restored databases that reference the backup.
-  // The database names are of
-  // the form `projects/<project>/instances/<instance>/databases/<database>`.
-  // Referencing databases may exist in different instances. The existence of
-  // any referencing database prevents the backup from being deleted. When a
-  // restored database from the backup enters the `READY` state, the reference
-  // to the backup is removed.
-  repeated string referencing_databases = 7 [(google.api.field_behavior) = OUTPUT_ONLY];
-}
-
-// The request for [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup].
-message CreateBackupRequest {
-  // Required. The name of the instance in which the backup will be
-  // created. This must be the same instance that contains the database the
-  // backup will be created from. The backup will be stored in the
-  // location(s) specified in the instance configuration of this
-  // instance. Values are of the form
-  // `projects/<project>/instances/<instance>`.
-  string parent = 1 [
-    (google.api.field_behavior) = REQUIRED,
-    (google.api.resource_reference) = {
-      type: "spanner.googleapis.com/Instance"
-    }
-  ];
-
-  // Required. The id of the backup to be created. The `backup_id` appended to
-  // `parent` forms the full backup name of the form
-  // `projects/<project>/instances/<instance>/backups/<backup_id>`.
-  string backup_id = 2 [(google.api.field_behavior) = REQUIRED];
-
-  // Required. The backup to create.
-  Backup backup = 3 [(google.api.field_behavior) = REQUIRED];
-}
-
-// Metadata type for the operation returned by
-// [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup].
-message CreateBackupMetadata {
-  // The name of the backup being created.
-  string name = 1;
-
-  // The name of the database the backup is created from.
-  string database = 2;
-
-  // The progress of the
-  // [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] operation.
-  OperationProgress progress = 3;
-
-  // The time at which cancellation of this operation was received.
-  // [Operations.CancelOperation][google.longrunning.Operations.CancelOperation]
-  // starts asynchronous cancellation on a long-running operation. The server
-  // makes a best effort to cancel the operation, but success is not guaranteed.
-  // Clients can use
-  // [Operations.GetOperation][google.longrunning.Operations.GetOperation] or
-  // other methods to check whether the cancellation succeeded or whether the
-  // operation completed despite cancellation. On successful cancellation,
-  // the operation is not deleted; instead, it becomes an operation with
-  // an [Operation.error][] value with a [google.rpc.Status.code][google.rpc.Status.code] of 1,
-  // corresponding to `Code.CANCELLED`.
-  google.protobuf.Timestamp cancel_time = 4;
-}
-
-// The request for [UpdateBackup][google.spanner.admin.database.v1.DatabaseAdmin.UpdateBackup].
-message UpdateBackupRequest {
-  // Required. The backup to update. `backup.name`, and the fields to be updated
-  // as specified by `update_mask` are required. Other fields are ignored.
-  // Update is only supported for the following fields:
-  //  * `backup.expire_time`.
-  Backup backup = 1 [(google.api.field_behavior) = REQUIRED];
-
-  // Required. A mask specifying which fields (e.g. `expire_time`) in the
-  // Backup resource should be updated. This mask is relative to the Backup
-  // resource, not to the request message. The field mask must always be
-  // specified; this prevents any future fields from being erased accidentally
-  // by clients that do not know about them.
-  google.protobuf.FieldMask update_mask = 2 [(google.api.field_behavior) = REQUIRED];
-}
-
-// The request for [GetBackup][google.spanner.admin.database.v1.DatabaseAdmin.GetBackup].
-message GetBackupRequest {
-  // Required. Name of the backup.
-  // Values are of the form
-  // `projects/<project>/instances/<instance>/backups/<backup>`.
-  string name = 1 [
-    (google.api.field_behavior) = REQUIRED,
-    (google.api.resource_reference) = {
-      type: "spanner.googleapis.com/Backup"
-    }
-  ];
-}
-
-// The request for [DeleteBackup][google.spanner.admin.database.v1.DatabaseAdmin.DeleteBackup].
-message DeleteBackupRequest {
-  // Required. Name of the backup to delete.
-  // Values are of the form
-  // `projects/<project>/instances/<instance>/backups/<backup>`.
-  string name = 1 [
-    (google.api.field_behavior) = REQUIRED,
-    (google.api.resource_reference) = {
-      type: "spanner.googleapis.com/Backup"
-    }
-  ];
-}
-
-// The request for [ListBackups][google.spanner.admin.database.v1.DatabaseAdmin.ListBackups].
-message ListBackupsRequest {
-  // Required. The instance to list backups from. Values are of the
-  // form `projects/<project>/instances/<instance>`.
-  string parent = 1 [
-    (google.api.field_behavior) = REQUIRED,
-    (google.api.resource_reference) = {
-      type: "spanner.googleapis.com/Instance"
-    }
-  ];
-
-  // An expression that filters the list of returned backups.
-  //
-  // A filter expression consists of a field name, a comparison operator, and a
-  // value for filtering.
-  // The value must be a string, a number, or a boolean. The comparison operator
-  // must be one of: `<`, `>`, `<=`, `>=`, `!=`, `=`, or `:`.
-  // Colon `:` is the contains operator. Filter rules are not case sensitive.
-  //
-  // The following fields in the [Backup][google.spanner.admin.database.v1.Backup] are eligible for filtering:
-  //
-  //   * `name`
-  //   * `database`
-  //   * `state`
-  //   * `create_time` (and values are of the format YYYY-MM-DDTHH:MM:SSZ)
-  //   * `expire_time` (and values are of the format YYYY-MM-DDTHH:MM:SSZ)
-  //   * `size_bytes`
-  //
-  // You can combine multiple expressions by enclosing each expression in
-  // parentheses. By default, expressions are combined with AND logic, but
-  // you can specify AND, OR, and NOT logic explicitly.
-  //
-  // Here are a few examples:
-  //
-  //   * `name:Howl` - The backup's name contains the string "howl".
-  //   * `database:prod`
-  //       - The database's name contains the string "prod".
-  //   * `state:CREATING` - The backup is pending creation.
-  //   * `state:READY` - The backup is fully created and ready for use.
-  //   * `(name:howl) AND (create_time < \"2018-03-28T14:50:00Z\")`
-  //       - The backup name contains the string "howl" and `create_time`
-  //         of the backup is before 2018-03-28T14:50:00Z.
-  //   * `expire_time < \"2018-03-28T14:50:00Z\"`
-  //       - The backup `expire_time` is before 2018-03-28T14:50:00Z.
-  //   * `size_bytes > 10000000000` - The backup's size is greater than 10GB
-  string filter = 2;
-
-  // Number of backups to be returned in the response. If 0 or
-  // less, defaults to the server's maximum allowed page size.
-  int32 page_size = 3;
-
-  // If non-empty, `page_token` should contain a
-  // [next_page_token][google.spanner.admin.database.v1.ListBackupsResponse.next_page_token] from a
-  // previous [ListBackupsResponse][google.spanner.admin.database.v1.ListBackupsResponse] to the same `parent` and with the same
-  // `filter`.
-  string page_token = 4;
-}
-
-// The response for [ListBackups][google.spanner.admin.database.v1.DatabaseAdmin.ListBackups].
-message ListBackupsResponse {
-  // The list of matching backups. Backups returned are ordered by `create_time`
-  // in descending order, starting from the most recent `create_time`.
-  repeated Backup backups = 1;
-
-  // `next_page_token` can be sent in a subsequent
-  // [ListBackups][google.spanner.admin.database.v1.DatabaseAdmin.ListBackups] call to fetch more
-  // of the matching backups.
-  string next_page_token = 2;
-}
-
-// The request for
-// [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations].
-message ListBackupOperationsRequest {
-  // Required. The instance of the backup operations. Values are of
-  // the form `projects/<project>/instances/<instance>`.
-  string parent = 1 [
-    (google.api.field_behavior) = REQUIRED,
-    (google.api.resource_reference) = {
-      type: "spanner.googleapis.com/Instance"
-    }
-  ];
-
-  // An expression that filters the list of returned backup operations.
-  //
-  // A filter expression consists of a field name, a
-  // comparison operator, and a value for filtering.
-  // The value must be a string, a number, or a boolean. The comparison operator
-  // must be one of: `<`, `>`, `<=`, `>=`, `!=`, `=`, or `:`.
-  // Colon `:` is the contains operator. Filter rules are not case sensitive.
-  //
-  // The following fields in the [operation][google.longrunning.Operation]
-  // are eligible for filtering:
-  //
-  //   * `name` - The name of the long-running operation
-  //   * `done` - False if the operation is in progress, else true.
-  //   * `metadata.@type` - the type of metadata. For example, the type string
-  //      for [CreateBackupMetadata][google.spanner.admin.database.v1.CreateBackupMetadata] is
-  //      `type.googleapis.com/google.spanner.admin.database.v1.CreateBackupMetadata`.
-  //   * `metadata.<field_name>` - any field in metadata.value.
-  //   * `error` - Error associated with the long-running operation.
-  //   * `response.@type` - the type of response.
-  //   * `response.<field_name>` - any field in response.value.
-  //
-  // You can combine multiple expressions by enclosing each expression in
-  // parentheses. By default, expressions are combined with AND logic, but
-  // you can specify AND, OR, and NOT logic explicitly.
-  //
-  // Here are a few examples:
-  //
-  //   * `done:true` - The operation is complete.
-  //   * `metadata.database:prod` - The database the backup was taken from has
-  //      a name containing the string "prod".
-  //   * `(metadata.@type=type.googleapis.com/google.spanner.admin.database.v1.CreateBackupMetadata) AND`
-  //     `(metadata.name:howl) AND`
-  //     `(metadata.progress.start_time < \"2018-03-28T14:50:00Z\") AND`
-  //     `(error:*)` - Returns operations where:
-  //     * The operation's metadata type is [CreateBackupMetadata][google.spanner.admin.database.v1.CreateBackupMetadata].
-  //     * The backup name contains the string "howl".
-  //     * The operation started before 2018-03-28T14:50:00Z.
-  //     * The operation resulted in an error.
-  string filter = 2;
-
-  // Number of operations to be returned in the response. If 0 or
-  // less, defaults to the server's maximum allowed page size.
-  int32 page_size = 3;
-
-  // If non-empty, `page_token` should contain a
-  // [next_page_token][google.spanner.admin.database.v1.ListBackupOperationsResponse.next_page_token]
-  // from a previous [ListBackupOperationsResponse][google.spanner.admin.database.v1.ListBackupOperationsResponse] to the
-  // same `parent` and with the same `filter`.
-  string page_token = 4;
-}
-
-// The response for
-// [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations].
-message ListBackupOperationsResponse {
-  // The list of matching backup [long-running
-  // operations][google.longrunning.Operation]. Each operation's name will be
-  // prefixed by the backup's name and the operation's
-  // [metadata][google.longrunning.Operation.metadata] will be of type
-  // [CreateBackupMetadata][google.spanner.admin.database.v1.CreateBackupMetadata]. Operations returned include those that are
-  // pending or have completed/failed/canceled within the last 7 days.
-  // Operations returned are ordered by
-  // `operation.metadata.value.progress.start_time` in descending order starting
-  // from the most recently started operation.
-  repeated google.longrunning.Operation operations = 1;
-
-  // `next_page_token` can be sent in a subsequent
-  // [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations]
-  // call to fetch more of the matching metadata.
-  string next_page_token = 2;
-}
-
-// Information about a backup.
-message BackupInfo {
-  // Name of the backup.
-  string backup = 1;
-
-  // The backup contains an externally consistent copy of `source_database` at
-  // the timestamp specified by `create_time`.
-  google.protobuf.Timestamp create_time = 2;
-
-  // Name of the database the backup was created from.
-  string source_database = 3;
-}
diff --git a/google/cloud/spanner_admin_database_v1/proto/backup_pb2.py b/google/cloud/spanner_admin_database_v1/proto/backup_pb2.py
deleted file mode 100644
index edc596bd94..0000000000
--- a/google/cloud/spanner_admin_database_v1/proto/backup_pb2.py
+++ /dev/null
@@ -1,1379 +0,0 @@
-# -*- coding: utf-8 -*-
-# Generated by the protocol buffer compiler. DO NOT EDIT!
-# source: google/cloud/spanner/admin/database_v1/proto/backup.proto - -import sys - -_b = sys.version_info[0] < 3 and (lambda x: x) or (lambda x: x.encode("latin1")) -from google.protobuf import descriptor as _descriptor -from google.protobuf import message as _message -from google.protobuf import reflection as _reflection -from google.protobuf import symbol_database as _symbol_database - -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - -from google.api import field_behavior_pb2 as google_dot_api_dot_field__behavior__pb2 -from google.api import resource_pb2 as google_dot_api_dot_resource__pb2 -from google.longrunning import ( - operations_pb2 as google_dot_longrunning_dot_operations__pb2, -) -from google.protobuf import field_mask_pb2 as google_dot_protobuf_dot_field__mask__pb2 -from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2 -from google.cloud.spanner_admin_database_v1.proto import ( - common_pb2 as google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_common__pb2, -) -from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2 - - -DESCRIPTOR = _descriptor.FileDescriptor( - name="google/cloud/spanner/admin/database_v1/proto/backup.proto", - package="google.spanner.admin.database.v1", - syntax="proto3", - serialized_options=_b( - "\n$com.google.spanner.admin.database.v1B\013BackupProtoP\001ZHgoogle.golang.org/genproto/googleapis/spanner/admin/database/v1;database\252\002&Google.Cloud.Spanner.Admin.Database.V1\312\002&Google\\Cloud\\Spanner\\Admin\\Database\\V1" - ), - serialized_pb=_b( - '\n9google/cloud/spanner/admin/database_v1/proto/backup.proto\x12 google.spanner.admin.database.v1\x1a\x1fgoogle/api/field_behavior.proto\x1a\x19google/api/resource.proto\x1a#google/longrunning/operations.proto\x1a google/protobuf/field_mask.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x39google/cloud/spanner/admin/database_v1/proto/common.proto\x1a\x1cgoogle/api/annotations.proto"\xa7\x03\n\x06\x42\x61\x63kup\x12\x10\n\x08\x64\x61tabase\x18\x02 \x01(\t\x12/\n\x0b\x65xpire_time\x18\x03 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x34\n\x0b\x63reate_time\x18\x04 \x01(\x0b\x32\x1a.google.protobuf.TimestampB\x03\xe0\x41\x03\x12\x17\n\nsize_bytes\x18\x05 \x01(\x03\x42\x03\xe0\x41\x03\x12\x42\n\x05state\x18\x06 \x01(\x0e\x32..google.spanner.admin.database.v1.Backup.StateB\x03\xe0\x41\x03\x12"\n\x15referencing_databases\x18\x07 \x03(\tB\x03\xe0\x41\x03"7\n\x05State\x12\x15\n\x11STATE_UNSPECIFIED\x10\x00\x12\x0c\n\x08\x43REATING\x10\x01\x12\t\n\x05READY\x10\x02:\\\xea\x41Y\n\x1dspanner.googleapis.com/Backup\x12\x38projects/{project}/instances/{instance}/backups/{backup}"\xa5\x01\n\x13\x43reateBackupRequest\x12\x37\n\x06parent\x18\x01 \x01(\tB\'\xe0\x41\x02\xfa\x41!\n\x1fspanner.googleapis.com/Instance\x12\x16\n\tbackup_id\x18\x02 \x01(\tB\x03\xe0\x41\x02\x12=\n\x06\x62\x61\x63kup\x18\x03 \x01(\x0b\x32(.google.spanner.admin.database.v1.BackupB\x03\xe0\x41\x02"\xae\x01\n\x14\x43reateBackupMetadata\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x10\n\x08\x64\x61tabase\x18\x02 \x01(\t\x12\x45\n\x08progress\x18\x03 \x01(\x0b\x32\x33.google.spanner.admin.database.v1.OperationProgress\x12/\n\x0b\x63\x61ncel_time\x18\x04 \x01(\x0b\x32\x1a.google.protobuf.Timestamp"\x8a\x01\n\x13UpdateBackupRequest\x12=\n\x06\x62\x61\x63kup\x18\x01 \x01(\x0b\x32(.google.spanner.admin.database.v1.BackupB\x03\xe0\x41\x02\x12\x34\n\x0bupdate_mask\x18\x02 
\x01(\x0b\x32\x1a.google.protobuf.FieldMaskB\x03\xe0\x41\x02"G\n\x10GetBackupRequest\x12\x33\n\x04name\x18\x01 \x01(\tB%\xe0\x41\x02\xfa\x41\x1f\n\x1dspanner.googleapis.com/Backup"J\n\x13\x44\x65leteBackupRequest\x12\x33\n\x04name\x18\x01 \x01(\tB%\xe0\x41\x02\xfa\x41\x1f\n\x1dspanner.googleapis.com/Backup"\x84\x01\n\x12ListBackupsRequest\x12\x37\n\x06parent\x18\x01 \x01(\tB\'\xe0\x41\x02\xfa\x41!\n\x1fspanner.googleapis.com/Instance\x12\x0e\n\x06\x66ilter\x18\x02 \x01(\t\x12\x11\n\tpage_size\x18\x03 \x01(\x05\x12\x12\n\npage_token\x18\x04 \x01(\t"i\n\x13ListBackupsResponse\x12\x39\n\x07\x62\x61\x63kups\x18\x01 \x03(\x0b\x32(.google.spanner.admin.database.v1.Backup\x12\x17\n\x0fnext_page_token\x18\x02 \x01(\t"\x8d\x01\n\x1bListBackupOperationsRequest\x12\x37\n\x06parent\x18\x01 \x01(\tB\'\xe0\x41\x02\xfa\x41!\n\x1fspanner.googleapis.com/Instance\x12\x0e\n\x06\x66ilter\x18\x02 \x01(\t\x12\x11\n\tpage_size\x18\x03 \x01(\x05\x12\x12\n\npage_token\x18\x04 \x01(\t"j\n\x1cListBackupOperationsResponse\x12\x31\n\noperations\x18\x01 \x03(\x0b\x32\x1d.google.longrunning.Operation\x12\x17\n\x0fnext_page_token\x18\x02 \x01(\t"f\n\nBackupInfo\x12\x0e\n\x06\x62\x61\x63kup\x18\x01 \x01(\t\x12/\n\x0b\x63reate_time\x18\x02 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x17\n\x0fsource_database\x18\x03 \x01(\tB\xd1\x01\n$com.google.spanner.admin.database.v1B\x0b\x42\x61\x63kupProtoP\x01ZHgoogle.golang.org/genproto/googleapis/spanner/admin/database/v1;database\xaa\x02&Google.Cloud.Spanner.Admin.Database.V1\xca\x02&Google\\Cloud\\Spanner\\Admin\\Database\\V1b\x06proto3' - ), - dependencies=[ - google_dot_api_dot_field__behavior__pb2.DESCRIPTOR, - google_dot_api_dot_resource__pb2.DESCRIPTOR, - google_dot_longrunning_dot_operations__pb2.DESCRIPTOR, - google_dot_protobuf_dot_field__mask__pb2.DESCRIPTOR, - google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR, - google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_common__pb2.DESCRIPTOR, - google_dot_api_dot_annotations__pb2.DESCRIPTOR, - ], -) - - -_BACKUP_STATE = _descriptor.EnumDescriptor( - name="State", - full_name="google.spanner.admin.database.v1.Backup.State", - filename=None, - file=DESCRIPTOR, - values=[ - _descriptor.EnumValueDescriptor( - name="STATE_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - ), - _descriptor.EnumValueDescriptor( - name="CREATING", index=1, number=1, serialized_options=None, type=None - ), - _descriptor.EnumValueDescriptor( - name="READY", index=2, number=2, serialized_options=None, type=None - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=623, - serialized_end=678, -) -_sym_db.RegisterEnumDescriptor(_BACKUP_STATE) - - -_BACKUP = _descriptor.Descriptor( - name="Backup", - full_name="google.spanner.admin.database.v1.Backup", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="database", - full_name="google.spanner.admin.database.v1.Backup.database", - index=0, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="expire_time", - full_name="google.spanner.admin.database.v1.Backup.expire_time", - index=1, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - 
containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="name", - full_name="google.spanner.admin.database.v1.Backup.name", - index=2, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="create_time", - full_name="google.spanner.admin.database.v1.Backup.create_time", - index=3, - number=4, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b("\340A\003"), - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="size_bytes", - full_name="google.spanner.admin.database.v1.Backup.size_bytes", - index=4, - number=5, - type=3, - cpp_type=2, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b("\340A\003"), - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="state", - full_name="google.spanner.admin.database.v1.Backup.state", - index=5, - number=6, - type=14, - cpp_type=8, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b("\340A\003"), - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="referencing_databases", - full_name="google.spanner.admin.database.v1.Backup.referencing_databases", - index=6, - number=7, - type=9, - cpp_type=9, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b("\340A\003"), - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[_BACKUP_STATE], - serialized_options=_b( - "\352AY\n\035spanner.googleapis.com/Backup\0228projects/{project}/instances/{instance}/backups/{backup}" - ), - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=349, - serialized_end=772, -) - - -_CREATEBACKUPREQUEST = _descriptor.Descriptor( - name="CreateBackupRequest", - full_name="google.spanner.admin.database.v1.CreateBackupRequest", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="parent", - full_name="google.spanner.admin.database.v1.CreateBackupRequest.parent", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b( - "\340A\002\372A!\n\037spanner.googleapis.com/Instance" - ), - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="backup_id", - full_name="google.spanner.admin.database.v1.CreateBackupRequest.backup_id", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b("\340A\002"), - file=DESCRIPTOR, - ), - 
_descriptor.FieldDescriptor( - name="backup", - full_name="google.spanner.admin.database.v1.CreateBackupRequest.backup", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b("\340A\002"), - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=775, - serialized_end=940, -) - - -_CREATEBACKUPMETADATA = _descriptor.Descriptor( - name="CreateBackupMetadata", - full_name="google.spanner.admin.database.v1.CreateBackupMetadata", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="name", - full_name="google.spanner.admin.database.v1.CreateBackupMetadata.name", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="database", - full_name="google.spanner.admin.database.v1.CreateBackupMetadata.database", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="progress", - full_name="google.spanner.admin.database.v1.CreateBackupMetadata.progress", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="cancel_time", - full_name="google.spanner.admin.database.v1.CreateBackupMetadata.cancel_time", - index=3, - number=4, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=943, - serialized_end=1117, -) - - -_UPDATEBACKUPREQUEST = _descriptor.Descriptor( - name="UpdateBackupRequest", - full_name="google.spanner.admin.database.v1.UpdateBackupRequest", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="backup", - full_name="google.spanner.admin.database.v1.UpdateBackupRequest.backup", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b("\340A\002"), - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="update_mask", - full_name="google.spanner.admin.database.v1.UpdateBackupRequest.update_mask", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, 
- enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b("\340A\002"), - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1120, - serialized_end=1258, -) - - -_GETBACKUPREQUEST = _descriptor.Descriptor( - name="GetBackupRequest", - full_name="google.spanner.admin.database.v1.GetBackupRequest", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="name", - full_name="google.spanner.admin.database.v1.GetBackupRequest.name", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b( - "\340A\002\372A\037\n\035spanner.googleapis.com/Backup" - ), - file=DESCRIPTOR, - ) - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1260, - serialized_end=1331, -) - - -_DELETEBACKUPREQUEST = _descriptor.Descriptor( - name="DeleteBackupRequest", - full_name="google.spanner.admin.database.v1.DeleteBackupRequest", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="name", - full_name="google.spanner.admin.database.v1.DeleteBackupRequest.name", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b( - "\340A\002\372A\037\n\035spanner.googleapis.com/Backup" - ), - file=DESCRIPTOR, - ) - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1333, - serialized_end=1407, -) - - -_LISTBACKUPSREQUEST = _descriptor.Descriptor( - name="ListBackupsRequest", - full_name="google.spanner.admin.database.v1.ListBackupsRequest", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="parent", - full_name="google.spanner.admin.database.v1.ListBackupsRequest.parent", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b( - "\340A\002\372A!\n\037spanner.googleapis.com/Instance" - ), - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="filter", - full_name="google.spanner.admin.database.v1.ListBackupsRequest.filter", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="page_size", - full_name="google.spanner.admin.database.v1.ListBackupsRequest.page_size", - index=2, - number=3, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - 
is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="page_token", - full_name="google.spanner.admin.database.v1.ListBackupsRequest.page_token", - index=3, - number=4, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1410, - serialized_end=1542, -) - - -_LISTBACKUPSRESPONSE = _descriptor.Descriptor( - name="ListBackupsResponse", - full_name="google.spanner.admin.database.v1.ListBackupsResponse", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="backups", - full_name="google.spanner.admin.database.v1.ListBackupsResponse.backups", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="next_page_token", - full_name="google.spanner.admin.database.v1.ListBackupsResponse.next_page_token", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1544, - serialized_end=1649, -) - - -_LISTBACKUPOPERATIONSREQUEST = _descriptor.Descriptor( - name="ListBackupOperationsRequest", - full_name="google.spanner.admin.database.v1.ListBackupOperationsRequest", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="parent", - full_name="google.spanner.admin.database.v1.ListBackupOperationsRequest.parent", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b( - "\340A\002\372A!\n\037spanner.googleapis.com/Instance" - ), - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="filter", - full_name="google.spanner.admin.database.v1.ListBackupOperationsRequest.filter", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="page_size", - full_name="google.spanner.admin.database.v1.ListBackupOperationsRequest.page_size", - index=2, - number=3, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - 
name="page_token", - full_name="google.spanner.admin.database.v1.ListBackupOperationsRequest.page_token", - index=3, - number=4, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1652, - serialized_end=1793, -) - - -_LISTBACKUPOPERATIONSRESPONSE = _descriptor.Descriptor( - name="ListBackupOperationsResponse", - full_name="google.spanner.admin.database.v1.ListBackupOperationsResponse", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="operations", - full_name="google.spanner.admin.database.v1.ListBackupOperationsResponse.operations", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="next_page_token", - full_name="google.spanner.admin.database.v1.ListBackupOperationsResponse.next_page_token", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1795, - serialized_end=1901, -) - - -_BACKUPINFO = _descriptor.Descriptor( - name="BackupInfo", - full_name="google.spanner.admin.database.v1.BackupInfo", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="backup", - full_name="google.spanner.admin.database.v1.BackupInfo.backup", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="create_time", - full_name="google.spanner.admin.database.v1.BackupInfo.create_time", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="source_database", - full_name="google.spanner.admin.database.v1.BackupInfo.source_database", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1903, - serialized_end=2005, -) - -_BACKUP.fields_by_name[ - 
"expire_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_BACKUP.fields_by_name[ - "create_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_BACKUP.fields_by_name["state"].enum_type = _BACKUP_STATE -_BACKUP_STATE.containing_type = _BACKUP -_CREATEBACKUPREQUEST.fields_by_name["backup"].message_type = _BACKUP -_CREATEBACKUPMETADATA.fields_by_name[ - "progress" -].message_type = ( - google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_common__pb2._OPERATIONPROGRESS -) -_CREATEBACKUPMETADATA.fields_by_name[ - "cancel_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_UPDATEBACKUPREQUEST.fields_by_name["backup"].message_type = _BACKUP -_UPDATEBACKUPREQUEST.fields_by_name[ - "update_mask" -].message_type = google_dot_protobuf_dot_field__mask__pb2._FIELDMASK -_LISTBACKUPSRESPONSE.fields_by_name["backups"].message_type = _BACKUP -_LISTBACKUPOPERATIONSRESPONSE.fields_by_name[ - "operations" -].message_type = google_dot_longrunning_dot_operations__pb2._OPERATION -_BACKUPINFO.fields_by_name[ - "create_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -DESCRIPTOR.message_types_by_name["Backup"] = _BACKUP -DESCRIPTOR.message_types_by_name["CreateBackupRequest"] = _CREATEBACKUPREQUEST -DESCRIPTOR.message_types_by_name["CreateBackupMetadata"] = _CREATEBACKUPMETADATA -DESCRIPTOR.message_types_by_name["UpdateBackupRequest"] = _UPDATEBACKUPREQUEST -DESCRIPTOR.message_types_by_name["GetBackupRequest"] = _GETBACKUPREQUEST -DESCRIPTOR.message_types_by_name["DeleteBackupRequest"] = _DELETEBACKUPREQUEST -DESCRIPTOR.message_types_by_name["ListBackupsRequest"] = _LISTBACKUPSREQUEST -DESCRIPTOR.message_types_by_name["ListBackupsResponse"] = _LISTBACKUPSRESPONSE -DESCRIPTOR.message_types_by_name[ - "ListBackupOperationsRequest" -] = _LISTBACKUPOPERATIONSREQUEST -DESCRIPTOR.message_types_by_name[ - "ListBackupOperationsResponse" -] = _LISTBACKUPOPERATIONSRESPONSE -DESCRIPTOR.message_types_by_name["BackupInfo"] = _BACKUPINFO -_sym_db.RegisterFileDescriptor(DESCRIPTOR) - -Backup = _reflection.GeneratedProtocolMessageType( - "Backup", - (_message.Message,), - dict( - DESCRIPTOR=_BACKUP, - __module__="google.cloud.spanner.admin.database_v1.proto.backup_pb2", - __doc__="""A backup of a Cloud Spanner database. - - - Attributes: - database: - Required for the [CreateBackup][google.spanner.admin.database. - v1.DatabaseAdmin.CreateBackup] operation. Name of the database - from which this backup was created. This needs to be in the - same instance as the backup. Values are of the form ``projects - //instances//databases/``. - expire_time: - Required for the [CreateBackup][google.spanner.admin.database. - v1.DatabaseAdmin.CreateBackup] operation. The expiration time - of the backup, with microseconds granularity that must be at - least 6 hours and at most 366 days from the time the - CreateBackup request is processed. Once the ``expire_time`` - has passed, the backup is eligible to be automatically deleted - by Cloud Spanner to free the resources used by the backup. - name: - Output only for the [CreateBackup][google.spanner.admin.databa - se.v1.DatabaseAdmin.CreateBackup] operation. Required for the - [UpdateBackup][google.spanner.admin.database.v1.DatabaseAdmin. - UpdateBackup] operation. A globally unique identifier for the - backup which cannot be changed. 
Values are of the form ``proje - cts//instances//backups/[a-z][a-z0-9_\-]*[a - -z0-9]`` The final segment of the name must be between 2 and - 60 characters in length. The backup is stored in the - location(s) specified in the instance configuration of the - instance containing the backup, identified by the prefix of - the backup name of the form - ``projects//instances/``. - create_time: - Output only. The backup will contain an externally consistent - copy of the database at the timestamp specified by - ``create_time``. ``create_time`` is approximately the time the - [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin. - CreateBackup] request is received. - size_bytes: - Output only. Size of the backup in bytes. - state: - Output only. The current state of the backup. - referencing_databases: - Output only. The names of the restored databases that - reference the backup. The database names are of the form ``pro - jects//instances//databases/``. - Referencing databases may exist in different instances. The - existence of any referencing database prevents the backup from - being deleted. When a restored database from the backup enters - the ``READY`` state, the reference to the backup is removed. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.Backup) - ), -) -_sym_db.RegisterMessage(Backup) - -CreateBackupRequest = _reflection.GeneratedProtocolMessageType( - "CreateBackupRequest", - (_message.Message,), - dict( - DESCRIPTOR=_CREATEBACKUPREQUEST, - __module__="google.cloud.spanner.admin.database_v1.proto.backup_pb2", - __doc__="""The request for - [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup]. - - - Attributes: - parent: - Required. The name of the instance in which the backup will be - created. This must be the same instance that contains the - database the backup will be created from. The backup will be - stored in the location(s) specified in the instance - configuration of this instance. Values are of the form - ``projects//instances/``. - backup_id: - Required. The id of the backup to be created. The - ``backup_id`` appended to ``parent`` forms the full backup - name of the form ``projects//instances//bac - kups/``. - backup: - Required. The backup to create. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.CreateBackupRequest) - ), -) -_sym_db.RegisterMessage(CreateBackupRequest) - -CreateBackupMetadata = _reflection.GeneratedProtocolMessageType( - "CreateBackupMetadata", - (_message.Message,), - dict( - DESCRIPTOR=_CREATEBACKUPMETADATA, - __module__="google.cloud.spanner.admin.database_v1.proto.backup_pb2", - __doc__="""Metadata type for the operation returned by - [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup]. - - - Attributes: - name: - The name of the backup being created. - database: - The name of the database the backup is created from. - progress: - The progress of the [CreateBackup][google.spanner.admin.databa - se.v1.DatabaseAdmin.CreateBackup] operation. - cancel_time: - The time at which cancellation of this operation was received. - [Operations.CancelOperation][google.longrunning.Operations.Can - celOperation] starts asynchronous cancellation on a long- - running operation. The server makes a best effort to cancel - the operation, but success is not guaranteed. 
Clients can use - [Operations.GetOperation][google.longrunning.Operations.GetOpe - ration] or other methods to check whether the cancellation - succeeded or whether the operation completed despite - cancellation. On successful cancellation, the operation is not - deleted; instead, it becomes an operation with an - [Operation.error][] value with a - [google.rpc.Status.code][google.rpc.Status.code] of 1, - corresponding to ``Code.CANCELLED``. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.CreateBackupMetadata) - ), -) -_sym_db.RegisterMessage(CreateBackupMetadata) - -UpdateBackupRequest = _reflection.GeneratedProtocolMessageType( - "UpdateBackupRequest", - (_message.Message,), - dict( - DESCRIPTOR=_UPDATEBACKUPREQUEST, - __module__="google.cloud.spanner.admin.database_v1.proto.backup_pb2", - __doc__="""The request for - [UpdateBackup][google.spanner.admin.database.v1.DatabaseAdmin.UpdateBackup]. - - - Attributes: - backup: - Required. The backup to update. ``backup.name``, and the - fields to be updated as specified by ``update_mask`` are - required. Other fields are ignored. Update is only supported - for the following fields: \* ``backup.expire_time``. - update_mask: - Required. A mask specifying which fields (e.g. - ``expire_time``) in the Backup resource should be updated. - This mask is relative to the Backup resource, not to the - request message. The field mask must always be specified; this - prevents any future fields from being erased accidentally by - clients that do not know about them. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.UpdateBackupRequest) - ), -) -_sym_db.RegisterMessage(UpdateBackupRequest) - -GetBackupRequest = _reflection.GeneratedProtocolMessageType( - "GetBackupRequest", - (_message.Message,), - dict( - DESCRIPTOR=_GETBACKUPREQUEST, - __module__="google.cloud.spanner.admin.database_v1.proto.backup_pb2", - __doc__="""The request for - [GetBackup][google.spanner.admin.database.v1.DatabaseAdmin.GetBackup]. - - - Attributes: - name: - Required. Name of the backup. Values are of the form - ``projects//instances//backups/``. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.GetBackupRequest) - ), -) -_sym_db.RegisterMessage(GetBackupRequest) - -DeleteBackupRequest = _reflection.GeneratedProtocolMessageType( - "DeleteBackupRequest", - (_message.Message,), - dict( - DESCRIPTOR=_DELETEBACKUPREQUEST, - __module__="google.cloud.spanner.admin.database_v1.proto.backup_pb2", - __doc__="""The request for - [DeleteBackup][google.spanner.admin.database.v1.DatabaseAdmin.DeleteBackup]. - - - Attributes: - name: - Required. Name of the backup to delete. Values are of the form - ``projects//instances//backups/``. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.DeleteBackupRequest) - ), -) -_sym_db.RegisterMessage(DeleteBackupRequest) - -ListBackupsRequest = _reflection.GeneratedProtocolMessageType( - "ListBackupsRequest", - (_message.Message,), - dict( - DESCRIPTOR=_LISTBACKUPSREQUEST, - __module__="google.cloud.spanner.admin.database_v1.proto.backup_pb2", - __doc__="""The request for - [ListBackups][google.spanner.admin.database.v1.DatabaseAdmin.ListBackups]. - - - Attributes: - parent: - Required. The instance to list backups from. Values are of the - form ``projects//instances/``. - filter: - An expression that filters the list of returned backups. A - filter expression consists of a field name, a comparison - operator, and a value for filtering. 
The value must be a - string, a number, or a boolean. The comparison operator must - be one of: ``<``, ``>``, ``<=``, ``>=``, ``!=``, ``=``, or - ``:``. Colon ``:`` is the contains operator. Filter rules are - not case sensitive. The following fields in the - [Backup][google.spanner.admin.database.v1.Backup] are eligible - for filtering: - ``name`` - ``database`` - ``state`` - - ``create_time`` (and values are of the format YYYY-MM- - DDTHH:MM:SSZ) - ``expire_time`` (and values are of the format - YYYY-MM-DDTHH:MM:SSZ) - ``size_bytes`` You can combine - multiple expressions by enclosing each expression in - parentheses. By default, expressions are combined with AND - logic, but you can specify AND, OR, and NOT logic explicitly. - Here are a few examples: - ``name:Howl`` - The backup's name - contains the string "howl". - ``database:prod`` - The - database's name contains the string "prod". - - ``state:CREATING`` - The backup is pending creation. - - ``state:READY`` - The backup is fully created and ready for - use. - ``(name:howl) AND (create_time < - \"2018-03-28T14:50:00Z\")`` - The backup name contains the - string "howl" and ``create_time`` of the backup is before - 2018-03-28T14:50:00Z. - ``expire_time < - \"2018-03-28T14:50:00Z\"`` - The backup ``expire_time`` is - before 2018-03-28T14:50:00Z. - ``size_bytes > 10000000000`` - - The backup's size is greater than 10GB - page_size: - Number of backups to be returned in the response. If 0 or - less, defaults to the server's maximum allowed page size. - page_token: - If non-empty, ``page_token`` should contain a [next\_page\_tok - en][google.spanner.admin.database.v1.ListBackupsResponse.next\ - _page\_token] from a previous [ListBackupsResponse][google.spa - nner.admin.database.v1.ListBackupsResponse] to the same - ``parent`` and with the same ``filter``. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.ListBackupsRequest) - ), -) -_sym_db.RegisterMessage(ListBackupsRequest) - -ListBackupsResponse = _reflection.GeneratedProtocolMessageType( - "ListBackupsResponse", - (_message.Message,), - dict( - DESCRIPTOR=_LISTBACKUPSRESPONSE, - __module__="google.cloud.spanner.admin.database_v1.proto.backup_pb2", - __doc__="""The response for - [ListBackups][google.spanner.admin.database.v1.DatabaseAdmin.ListBackups]. - - - Attributes: - backups: - The list of matching backups. Backups returned are ordered by - ``create_time`` in descending order, starting from the most - recent ``create_time``. - next_page_token: - \ ``next_page_token`` can be sent in a subsequent [ListBackups - ][google.spanner.admin.database.v1.DatabaseAdmin.ListBackups] - call to fetch more of the matching backups. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.ListBackupsResponse) - ), -) -_sym_db.RegisterMessage(ListBackupsResponse) - -ListBackupOperationsRequest = _reflection.GeneratedProtocolMessageType( - "ListBackupOperationsRequest", - (_message.Message,), - dict( - DESCRIPTOR=_LISTBACKUPOPERATIONSREQUEST, - __module__="google.cloud.spanner.admin.database_v1.proto.backup_pb2", - __doc__="""The request for - [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations]. - - - Attributes: - parent: - Required. The instance of the backup operations. Values are of - the form ``projects//instances/``. - filter: - An expression that filters the list of returned backup - operations. A filter expression consists of a field name, a - comparison operator, and a value for filtering. 
The value must - be a string, a number, or a boolean. The comparison operator - must be one of: ``<``, ``>``, ``<=``, ``>=``, ``!=``, ``=``, - or ``:``. Colon ``:`` is the contains operator. Filter rules - are not case sensitive. The following fields in the - [operation][google.longrunning.Operation] are eligible for - filtering: - ``name`` - The name of the long-running - operation - ``done`` - False if the operation is in progress, - else true. - ``metadata.@type`` - the type of metadata. For - example, the type string for [CreateBackupMetadata][goog - le.spanner.admin.database.v1.CreateBackupMetadata] is `` - type.googleapis.com/google.spanner.admin.database.v1.CreateBac - kupMetadata``. - ``metadata.`` - any field in - metadata.value. - ``error`` - Error associated with the long- - running operation. - ``response.@type`` - the type of - response. - ``response.`` - any field in - response.value. You can combine multiple expressions by - enclosing each expression in parentheses. By default, - expressions are combined with AND logic, but you can specify - AND, OR, and NOT logic explicitly. Here are a few examples: - - ``done:true`` - The operation is complete. - - ``metadata.database:prod`` - The database the backup was taken - from has a name containing the string "prod". - ``(metadat - a.@type=type.googleapis.com/google.spanner.admin.database.v1.C - reateBackupMetadata) AND`` ``(metadata.name:howl) AND`` - ``(metadata.progress.start_time < \"2018-03-28T14:50:00Z\") - AND`` ``(error:*)`` - Returns operations where: - The - operation's metadata type is [CreateBackupMetadata][goog - le.spanner.admin.database.v1.CreateBackupMetadata]. - The - backup name contains the string "howl". - The operation - started before 2018-03-28T14:50:00Z. - The operation - resulted in an error. - page_size: - Number of operations to be returned in the response. If 0 or - less, defaults to the server's maximum allowed page size. - page_token: - If non-empty, ``page_token`` should contain a [next\_page\_tok - en][google.spanner.admin.database.v1.ListBackupOperationsRespo - nse.next\_page\_token] from a previous [ListBackupOperationsRe - sponse][google.spanner.admin.database.v1.ListBackupOperationsR - esponse] to the same ``parent`` and with the same ``filter``. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.ListBackupOperationsRequest) - ), -) -_sym_db.RegisterMessage(ListBackupOperationsRequest) - -ListBackupOperationsResponse = _reflection.GeneratedProtocolMessageType( - "ListBackupOperationsResponse", - (_message.Message,), - dict( - DESCRIPTOR=_LISTBACKUPOPERATIONSRESPONSE, - __module__="google.cloud.spanner.admin.database_v1.proto.backup_pb2", - __doc__="""The response for - [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations]. - - - Attributes: - operations: - The list of matching backup [long-running - operations][google.longrunning.Operation]. Each operation's - name will be prefixed by the backup's name and the operation's - [metadata][google.longrunning.Operation.metadata] will be of - type [CreateBackupMetadata][google.spanner.admin.database.v1.C - reateBackupMetadata]. Operations returned include those that - are pending or have completed/failed/canceled within the last - 7 days. Operations returned are ordered by - ``operation.metadata.value.progress.start_time`` in descending - order starting from the most recently started operation. 
- next_page_token: - \ ``next_page_token`` can be sent in a subsequent [ListBackupO - perations][google.spanner.admin.database.v1.DatabaseAdmin.List - BackupOperations] call to fetch more of the matching metadata. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.ListBackupOperationsResponse) - ), -) -_sym_db.RegisterMessage(ListBackupOperationsResponse) - -BackupInfo = _reflection.GeneratedProtocolMessageType( - "BackupInfo", - (_message.Message,), - dict( - DESCRIPTOR=_BACKUPINFO, - __module__="google.cloud.spanner.admin.database_v1.proto.backup_pb2", - __doc__="""Information about a backup. - - - Attributes: - backup: - Name of the backup. - create_time: - The backup contains an externally consistent copy of - ``source_database`` at the timestamp specified by - ``create_time``. - source_database: - Name of the database the backup was created from. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.BackupInfo) - ), -) -_sym_db.RegisterMessage(BackupInfo) - - -DESCRIPTOR._options = None -_BACKUP.fields_by_name["create_time"]._options = None -_BACKUP.fields_by_name["size_bytes"]._options = None -_BACKUP.fields_by_name["state"]._options = None -_BACKUP.fields_by_name["referencing_databases"]._options = None -_BACKUP._options = None -_CREATEBACKUPREQUEST.fields_by_name["parent"]._options = None -_CREATEBACKUPREQUEST.fields_by_name["backup_id"]._options = None -_CREATEBACKUPREQUEST.fields_by_name["backup"]._options = None -_UPDATEBACKUPREQUEST.fields_by_name["backup"]._options = None -_UPDATEBACKUPREQUEST.fields_by_name["update_mask"]._options = None -_GETBACKUPREQUEST.fields_by_name["name"]._options = None -_DELETEBACKUPREQUEST.fields_by_name["name"]._options = None -_LISTBACKUPSREQUEST.fields_by_name["parent"]._options = None -_LISTBACKUPOPERATIONSREQUEST.fields_by_name["parent"]._options = None -# @@protoc_insertion_point(module_scope) diff --git a/google/cloud/spanner_admin_database_v1/proto/backup_pb2_grpc.py b/google/cloud/spanner_admin_database_v1/proto/backup_pb2_grpc.py deleted file mode 100644 index 07cb78fe03..0000000000 --- a/google/cloud/spanner_admin_database_v1/proto/backup_pb2_grpc.py +++ /dev/null @@ -1,2 +0,0 @@ -# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! -import grpc diff --git a/google/cloud/spanner_admin_database_v1/proto/common.proto b/google/cloud/spanner_admin_database_v1/proto/common.proto deleted file mode 100644 index 4914cb8ac7..0000000000 --- a/google/cloud/spanner_admin_database_v1/proto/common.proto +++ /dev/null @@ -1,43 +0,0 @@ -// Copyright 2020 Google LLC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
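``OperationProgress``, defined in the removed ``common.proto`` just below, is what callers actually see in the metadata of backup and restore long-running operations. A sketch under assumed IDs; the ``Backup`` factory keywords are as the handwritten layer is believed to accept them:

.. code-block:: python

    from datetime import datetime, timedelta, timezone

    from google.cloud import spanner

    client = spanner.Client()
    instance = client.instance("my-instance")      # hypothetical IDs
    database = instance.database("my-database")

    # Every backup needs an expire_time; one week out is used here.
    expire_time = datetime.now(timezone.utc) + timedelta(days=7)
    backup = instance.backup(
        "my-backup", database=database.name, expire_time=expire_time
    )

    operation = backup.create()  # returns a long-running operation
    # CreateBackupMetadata.progress is an OperationProgress message.
    progress = operation.metadata.progress
    print(progress.progress_percent, progress.start_time)
    operation.result()           # block until the backup is READY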
- -syntax = "proto3"; - -package google.spanner.admin.database.v1; - -import "google/api/field_behavior.proto"; -import "google/protobuf/timestamp.proto"; -import "google/api/annotations.proto"; - -option csharp_namespace = "Google.Cloud.Spanner.Admin.Database.V1"; -option go_package = "google.golang.org/genproto/googleapis/spanner/admin/database/v1;database"; -option java_multiple_files = true; -option java_outer_classname = "CommonProto"; -option java_package = "com.google.spanner.admin.database.v1"; -option php_namespace = "Google\\Cloud\\Spanner\\Admin\\Database\\V1"; - -// Encapsulates progress related information for a Cloud Spanner long -// running operation. -message OperationProgress { - // Percent completion of the operation. - // Values are between 0 and 100 inclusive. - int32 progress_percent = 1; - - // Time the request was received. - google.protobuf.Timestamp start_time = 2; - - // If set, the time at which this operation failed or was completed - // successfully. - google.protobuf.Timestamp end_time = 3; -} diff --git a/google/cloud/spanner_admin_database_v1/proto/common_pb2.py b/google/cloud/spanner_admin_database_v1/proto/common_pb2.py deleted file mode 100644 index 3acf791486..0000000000 --- a/google/cloud/spanner_admin_database_v1/proto/common_pb2.py +++ /dev/null @@ -1,151 +0,0 @@ -# -*- coding: utf-8 -*- -# Generated by the protocol buffer compiler. DO NOT EDIT! -# source: google/cloud/spanner/admin/database_v1/proto/common.proto - -import sys - -_b = sys.version_info[0] < 3 and (lambda x: x) or (lambda x: x.encode("latin1")) -from google.protobuf import descriptor as _descriptor -from google.protobuf import message as _message -from google.protobuf import reflection as _reflection -from google.protobuf import symbol_database as _symbol_database - -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - -from google.api import field_behavior_pb2 as google_dot_api_dot_field__behavior__pb2 -from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2 -from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2 - - -DESCRIPTOR = _descriptor.FileDescriptor( - name="google/cloud/spanner/admin/database_v1/proto/common.proto", - package="google.spanner.admin.database.v1", - syntax="proto3", - serialized_options=_b( - "\n$com.google.spanner.admin.database.v1B\013CommonProtoP\001ZHgoogle.golang.org/genproto/googleapis/spanner/admin/database/v1;database\252\002&Google.Cloud.Spanner.Admin.Database.V1\312\002&Google\\Cloud\\Spanner\\Admin\\Database\\V1" - ), - serialized_pb=_b( - '\n9google/cloud/spanner/admin/database_v1/proto/common.proto\x12 google.spanner.admin.database.v1\x1a\x1fgoogle/api/field_behavior.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x1cgoogle/api/annotations.proto"\x8b\x01\n\x11OperationProgress\x12\x18\n\x10progress_percent\x18\x01 \x01(\x05\x12.\n\nstart_time\x18\x02 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12,\n\x08\x65nd_time\x18\x03 \x01(\x0b\x32\x1a.google.protobuf.TimestampB\xd1\x01\n$com.google.spanner.admin.database.v1B\x0b\x43ommonProtoP\x01ZHgoogle.golang.org/genproto/googleapis/spanner/admin/database/v1;database\xaa\x02&Google.Cloud.Spanner.Admin.Database.V1\xca\x02&Google\\Cloud\\Spanner\\Admin\\Database\\V1b\x06proto3' - ), - dependencies=[ - google_dot_api_dot_field__behavior__pb2.DESCRIPTOR, - google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR, - google_dot_api_dot_annotations__pb2.DESCRIPTOR, - ], -) - - -_OPERATIONPROGRESS = _descriptor.Descriptor( - 
name="OperationProgress", - full_name="google.spanner.admin.database.v1.OperationProgress", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="progress_percent", - full_name="google.spanner.admin.database.v1.OperationProgress.progress_percent", - index=0, - number=1, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="start_time", - full_name="google.spanner.admin.database.v1.OperationProgress.start_time", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="end_time", - full_name="google.spanner.admin.database.v1.OperationProgress.end_time", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=192, - serialized_end=331, -) - -_OPERATIONPROGRESS.fields_by_name[ - "start_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_OPERATIONPROGRESS.fields_by_name[ - "end_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -DESCRIPTOR.message_types_by_name["OperationProgress"] = _OPERATIONPROGRESS -_sym_db.RegisterFileDescriptor(DESCRIPTOR) - -OperationProgress = _reflection.GeneratedProtocolMessageType( - "OperationProgress", - (_message.Message,), - dict( - DESCRIPTOR=_OPERATIONPROGRESS, - __module__="google.cloud.spanner.admin.database_v1.proto.common_pb2", - __doc__="""Encapsulates progress related information for a Cloud Spanner long - running operation. - - - Attributes: - progress_percent: - Percent completion of the operation. Values are between 0 and - 100 inclusive. - start_time: - Time the request was received. - end_time: - If set, the time at which this operation failed or was - completed successfully. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.OperationProgress) - ), -) -_sym_db.RegisterMessage(OperationProgress) - - -DESCRIPTOR._options = None -# @@protoc_insertion_point(module_scope) diff --git a/google/cloud/spanner_admin_database_v1/proto/common_pb2_grpc.py b/google/cloud/spanner_admin_database_v1/proto/common_pb2_grpc.py deleted file mode 100644 index 07cb78fe03..0000000000 --- a/google/cloud/spanner_admin_database_v1/proto/common_pb2_grpc.py +++ /dev/null @@ -1,2 +0,0 @@ -# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! 
-import grpc diff --git a/google/cloud/spanner_admin_database_v1/proto/spanner_database_admin.proto b/google/cloud/spanner_admin_database_v1/proto/spanner_database_admin.proto deleted file mode 100644 index d48adc8aba..0000000000 --- a/google/cloud/spanner_admin_database_v1/proto/spanner_database_admin.proto +++ /dev/null @@ -1,726 +0,0 @@ -// Copyright 2020 Google LLC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -syntax = "proto3"; - -package google.spanner.admin.database.v1; - -import "google/api/annotations.proto"; -import "google/api/client.proto"; -import "google/api/field_behavior.proto"; -import "google/api/resource.proto"; -import "google/iam/v1/iam_policy.proto"; -import "google/iam/v1/policy.proto"; -import "google/longrunning/operations.proto"; -import "google/protobuf/empty.proto"; -import "google/protobuf/timestamp.proto"; -import "google/spanner/admin/database/v1/backup.proto"; -import "google/spanner/admin/database/v1/common.proto"; - -option csharp_namespace = "Google.Cloud.Spanner.Admin.Database.V1"; -option go_package = "google.golang.org/genproto/googleapis/spanner/admin/database/v1;database"; -option java_multiple_files = true; -option java_outer_classname = "SpannerDatabaseAdminProto"; -option java_package = "com.google.spanner.admin.database.v1"; -option php_namespace = "Google\\Cloud\\Spanner\\Admin\\Database\\V1"; -option (google.api.resource_definition) = { - type: "spanner.googleapis.com/Instance" - pattern: "projects/{project}/instances/{instance}" -}; - -// Cloud Spanner Database Admin API -// -// The Cloud Spanner Database Admin API can be used to create, drop, and -// list databases. It also enables updating the schema of pre-existing -// databases. It can be also used to create, delete and list backups for a -// database and to restore from an existing backup. -service DatabaseAdmin { - option (google.api.default_host) = "spanner.googleapis.com"; - option (google.api.oauth_scopes) = - "https://www.googleapis.com/auth/cloud-platform," - "https://www.googleapis.com/auth/spanner.admin"; - - // Lists Cloud Spanner databases. - rpc ListDatabases(ListDatabasesRequest) returns (ListDatabasesResponse) { - option (google.api.http) = { - get: "/v1/{parent=projects/*/instances/*}/databases" - }; - option (google.api.method_signature) = "parent"; - } - - // Creates a new Cloud Spanner database and starts to prepare it for serving. - // The returned [long-running operation][google.longrunning.Operation] will - // have a name of the format `/operations/` and - // can be used to track preparation of the database. The - // [metadata][google.longrunning.Operation.metadata] field type is - // [CreateDatabaseMetadata][google.spanner.admin.database.v1.CreateDatabaseMetadata]. The - // [response][google.longrunning.Operation.response] field type is - // [Database][google.spanner.admin.database.v1.Database], if successful. 
- rpc CreateDatabase(CreateDatabaseRequest) returns (google.longrunning.Operation) { - option (google.api.http) = { - post: "/v1/{parent=projects/*/instances/*}/databases" - body: "*" - }; - option (google.api.method_signature) = "parent,create_statement"; - option (google.longrunning.operation_info) = { - response_type: "google.spanner.admin.database.v1.Database" - metadata_type: "google.spanner.admin.database.v1.CreateDatabaseMetadata" - }; - } - - // Gets the state of a Cloud Spanner database. - rpc GetDatabase(GetDatabaseRequest) returns (Database) { - option (google.api.http) = { - get: "/v1/{name=projects/*/instances/*/databases/*}" - }; - option (google.api.method_signature) = "name"; - } - - // Updates the schema of a Cloud Spanner database by - // creating/altering/dropping tables, columns, indexes, etc. The returned - // [long-running operation][google.longrunning.Operation] will have a name of - // the format `/operations/` and can be used to - // track execution of the schema change(s). The - // [metadata][google.longrunning.Operation.metadata] field type is - // [UpdateDatabaseDdlMetadata][google.spanner.admin.database.v1.UpdateDatabaseDdlMetadata]. The operation has no response. - rpc UpdateDatabaseDdl(UpdateDatabaseDdlRequest) returns (google.longrunning.Operation) { - option (google.api.http) = { - patch: "/v1/{database=projects/*/instances/*/databases/*}/ddl" - body: "*" - }; - option (google.api.method_signature) = "database,statements"; - option (google.longrunning.operation_info) = { - response_type: "google.protobuf.Empty" - metadata_type: "google.spanner.admin.database.v1.UpdateDatabaseDdlMetadata" - }; - } - - // Drops (aka deletes) a Cloud Spanner database. - // Completed backups for the database will be retained according to their - // `expire_time`. - rpc DropDatabase(DropDatabaseRequest) returns (google.protobuf.Empty) { - option (google.api.http) = { - delete: "/v1/{database=projects/*/instances/*/databases/*}" - }; - option (google.api.method_signature) = "database"; - } - - // Returns the schema of a Cloud Spanner database as a list of formatted - // DDL statements. This method does not show pending schema updates, those may - // be queried using the [Operations][google.longrunning.Operations] API. - rpc GetDatabaseDdl(GetDatabaseDdlRequest) returns (GetDatabaseDdlResponse) { - option (google.api.http) = { - get: "/v1/{database=projects/*/instances/*/databases/*}/ddl" - }; - option (google.api.method_signature) = "database"; - } - - // Sets the access control policy on a database or backup resource. - // Replaces any existing policy. - // - // Authorization requires `spanner.databases.setIamPolicy` - // permission on [resource][google.iam.v1.SetIamPolicyRequest.resource]. - // For backups, authorization requires `spanner.backups.setIamPolicy` - // permission on [resource][google.iam.v1.SetIamPolicyRequest.resource]. - rpc SetIamPolicy(google.iam.v1.SetIamPolicyRequest) returns (google.iam.v1.Policy) { - option (google.api.http) = { - post: "/v1/{resource=projects/*/instances/*/databases/*}:setIamPolicy" - body: "*" - additional_bindings { - post: "/v1/{resource=projects/*/instances/*/backups/*}:setIamPolicy" - body: "*" - } - }; - option (google.api.method_signature) = "resource,policy"; - } - - // Gets the access control policy for a database or backup resource. - // Returns an empty policy if a database or backup exists but does not have a - // policy set. 
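The ``CreateDatabase`` / ``UpdateDatabaseDdl`` / ``DropDatabase`` RPCs above are what the handwritten ``Database`` helpers wrap. A brief sketch of database creation with ``extra_statements``, under assumed IDs:

.. code-block:: python

    from google.cloud import spanner

    client = spanner.Client()
    instance = client.instance("my-instance")  # hypothetical instance ID

    # ddl_statements are sent as extra_statements and execute atomically
    # with the CREATE DATABASE itself: any error and nothing is created.
    database = instance.database(
        "example-db",
        ddl_statements=[
            "CREATE TABLE Singers ("
            " SingerId INT64 NOT NULL,"
            " FirstName STRING(1024)"
            ") PRIMARY KEY (SingerId)"
        ],
    )
    operation = database.create()  # wraps CreateDatabase
    operation.result()             # wait for the database to be READY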
- // - // Authorization requires `spanner.databases.getIamPolicy` permission on - // [resource][google.iam.v1.GetIamPolicyRequest.resource]. - // For backups, authorization requires `spanner.backups.getIamPolicy` - // permission on [resource][google.iam.v1.GetIamPolicyRequest.resource]. - rpc GetIamPolicy(google.iam.v1.GetIamPolicyRequest) returns (google.iam.v1.Policy) { - option (google.api.http) = { - post: "/v1/{resource=projects/*/instances/*/databases/*}:getIamPolicy" - body: "*" - additional_bindings { - post: "/v1/{resource=projects/*/instances/*/backups/*}:getIamPolicy" - body: "*" - } - }; - option (google.api.method_signature) = "resource"; - } - - // Returns permissions that the caller has on the specified database or backup - // resource. - // - // Attempting this RPC on a non-existent Cloud Spanner database will - // result in a NOT_FOUND error if the user has - // `spanner.databases.list` permission on the containing Cloud - // Spanner instance. Otherwise returns an empty set of permissions. - // Calling this method on a backup that does not exist will - // result in a NOT_FOUND error if the user has - // `spanner.backups.list` permission on the containing instance. - rpc TestIamPermissions(google.iam.v1.TestIamPermissionsRequest) returns (google.iam.v1.TestIamPermissionsResponse) { - option (google.api.http) = { - post: "/v1/{resource=projects/*/instances/*/databases/*}:testIamPermissions" - body: "*" - additional_bindings { - post: "/v1/{resource=projects/*/instances/*/backups/*}:testIamPermissions" - body: "*" - } - }; - option (google.api.method_signature) = "resource,permissions"; - } - - // Starts creating a new Cloud Spanner Backup. - // The returned backup [long-running operation][google.longrunning.Operation] - // will have a name of the format - // `projects//instances//backups//operations/` - // and can be used to track creation of the backup. The - // [metadata][google.longrunning.Operation.metadata] field type is - // [CreateBackupMetadata][google.spanner.admin.database.v1.CreateBackupMetadata]. The - // [response][google.longrunning.Operation.response] field type is - // [Backup][google.spanner.admin.database.v1.Backup], if successful. Cancelling the returned operation will stop the - // creation and delete the backup. - // There can be only one pending backup creation per database. Backup creation - // of different databases can run concurrently. - rpc CreateBackup(CreateBackupRequest) returns (google.longrunning.Operation) { - option (google.api.http) = { - post: "/v1/{parent=projects/*/instances/*}/backups" - body: "backup" - }; - option (google.api.method_signature) = "parent,backup,backup_id"; - option (google.longrunning.operation_info) = { - response_type: "Backup" - metadata_type: "google.spanner.admin.database.v1.CreateBackupMetadata" - }; - } - - // Gets metadata on a pending or completed [Backup][google.spanner.admin.database.v1.Backup]. - rpc GetBackup(GetBackupRequest) returns (Backup) { - option (google.api.http) = { - get: "/v1/{name=projects/*/instances/*/backups/*}" - }; - option (google.api.method_signature) = "name"; - } - - // Updates a pending or completed [Backup][google.spanner.admin.database.v1.Backup]. - rpc UpdateBackup(UpdateBackupRequest) returns (Backup) { - option (google.api.http) = { - patch: "/v1/{backup.name=projects/*/instances/*/backups/*}" - body: "backup" - }; - option (google.api.method_signature) = "backup,update_mask"; - } - - // Deletes a pending or completed [Backup][google.spanner.admin.database.v1.Backup]. 
- rpc DeleteBackup(DeleteBackupRequest) returns (google.protobuf.Empty) { - option (google.api.http) = { - delete: "/v1/{name=projects/*/instances/*/backups/*}" - }; - option (google.api.method_signature) = "name"; - } - - // Lists completed and pending backups. - // Backups returned are ordered by `create_time` in descending order, - // starting from the most recent `create_time`. - rpc ListBackups(ListBackupsRequest) returns (ListBackupsResponse) { - option (google.api.http) = { - get: "/v1/{parent=projects/*/instances/*}/backups" - }; - option (google.api.method_signature) = "parent"; - } - - // Create a new database by restoring from a completed backup. The new - // database must be in the same project and in an instance with the same - // instance configuration as the instance containing - // the backup. The returned database [long-running - // operation][google.longrunning.Operation] has a name of the format - // `projects//instances//databases//operations/`, - // and can be used to track the progress of the operation, and to cancel it. - // The [metadata][google.longrunning.Operation.metadata] field type is - // [RestoreDatabaseMetadata][google.spanner.admin.database.v1.RestoreDatabaseMetadata]. - // The [response][google.longrunning.Operation.response] type - // is [Database][google.spanner.admin.database.v1.Database], if - // successful. Cancelling the returned operation will stop the restore and - // delete the database. - // There can be only one database being restored into an instance at a time. - // Once the restore operation completes, a new restore operation can be - // initiated, without waiting for the optimize operation associated with the - // first restore to complete. - rpc RestoreDatabase(RestoreDatabaseRequest) returns (google.longrunning.Operation) { - option (google.api.http) = { - post: "/v1/{parent=projects/*/instances/*}/databases:restore" - body: "*" - }; - option (google.api.method_signature) = "parent,database_id,backup"; - option (google.longrunning.operation_info) = { - response_type: "google.spanner.admin.database.v1.Database" - metadata_type: "google.spanner.admin.database.v1.RestoreDatabaseMetadata" - }; - } - - // Lists database [longrunning-operations][google.longrunning.Operation]. - // A database operation has a name of the form - // `projects//instances//databases//operations/`. - // The long-running operation - // [metadata][google.longrunning.Operation.metadata] field type - // `metadata.type_url` describes the type of the metadata. Operations returned - // include those that have completed/failed/canceled within the last 7 days, - // and pending operations. - rpc ListDatabaseOperations(ListDatabaseOperationsRequest) returns (ListDatabaseOperationsResponse) { - option (google.api.http) = { - get: "/v1/{parent=projects/*/instances/*}/databaseOperations" - }; - option (google.api.method_signature) = "parent"; - } - - // Lists the backup [long-running operations][google.longrunning.Operation] in - // the given instance. A backup operation has a name of the form - // `projects//instances//backups//operations/`. - // The long-running operation - // [metadata][google.longrunning.Operation.metadata] field type - // `metadata.type_url` describes the type of the metadata. Operations returned - // include those that have completed/failed/canceled within the last 7 days, - // and pending operations. 
Operations returned are ordered by - // `operation.metadata.value.progress.start_time` in descending order starting - // from the most recently started operation. - rpc ListBackupOperations(ListBackupOperationsRequest) returns (ListBackupOperationsResponse) { - option (google.api.http) = { - get: "/v1/{parent=projects/*/instances/*}/backupOperations" - }; - option (google.api.method_signature) = "parent"; - } -} - -// Information about the database restore. -message RestoreInfo { - // The type of the restore source. - RestoreSourceType source_type = 1; - - // Information about the source used to restore the database. - oneof source_info { - // Information about the backup used to restore the database. The backup - // may no longer exist. - BackupInfo backup_info = 2; - } -} - -// A Cloud Spanner database. -message Database { - option (google.api.resource) = { - type: "spanner.googleapis.com/Database" - pattern: "projects/{project}/instances/{instance}/databases/{database}" - }; - - // Indicates the current state of the database. - enum State { - // Not specified. - STATE_UNSPECIFIED = 0; - - // The database is still being created. Operations on the database may fail - // with `FAILED_PRECONDITION` in this state. - CREATING = 1; - - // The database is fully created and ready for use. - READY = 2; - - // The database is fully created and ready for use, but is still - // being optimized for performance and cannot handle full load. - // - // In this state, the database still references the backup - // it was restore from, preventing the backup - // from being deleted. When optimizations are complete, the full performance - // of the database will be restored, and the database will transition to - // `READY` state. - READY_OPTIMIZING = 3; - } - - // Required. The name of the database. Values are of the form - // `projects//instances//databases/`, - // where `` is as specified in the `CREATE DATABASE` - // statement. This name can be passed to other API methods to - // identify the database. - string name = 1 [(google.api.field_behavior) = REQUIRED]; - - // Output only. The current database state. - State state = 2 [(google.api.field_behavior) = OUTPUT_ONLY]; - - // Output only. If exists, the time at which the database creation started. - google.protobuf.Timestamp create_time = 3 [(google.api.field_behavior) = OUTPUT_ONLY]; - - // Output only. Applicable only for restored databases. Contains information - // about the restore source. - RestoreInfo restore_info = 4 [(google.api.field_behavior) = OUTPUT_ONLY]; -} - -// The request for [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases]. -message ListDatabasesRequest { - // Required. The instance whose databases should be listed. - // Values are of the form `projects//instances/`. - string parent = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Instance" - } - ]; - - // Number of databases to be returned in the response. If 0 or less, - // defaults to the server's maximum allowed page size. - int32 page_size = 3; - - // If non-empty, `page_token` should contain a - // [next_page_token][google.spanner.admin.database.v1.ListDatabasesResponse.next_page_token] from a - // previous [ListDatabasesResponse][google.spanner.admin.database.v1.ListDatabasesResponse]. - string page_token = 4; -} - -// The response for [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases]. 
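The ``RestoreDatabase`` flow documented above maps onto the handwritten layer roughly as follows. IDs are illustrative, and ``Database.restore`` is assumed to take the ``Backup`` helper as its source:

.. code-block:: python

    from google.cloud import spanner

    client = spanner.Client()
    instance = client.instance("my-instance")   # hypothetical IDs
    backup = instance.backup("my-backup")       # existing, completed backup

    # The target database must not exist yet; RestoreDatabase creates it.
    new_database = instance.database("restored-db")
    operation = new_database.restore(backup)
    operation.result()

    # Name of the follow-up optimize operation, if the system created one.
    print(operation.metadata.optimize_database_operation_name)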
-message ListDatabasesResponse { - // Databases that matched the request. - repeated Database databases = 1; - - // `next_page_token` can be sent in a subsequent - // [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases] call to fetch more - // of the matching databases. - string next_page_token = 2; -} - -// The request for [CreateDatabase][google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase]. -message CreateDatabaseRequest { - // Required. The name of the instance that will serve the new database. - // Values are of the form `projects//instances/`. - string parent = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Instance" - } - ]; - - // Required. A `CREATE DATABASE` statement, which specifies the ID of the - // new database. The database ID must conform to the regular expression - // `[a-z][a-z0-9_\-]*[a-z0-9]` and be between 2 and 30 characters in length. - // If the database ID is a reserved word or if it contains a hyphen, the - // database ID must be enclosed in backticks (`` ` ``). - string create_statement = 2 [(google.api.field_behavior) = REQUIRED]; - - // Optional. A list of DDL statements to run inside the newly created - // database. Statements can create tables, indexes, etc. These - // statements execute atomically with the creation of the database: - // if there is an error in any statement, the database is not created. - repeated string extra_statements = 3 [(google.api.field_behavior) = OPTIONAL]; -} - -// Metadata type for the operation returned by -// [CreateDatabase][google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase]. -message CreateDatabaseMetadata { - // The database being created. - string database = 1 [(google.api.resource_reference) = { - type: "spanner.googleapis.com/Database" - }]; -} - -// The request for [GetDatabase][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabase]. -message GetDatabaseRequest { - // Required. The name of the requested database. Values are of the form - // `projects//instances//databases/`. - string name = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Database" - } - ]; -} - -// Enqueues the given DDL statements to be applied, in order but not -// necessarily all at once, to the database schema at some point (or -// points) in the future. The server checks that the statements -// are executable (syntactically valid, name tables that exist, etc.) -// before enqueueing them, but they may still fail upon -// later execution (e.g., if a statement from another batch of -// statements is applied first and it conflicts in some way, or if -// there is some data-related problem like a `NULL` value in a column to -// which `NOT NULL` would be added). If a statement fails, all -// subsequent statements in the batch are automatically cancelled. -// -// Each batch of statements is assigned a name which can be used with -// the [Operations][google.longrunning.Operations] API to monitor -// progress. See the -// [operation_id][google.spanner.admin.database.v1.UpdateDatabaseDdlRequest.operation_id] field for more -// details. -message UpdateDatabaseDdlRequest { - // Required. The database to update. - string database = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Database" - } - ]; - - // Required. DDL statements to be applied to the database. 
- repeated string statements = 2 [(google.api.field_behavior) = REQUIRED]; - - // If empty, the new update request is assigned an - // automatically-generated operation ID. Otherwise, `operation_id` - // is used to construct the name of the resulting - // [Operation][google.longrunning.Operation]. - // - // Specifying an explicit operation ID simplifies determining - // whether the statements were executed in the event that the - // [UpdateDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabaseDdl] call is replayed, - // or the return value is otherwise lost: the [database][google.spanner.admin.database.v1.UpdateDatabaseDdlRequest.database] and - // `operation_id` fields can be combined to form the - // [name][google.longrunning.Operation.name] of the resulting - // [longrunning.Operation][google.longrunning.Operation]: `/operations/`. - // - // `operation_id` should be unique within the database, and must be - // a valid identifier: `[a-z][a-z0-9_]*`. Note that - // automatically-generated operation IDs always begin with an - // underscore. If the named operation already exists, - // [UpdateDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabaseDdl] returns - // `ALREADY_EXISTS`. - string operation_id = 3; -} - -// Metadata type for the operation returned by -// [UpdateDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabaseDdl]. -message UpdateDatabaseDdlMetadata { - // The database being modified. - string database = 1 [(google.api.resource_reference) = { - type: "spanner.googleapis.com/Database" - }]; - - // For an update this list contains all the statements. For an - // individual statement, this list contains only that statement. - repeated string statements = 2; - - // Reports the commit timestamps of all statements that have - // succeeded so far, where `commit_timestamps[i]` is the commit - // timestamp for the statement `statements[i]`. - repeated google.protobuf.Timestamp commit_timestamps = 3; -} - -// The request for [DropDatabase][google.spanner.admin.database.v1.DatabaseAdmin.DropDatabase]. -message DropDatabaseRequest { - // Required. The database to be dropped. - string database = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Database" - } - ]; -} - -// The request for [GetDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl]. -message GetDatabaseDdlRequest { - // Required. The database whose schema we wish to get. - string database = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Database" - } - ]; -} - -// The response for [GetDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl]. -message GetDatabaseDdlResponse { - // A list of formatted DDL statements defining the schema of the database - // specified in the request. - repeated string statements = 1; -} - -// The request for -// [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations]. -message ListDatabaseOperationsRequest { - // Required. The instance of the database operations. - // Values are of the form `projects//instances/`. - string parent = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Instance" - } - ]; - - // An expression that filters the list of returned operations. 
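Tying the ``UpdateDatabaseDdlRequest`` semantics above together: statements are enqueued in order, and a caller-supplied ``operation_id`` makes the request safely replayable. A sketch through the handwritten wrapper, with illustrative names:

.. code-block:: python

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("example-db")

    # Replaying with the same operation_id yields ALREADY_EXISTS rather
    # than a duplicate schema change; IDs must match [a-z][a-z0-9_]*.
    operation = database.update_ddl(
        ["ALTER TABLE Singers ADD COLUMN LastName STRING(1024)"],
        operation_id="add_lastname_001",
    )
    operation.result()  # enqueued statements may still fail when applied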
- // - // A filter expression consists of a field name, a - // comparison operator, and a value for filtering. - // The value must be a string, a number, or a boolean. The comparison operator - // must be one of: `<`, `>`, `<=`, `>=`, `!=`, `=`, or `:`. - // Colon `:` is the contains operator. Filter rules are not case sensitive. - // - // The following fields in the [Operation][google.longrunning.Operation] - // are eligible for filtering: - // - // * `name` - The name of the long-running operation - // * `done` - False if the operation is in progress, else true. - // * `metadata.@type` - the type of metadata. For example, the type string - // for [RestoreDatabaseMetadata][google.spanner.admin.database.v1.RestoreDatabaseMetadata] is - // `type.googleapis.com/google.spanner.admin.database.v1.RestoreDatabaseMetadata`. - // * `metadata.` - any field in metadata.value. - // * `error` - Error associated with the long-running operation. - // * `response.@type` - the type of response. - // * `response.` - any field in response.value. - // - // You can combine multiple expressions by enclosing each expression in - // parentheses. By default, expressions are combined with AND logic. However, - // you can specify AND, OR, and NOT logic explicitly. - // - // Here are a few examples: - // - // * `done:true` - The operation is complete. - // * `(metadata.@type=type.googleapis.com/google.spanner.admin.database.v1.RestoreDatabaseMetadata) AND`
- // `(metadata.source_type:BACKUP) AND`
- // `(metadata.backup_info.backup:backup_howl) AND`
- // `(metadata.name:restored_howl) AND`
- // `(metadata.progress.start_time < \"2018-03-28T14:50:00Z\") AND`
- // `(error:*)` - Return operations where: - // * The operation's metadata type is [RestoreDatabaseMetadata][google.spanner.admin.database.v1.RestoreDatabaseMetadata]. - // * The database is restored from a backup. - // * The backup name contains "backup_howl". - // * The restored database's name contains "restored_howl". - // * The operation started before 2018-03-28T14:50:00Z. - // * The operation resulted in an error. - string filter = 2; - - // Number of operations to be returned in the response. If 0 or - // less, defaults to the server's maximum allowed page size. - int32 page_size = 3; - - // If non-empty, `page_token` should contain a - // [next_page_token][google.spanner.admin.database.v1.ListDatabaseOperationsResponse.next_page_token] - // from a previous [ListDatabaseOperationsResponse][google.spanner.admin.database.v1.ListDatabaseOperationsResponse] to the - // same `parent` and with the same `filter`. - string page_token = 4; -} - -// The response for -// [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations]. -message ListDatabaseOperationsResponse { - // The list of matching database [long-running - // operations][google.longrunning.Operation]. Each operation's name will be - // prefixed by the database's name. The operation's - // [metadata][google.longrunning.Operation.metadata] field type - // `metadata.type_url` describes the type of the metadata. - repeated google.longrunning.Operation operations = 1; - - // `next_page_token` can be sent in a subsequent - // [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations] - // call to fetch more of the matching metadata. - string next_page_token = 2; -} - -// The request for -// [RestoreDatabase][google.spanner.admin.database.v1.DatabaseAdmin.RestoreDatabase]. -message RestoreDatabaseRequest { - // Required. The name of the instance in which to create the - // restored database. This instance must be in the same project and - // have the same instance configuration as the instance containing - // the source backup. Values are of the form - // `projects//instances/`. - string parent = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Instance" - } - ]; - - // Required. The id of the database to create and restore to. This - // database must not already exist. The `database_id` appended to - // `parent` forms the full database name of the form - // `projects//instances//databases/`. - string database_id = 2 [(google.api.field_behavior) = REQUIRED]; - - // Required. The source from which to restore. - oneof source { - // Name of the backup from which to restore. Values are of the form - // `projects//instances//backups/`. - string backup = 3 [(google.api.resource_reference) = { - type: "spanner.googleapis.com/Backup" - }]; - } -} - -// Metadata type for the long-running operation returned by -// [RestoreDatabase][google.spanner.admin.database.v1.DatabaseAdmin.RestoreDatabase]. -message RestoreDatabaseMetadata { - // Name of the database being created and restored to. - string name = 1; - - // The type of the restore source. - RestoreSourceType source_type = 2; - - // Information about the source used to restore the database, as specified by - // `source` in [RestoreDatabaseRequest][google.spanner.admin.database.v1.RestoreDatabaseRequest]. - oneof source_info { - // Information about the backup used to restore the database. 
- BackupInfo backup_info = 3; - } - - // The progress of the - // [RestoreDatabase][google.spanner.admin.database.v1.DatabaseAdmin.RestoreDatabase] - // operation. - OperationProgress progress = 4; - - // The time at which cancellation of this operation was received. - // [Operations.CancelOperation][google.longrunning.Operations.CancelOperation] - // starts asynchronous cancellation on a long-running operation. The server - // makes a best effort to cancel the operation, but success is not guaranteed. - // Clients can use - // [Operations.GetOperation][google.longrunning.Operations.GetOperation] or - // other methods to check whether the cancellation succeeded or whether the - // operation completed despite cancellation. On successful cancellation, - // the operation is not deleted; instead, it becomes an operation with - // an [Operation.error][google.longrunning.Operation.error] value with a - // [google.rpc.Status.code][google.rpc.Status.code] of 1, corresponding to `Code.CANCELLED`. - google.protobuf.Timestamp cancel_time = 5; - - // If exists, the name of the long-running operation that will be used to - // track the post-restore optimization process to optimize the performance of - // the restored database, and remove the dependency on the restore source. - // The name is of the form - // `projects//instances//databases//operations/` - // where the is the name of database being created and restored to. - // The metadata type of the long-running operation is - // [OptimizeRestoredDatabaseMetadata][google.spanner.admin.database.v1.OptimizeRestoredDatabaseMetadata]. This long-running operation will be - // automatically created by the system after the RestoreDatabase long-running - // operation completes successfully. This operation will not be created if the - // restore was not successful. - string optimize_database_operation_name = 6; -} - -// Metadata type for the long-running operation used to track the progress -// of optimizations performed on a newly restored database. This long-running -// operation is automatically created by the system after the successful -// completion of a database restore, and cannot be cancelled. -message OptimizeRestoredDatabaseMetadata { - // Name of the restored database being optimized. - string name = 1; - - // The progress of the post-restore optimizations. - OperationProgress progress = 2; -} - -// Indicates the type of the restore source. -enum RestoreSourceType { - // No restore associated. - TYPE_UNSPECIFIED = 0; - - // A backup was used as the source of the restore. - BACKUP = 1; -} diff --git a/google/cloud/spanner_admin_database_v1/proto/spanner_database_admin_pb2.py b/google/cloud/spanner_admin_database_v1/proto/spanner_database_admin_pb2.py deleted file mode 100644 index 125ab3f86b..0000000000 --- a/google/cloud/spanner_admin_database_v1/proto/spanner_database_admin_pb2.py +++ /dev/null @@ -1,2124 +0,0 @@ -# -*- coding: utf-8 -*- -# Generated by the protocol buffer compiler. DO NOT EDIT! 
-# source: google/cloud/spanner/admin/database_v1/proto/spanner_database_admin.proto - -import sys - -_b = sys.version_info[0] < 3 and (lambda x: x) or (lambda x: x.encode("latin1")) -from google.protobuf.internal import enum_type_wrapper -from google.protobuf import descriptor as _descriptor -from google.protobuf import message as _message -from google.protobuf import reflection as _reflection -from google.protobuf import symbol_database as _symbol_database - -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - -from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2 -from google.api import client_pb2 as google_dot_api_dot_client__pb2 -from google.api import field_behavior_pb2 as google_dot_api_dot_field__behavior__pb2 -from google.api import resource_pb2 as google_dot_api_dot_resource__pb2 -from google.iam.v1 import iam_policy_pb2 as google_dot_iam_dot_v1_dot_iam__policy__pb2 -from google.iam.v1 import policy_pb2 as google_dot_iam_dot_v1_dot_policy__pb2 -from google.longrunning import ( - operations_pb2 as google_dot_longrunning_dot_operations__pb2, -) -from google.protobuf import empty_pb2 as google_dot_protobuf_dot_empty__pb2 -from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2 -from google.cloud.spanner_admin_database_v1.proto import ( - backup_pb2 as google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_backup__pb2, -) -from google.cloud.spanner_admin_database_v1.proto import ( - common_pb2 as google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_common__pb2, -) - - -DESCRIPTOR = _descriptor.FileDescriptor( - name="google/cloud/spanner/admin/database_v1/proto/spanner_database_admin.proto", - package="google.spanner.admin.database.v1", - syntax="proto3", - serialized_options=_b( - "\n$com.google.spanner.admin.database.v1B\031SpannerDatabaseAdminProtoP\001ZHgoogle.golang.org/genproto/googleapis/spanner/admin/database/v1;database\252\002&Google.Cloud.Spanner.Admin.Database.V1\312\002&Google\\Cloud\\Spanner\\Admin\\Database\\V1\352AJ\n\037spanner.googleapis.com/Instance\022'projects/{project}/instances/{instance}" - ), - serialized_pb=_b( - '\nIgoogle/cloud/spanner/admin/database_v1/proto/spanner_database_admin.proto\x12 google.spanner.admin.database.v1\x1a\x1cgoogle/api/annotations.proto\x1a\x17google/api/client.proto\x1a\x1fgoogle/api/field_behavior.proto\x1a\x19google/api/resource.proto\x1a\x1egoogle/iam/v1/iam_policy.proto\x1a\x1agoogle/iam/v1/policy.proto\x1a#google/longrunning/operations.proto\x1a\x1bgoogle/protobuf/empty.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x39google/cloud/spanner/admin/database_v1/proto/backup.proto\x1a\x39google/cloud/spanner/admin/database_v1/proto/common.proto"\xab\x01\n\x0bRestoreInfo\x12H\n\x0bsource_type\x18\x01 \x01(\x0e\x32\x33.google.spanner.admin.database.v1.RestoreSourceType\x12\x43\n\x0b\x62\x61\x63kup_info\x18\x02 \x01(\x0b\x32,.google.spanner.admin.database.v1.BackupInfoH\x00\x42\r\n\x0bsource_info"\x96\x03\n\x08\x44\x61tabase\x12\x11\n\x04name\x18\x01 \x01(\tB\x03\xe0\x41\x02\x12\x44\n\x05state\x18\x02 \x01(\x0e\x32\x30.google.spanner.admin.database.v1.Database.StateB\x03\xe0\x41\x03\x12\x34\n\x0b\x63reate_time\x18\x03 \x01(\x0b\x32\x1a.google.protobuf.TimestampB\x03\xe0\x41\x03\x12H\n\x0crestore_info\x18\x04 
\x01(\x0b\x32-.google.spanner.admin.database.v1.RestoreInfoB\x03\xe0\x41\x03"M\n\x05State\x12\x15\n\x11STATE_UNSPECIFIED\x10\x00\x12\x0c\n\x08\x43REATING\x10\x01\x12\t\n\x05READY\x10\x02\x12\x14\n\x10READY_OPTIMIZING\x10\x03:b\xea\x41_\n\x1fspanner.googleapis.com/Database\x12\x82\xd3\xe4\x93\x02/\x12-/v1/{parent=projects/*/instances/*}/databases\xda\x41\x06parent\x12\xa4\x02\n\x0e\x43reateDatabase\x12\x37.google.spanner.admin.database.v1.CreateDatabaseRequest\x1a\x1d.google.longrunning.Operation"\xb9\x01\x82\xd3\xe4\x93\x02\x32"-/v1/{parent=projects/*/instances/*}/databases:\x01*\xda\x41\x17parent,create_statement\xca\x41\x64\n)google.spanner.admin.database.v1.Database\x12\x37google.spanner.admin.database.v1.CreateDatabaseMetadata\x12\xad\x01\n\x0bGetDatabase\x12\x34.google.spanner.admin.database.v1.GetDatabaseRequest\x1a*.google.spanner.admin.database.v1.Database"<\x82\xd3\xe4\x93\x02/\x12-/v1/{name=projects/*/instances/*/databases/*}\xda\x41\x04name\x12\x9d\x02\n\x11UpdateDatabaseDdl\x12:.google.spanner.admin.database.v1.UpdateDatabaseDdlRequest\x1a\x1d.google.longrunning.Operation"\xac\x01\x82\xd3\xe4\x93\x02:25/v1/{database=projects/*/instances/*/databases/*}/ddl:\x01*\xda\x41\x13\x64\x61tabase,statements\xca\x41S\n\x15google.protobuf.Empty\x12:google.spanner.admin.database.v1.UpdateDatabaseDdlMetadata\x12\xa3\x01\n\x0c\x44ropDatabase\x12\x35.google.spanner.admin.database.v1.DropDatabaseRequest\x1a\x16.google.protobuf.Empty"D\x82\xd3\xe4\x93\x02\x33*1/v1/{database=projects/*/instances/*/databases/*}\xda\x41\x08\x64\x61tabase\x12\xcd\x01\n\x0eGetDatabaseDdl\x12\x37.google.spanner.admin.database.v1.GetDatabaseDdlRequest\x1a\x38.google.spanner.admin.database.v1.GetDatabaseDdlResponse"H\x82\xd3\xe4\x93\x02\x37\x12\x35/v1/{database=projects/*/instances/*/databases/*}/ddl\xda\x41\x08\x64\x61tabase\x12\xeb\x01\n\x0cSetIamPolicy\x12".google.iam.v1.SetIamPolicyRequest\x1a\x15.google.iam.v1.Policy"\x9f\x01\x82\xd3\xe4\x93\x02\x86\x01">/v1/{resource=projects/*/instances/*/databases/*}:setIamPolicy:\x01*ZA"/v1/{resource=projects/*/instances/*/databases/*}:getIamPolicy:\x01*ZA".google.spanner.admin.database.v1.ListBackupOperationsResponse"E\x82\xd3\xe4\x93\x02\x36\x12\x34/v1/{parent=projects/*/instances/*}/backupOperations\xda\x41\x06parent\x1ax\xca\x41\x16spanner.googleapis.com\xd2\x41\\https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/spanner.adminB\xac\x02\n$com.google.spanner.admin.database.v1B\x19SpannerDatabaseAdminProtoP\x01ZHgoogle.golang.org/genproto/googleapis/spanner/admin/database/v1;database\xaa\x02&Google.Cloud.Spanner.Admin.Database.V1\xca\x02&Google\\Cloud\\Spanner\\Admin\\Database\\V1\xea\x41J\n\x1fspanner.googleapis.com/Instance\x12\'projects/{project}/instances/{instance}b\x06proto3' - ), - dependencies=[ - google_dot_api_dot_annotations__pb2.DESCRIPTOR, - google_dot_api_dot_client__pb2.DESCRIPTOR, - google_dot_api_dot_field__behavior__pb2.DESCRIPTOR, - google_dot_api_dot_resource__pb2.DESCRIPTOR, - google_dot_iam_dot_v1_dot_iam__policy__pb2.DESCRIPTOR, - google_dot_iam_dot_v1_dot_policy__pb2.DESCRIPTOR, - google_dot_longrunning_dot_operations__pb2.DESCRIPTOR, - google_dot_protobuf_dot_empty__pb2.DESCRIPTOR, - google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR, - google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_backup__pb2.DESCRIPTOR, - google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_common__pb2.DESCRIPTOR, - ], -) - -_RESTORESOURCETYPE = _descriptor.EnumDescriptor( - name="RestoreSourceType", - 
full_name="google.spanner.admin.database.v1.RestoreSourceType", - filename=None, - file=DESCRIPTOR, - values=[ - _descriptor.EnumValueDescriptor( - name="TYPE_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - ), - _descriptor.EnumValueDescriptor( - name="BACKUP", index=1, number=1, serialized_options=None, type=None - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=3044, - serialized_end=3097, -) -_sym_db.RegisterEnumDescriptor(_RESTORESOURCETYPE) - -RestoreSourceType = enum_type_wrapper.EnumTypeWrapper(_RESTORESOURCETYPE) -TYPE_UNSPECIFIED = 0 -BACKUP = 1 - - -_DATABASE_STATE = _descriptor.EnumDescriptor( - name="State", - full_name="google.spanner.admin.database.v1.Database.State", - filename=None, - file=DESCRIPTOR, - values=[ - _descriptor.EnumValueDescriptor( - name="STATE_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - ), - _descriptor.EnumValueDescriptor( - name="CREATING", index=1, number=1, serialized_options=None, type=None - ), - _descriptor.EnumValueDescriptor( - name="READY", index=2, number=2, serialized_options=None, type=None - ), - _descriptor.EnumValueDescriptor( - name="READY_OPTIMIZING", - index=3, - number=3, - serialized_options=None, - type=None, - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=907, - serialized_end=984, -) -_sym_db.RegisterEnumDescriptor(_DATABASE_STATE) - - -_RESTOREINFO = _descriptor.Descriptor( - name="RestoreInfo", - full_name="google.spanner.admin.database.v1.RestoreInfo", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="source_type", - full_name="google.spanner.admin.database.v1.RestoreInfo.source_type", - index=0, - number=1, - type=14, - cpp_type=8, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="backup_info", - full_name="google.spanner.admin.database.v1.RestoreInfo.backup_info", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[ - _descriptor.OneofDescriptor( - name="source_info", - full_name="google.spanner.admin.database.v1.RestoreInfo.source_info", - index=0, - containing_type=None, - fields=[], - ) - ], - serialized_start=504, - serialized_end=675, -) - - -_DATABASE = _descriptor.Descriptor( - name="Database", - full_name="google.spanner.admin.database.v1.Database", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="name", - full_name="google.spanner.admin.database.v1.Database.name", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b("\340A\002"), - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="state", - full_name="google.spanner.admin.database.v1.Database.state", - index=1, - number=2, - type=14, - 
cpp_type=8, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b("\340A\003"), - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="create_time", - full_name="google.spanner.admin.database.v1.Database.create_time", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b("\340A\003"), - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="restore_info", - full_name="google.spanner.admin.database.v1.Database.restore_info", - index=3, - number=4, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b("\340A\003"), - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[_DATABASE_STATE], - serialized_options=_b( - "\352A_\n\037spanner.googleapis.com/Database\022/instances//databases/``, - where ```` is as specified in the ``CREATE - DATABASE`` statement. This name can be passed to other API - methods to identify the database. - state: - Output only. The current database state. - create_time: - Output only. If exists, the time at which the database - creation started. - restore_info: - Output only. Applicable only for restored databases. Contains - information about the restore source. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.Database) - ), -) -_sym_db.RegisterMessage(Database) - -ListDatabasesRequest = _reflection.GeneratedProtocolMessageType( - "ListDatabasesRequest", - (_message.Message,), - dict( - DESCRIPTOR=_LISTDATABASESREQUEST, - __module__="google.cloud.spanner.admin.database_v1.proto.spanner_database_admin_pb2", - __doc__="""The request for - [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases]. - - - Attributes: - parent: - Required. The instance whose databases should be listed. - Values are of the form - ``projects//instances/``. - page_size: - Number of databases to be returned in the response. If 0 or - less, defaults to the server's maximum allowed page size. - page_token: - If non-empty, ``page_token`` should contain a [next\_page\_tok - en][google.spanner.admin.database.v1.ListDatabasesResponse.nex - t\_page\_token] from a previous [ListDatabasesResponse][google - .spanner.admin.database.v1.ListDatabasesResponse]. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.ListDatabasesRequest) - ), -) -_sym_db.RegisterMessage(ListDatabasesRequest) - -ListDatabasesResponse = _reflection.GeneratedProtocolMessageType( - "ListDatabasesResponse", - (_message.Message,), - dict( - DESCRIPTOR=_LISTDATABASESRESPONSE, - __module__="google.cloud.spanner.admin.database_v1.proto.spanner_database_admin_pb2", - __doc__="""The response for - [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases]. - - - Attributes: - databases: - Databases that matched the request. - next_page_token: - \ ``next_page_token`` can be sent in a subsequent [ListDatabas - es][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabas - es] call to fetch more of the matching databases. 
- """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.ListDatabasesResponse) - ), -) -_sym_db.RegisterMessage(ListDatabasesResponse) - -CreateDatabaseRequest = _reflection.GeneratedProtocolMessageType( - "CreateDatabaseRequest", - (_message.Message,), - dict( - DESCRIPTOR=_CREATEDATABASEREQUEST, - __module__="google.cloud.spanner.admin.database_v1.proto.spanner_database_admin_pb2", - __doc__="""The request for - [CreateDatabase][google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase]. - - - Attributes: - parent: - Required. The name of the instance that will serve the new - database. Values are of the form - ``projects//instances/``. - create_statement: - Required. A ``CREATE DATABASE`` statement, which specifies the - ID of the new database. The database ID must conform to the - regular expression ``[a-z][a-z0-9_\-]*[a-z0-9]`` and be - between 2 and 30 characters in length. If the database ID is a - reserved word or if it contains a hyphen, the database ID must - be enclosed in backticks (`````). - extra_statements: - Optional. A list of DDL statements to run inside the newly - created database. Statements can create tables, indexes, etc. - These statements execute atomically with the creation of the - database: if there is an error in any statement, the database - is not created. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.CreateDatabaseRequest) - ), -) -_sym_db.RegisterMessage(CreateDatabaseRequest) - -CreateDatabaseMetadata = _reflection.GeneratedProtocolMessageType( - "CreateDatabaseMetadata", - (_message.Message,), - dict( - DESCRIPTOR=_CREATEDATABASEMETADATA, - __module__="google.cloud.spanner.admin.database_v1.proto.spanner_database_admin_pb2", - __doc__="""Metadata type for the operation returned by - [CreateDatabase][google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase]. - - - Attributes: - database: - The database being created. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.CreateDatabaseMetadata) - ), -) -_sym_db.RegisterMessage(CreateDatabaseMetadata) - -GetDatabaseRequest = _reflection.GeneratedProtocolMessageType( - "GetDatabaseRequest", - (_message.Message,), - dict( - DESCRIPTOR=_GETDATABASEREQUEST, - __module__="google.cloud.spanner.admin.database_v1.proto.spanner_database_admin_pb2", - __doc__="""The request for - [GetDatabase][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabase]. - - - Attributes: - name: - Required. The name of the requested database. Values are of - the form ``projects//instances//databases/< - database>``. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.GetDatabaseRequest) - ), -) -_sym_db.RegisterMessage(GetDatabaseRequest) - -UpdateDatabaseDdlRequest = _reflection.GeneratedProtocolMessageType( - "UpdateDatabaseDdlRequest", - (_message.Message,), - dict( - DESCRIPTOR=_UPDATEDATABASEDDLREQUEST, - __module__="google.cloud.spanner.admin.database_v1.proto.spanner_database_admin_pb2", - __doc__="""Enqueues the given DDL statements to be applied, in order - but not necessarily all at once, to the database schema at some point - (or points) in the future. The server checks that the statements are - executable (syntactically valid, name tables that exist, etc.) 
before - enqueueing them, but they may still fail upon later execution (e.g., if - a statement from another batch of statements is applied first and it - conflicts in some way, or if there is some data-related problem like a - ``NULL`` value in a column to which ``NOT NULL`` would be added). If a - statement fails, all subsequent statements in the batch are - automatically cancelled. - - Each batch of statements is assigned a name which can be used with the - [Operations][google.longrunning.Operations] API to monitor progress. See - the - [operation\_id][google.spanner.admin.database.v1.UpdateDatabaseDdlRequest.operation\_id] - field for more details. - - - Attributes: - database: - Required. The database to update. - statements: - Required. DDL statements to be applied to the database. - operation_id: - If empty, the new update request is assigned an automatically- - generated operation ID. Otherwise, ``operation_id`` is used to - construct the name of the resulting - [Operation][google.longrunning.Operation]. Specifying an - explicit operation ID simplifies determining whether the - statements were executed in the event that the [UpdateDatabase - Ddl][google.spanner.admin.database.v1.DatabaseAdmin.UpdateData - baseDdl] call is replayed, or the return value is otherwise - lost: the [database][google.spanner.admin.database.v1.UpdateDa - tabaseDdlRequest.database] and ``operation_id`` fields can be - combined to form the [name][google.longrunning.Operation.name] - of the resulting - [longrunning.Operation][google.longrunning.Operation]: - ``/operations/``. ``operation_id`` - should be unique within the database, and must be a valid - identifier: ``[a-z][a-z0-9_]*``. Note that automatically- - generated operation IDs always begin with an underscore. If - the named operation already exists, [UpdateDatabaseDdl][google - .spanner.admin.database.v1.DatabaseAdmin.UpdateDatabaseDdl] - returns ``ALREADY_EXISTS``. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.UpdateDatabaseDdlRequest) - ), -) -_sym_db.RegisterMessage(UpdateDatabaseDdlRequest) - -UpdateDatabaseDdlMetadata = _reflection.GeneratedProtocolMessageType( - "UpdateDatabaseDdlMetadata", - (_message.Message,), - dict( - DESCRIPTOR=_UPDATEDATABASEDDLMETADATA, - __module__="google.cloud.spanner.admin.database_v1.proto.spanner_database_admin_pb2", - __doc__="""Metadata type for the operation returned by - [UpdateDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabaseDdl]. - - - Attributes: - database: - The database being modified. - statements: - For an update this list contains all the statements. For an - individual statement, this list contains only that statement. - commit_timestamps: - Reports the commit timestamps of all statements that have - succeeded so far, where ``commit_timestamps[i]`` is the commit - timestamp for the statement ``statements[i]``. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.UpdateDatabaseDdlMetadata) - ), -) -_sym_db.RegisterMessage(UpdateDatabaseDdlMetadata) - -DropDatabaseRequest = _reflection.GeneratedProtocolMessageType( - "DropDatabaseRequest", - (_message.Message,), - dict( - DESCRIPTOR=_DROPDATABASEREQUEST, - __module__="google.cloud.spanner.admin.database_v1.proto.spanner_database_admin_pb2", - __doc__="""The request for - [DropDatabase][google.spanner.admin.database.v1.DatabaseAdmin.DropDatabase]. - - - Attributes: - database: - Required. The database to be dropped. 
- """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.DropDatabaseRequest) - ), -) -_sym_db.RegisterMessage(DropDatabaseRequest) - -GetDatabaseDdlRequest = _reflection.GeneratedProtocolMessageType( - "GetDatabaseDdlRequest", - (_message.Message,), - dict( - DESCRIPTOR=_GETDATABASEDDLREQUEST, - __module__="google.cloud.spanner.admin.database_v1.proto.spanner_database_admin_pb2", - __doc__="""The request for - [GetDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl]. - - - Attributes: - database: - Required. The database whose schema we wish to get. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.GetDatabaseDdlRequest) - ), -) -_sym_db.RegisterMessage(GetDatabaseDdlRequest) - -GetDatabaseDdlResponse = _reflection.GeneratedProtocolMessageType( - "GetDatabaseDdlResponse", - (_message.Message,), - dict( - DESCRIPTOR=_GETDATABASEDDLRESPONSE, - __module__="google.cloud.spanner.admin.database_v1.proto.spanner_database_admin_pb2", - __doc__="""The response for - [GetDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl]. - - - Attributes: - statements: - A list of formatted DDL statements defining the schema of the - database specified in the request. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.GetDatabaseDdlResponse) - ), -) -_sym_db.RegisterMessage(GetDatabaseDdlResponse) - -ListDatabaseOperationsRequest = _reflection.GeneratedProtocolMessageType( - "ListDatabaseOperationsRequest", - (_message.Message,), - dict( - DESCRIPTOR=_LISTDATABASEOPERATIONSREQUEST, - __module__="google.cloud.spanner.admin.database_v1.proto.spanner_database_admin_pb2", - __doc__="""The request for - [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations]. - - - Attributes: - parent: - Required. The instance of the database operations. Values are - of the form ``projects//instances/``. - filter: - An expression that filters the list of returned operations. A - filter expression consists of a field name, a comparison - operator, and a value for filtering. The value must be a - string, a number, or a boolean. The comparison operator must - be one of: ``<``, ``>``, ``<=``, ``>=``, ``!=``, ``=``, or - ``:``. Colon ``:`` is the contains operator. Filter rules are - not case sensitive. The following fields in the - [Operation][google.longrunning.Operation] are eligible for - filtering: - ``name`` - The name of the long-running - operation - ``done`` - False if the operation is in progress, - else true. - ``metadata.@type`` - the type of metadata. For - example, the type string for [RestoreDatabaseMetadata][g - oogle.spanner.admin.database.v1.RestoreDatabaseMetadata] is - ``type.googleapis.com/google.spanner.admin.database.v1.Restore - DatabaseMetadata``. - ``metadata.`` - any field - in metadata.value. - ``error`` - Error associated with the - long-running operation. - ``response.@type`` - the type of - response. - ``response.`` - any field in - response.value. You can combine multiple expressions by - enclosing each expression in parentheses. By default, - expressions are combined with AND logic. However, you can - specify AND, OR, and NOT logic explicitly. Here are a few - examples: - ``done:true`` - The operation is complete. 
- `` - (metadata.@type=type.googleapis.com/google.spanner.admin.datab - ase.v1.RestoreDatabaseMetadata) AND`` - ``(metadata.source_type:BACKUP) AND`` - ``(metadata.backup_info.backup:backup_howl) AND`` - ``(metadata.name:restored_howl) AND`` - ``(metadata.progress.start_time < \"2018-03-28T14:50:00Z\") - AND`` ``(error:*)`` - Return operations where: - The - operation's metadata type is [RestoreDatabaseMetadata][g - oogle.spanner.admin.database.v1.RestoreDatabaseMetadata]. - - The database is restored from a backup. - The backup name - contains "backup\_howl". - The restored database's name - contains "restored\_howl". - The operation started before - 2018-03-28T14:50:00Z. - The operation resulted in an - error. - page_size: - Number of operations to be returned in the response. If 0 or - less, defaults to the server's maximum allowed page size. - page_token: - If non-empty, ``page_token`` should contain a [next\_page\_tok - en][google.spanner.admin.database.v1.ListDatabaseOperationsRes - ponse.next\_page\_token] from a previous [ListDatabaseOperatio - nsResponse][google.spanner.admin.database.v1.ListDatabaseOpera - tionsResponse] to the same ``parent`` and with the same - ``filter``. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.ListDatabaseOperationsRequest) - ), -) -_sym_db.RegisterMessage(ListDatabaseOperationsRequest) - -ListDatabaseOperationsResponse = _reflection.GeneratedProtocolMessageType( - "ListDatabaseOperationsResponse", - (_message.Message,), - dict( - DESCRIPTOR=_LISTDATABASEOPERATIONSRESPONSE, - __module__="google.cloud.spanner.admin.database_v1.proto.spanner_database_admin_pb2", - __doc__="""The response for - [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations]. - - - Attributes: - operations: - The list of matching database [long-running - operations][google.longrunning.Operation]. Each operation's - name will be prefixed by the database's name. The operation's - [metadata][google.longrunning.Operation.metadata] field type - ``metadata.type_url`` describes the type of the metadata. - next_page_token: - \ ``next_page_token`` can be sent in a subsequent [ListDatabas - eOperations][google.spanner.admin.database.v1.DatabaseAdmin.Li - stDatabaseOperations] call to fetch more of the matching - metadata. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.ListDatabaseOperationsResponse) - ), -) -_sym_db.RegisterMessage(ListDatabaseOperationsResponse) - -RestoreDatabaseRequest = _reflection.GeneratedProtocolMessageType( - "RestoreDatabaseRequest", - (_message.Message,), - dict( - DESCRIPTOR=_RESTOREDATABASEREQUEST, - __module__="google.cloud.spanner.admin.database_v1.proto.spanner_database_admin_pb2", - __doc__="""The request for - [RestoreDatabase][google.spanner.admin.database.v1.DatabaseAdmin.RestoreDatabase]. - - - Attributes: - parent: - Required. The name of the instance in which to create the - restored database. This instance must be in the same project - and have the same instance configuration as the instance - containing the source backup. Values are of the form - ``projects//instances/``. - database_id: - Required. The id of the database to create and restore to. - This database must not already exist. The ``database_id`` - appended to ``parent`` forms the full database name of the - form ``projects//instances//databases/``. - source: - Required. The source from which to restore. - backup: - Name of the backup from which to restore. 
Values are of the - form - ``projects//instances//backups/``. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.RestoreDatabaseRequest) - ), -) -_sym_db.RegisterMessage(RestoreDatabaseRequest) - -RestoreDatabaseMetadata = _reflection.GeneratedProtocolMessageType( - "RestoreDatabaseMetadata", - (_message.Message,), - dict( - DESCRIPTOR=_RESTOREDATABASEMETADATA, - __module__="google.cloud.spanner.admin.database_v1.proto.spanner_database_admin_pb2", - __doc__="""Metadata type for the long-running operation returned by - [RestoreDatabase][google.spanner.admin.database.v1.DatabaseAdmin.RestoreDatabase]. - - - Attributes: - name: - Name of the database being created and restored to. - source_type: - The type of the restore source. - source_info: - Information about the source used to restore the database, as - specified by ``source`` in [RestoreDatabaseRequest][google.spa - nner.admin.database.v1.RestoreDatabaseRequest]. - backup_info: - Information about the backup used to restore the database. - progress: - The progress of the [RestoreDatabase][google.spanner.admin.dat - abase.v1.DatabaseAdmin.RestoreDatabase] operation. - cancel_time: - The time at which cancellation of this operation was received. - [Operations.CancelOperation][google.longrunning.Operations.Can - celOperation] starts asynchronous cancellation on a long- - running operation. The server makes a best effort to cancel - the operation, but success is not guaranteed. Clients can use - [Operations.GetOperation][google.longrunning.Operations.GetOpe - ration] or other methods to check whether the cancellation - succeeded or whether the operation completed despite - cancellation. On successful cancellation, the operation is not - deleted; instead, it becomes an operation with an - [Operation.error][google.longrunning.Operation.error] value - with a [google.rpc.Status.code][google.rpc.Status.code] of 1, - corresponding to ``Code.CANCELLED``. - optimize_database_operation_name: - If exists, the name of the long-running operation that will be - used to track the post-restore optimization process to - optimize the performance of the restored database, and remove - the dependency on the restore source. The name is of the form - ``projects//instances//databases/ - /operations/`` where the is the name of database - being created and restored to. The metadata type of the long- - running operation is [OptimizeRestoredDatabaseMetadata][google - .spanner.admin.database.v1.OptimizeRestoredDatabaseMetadata]. - This long-running operation will be automatically created by - the system after the RestoreDatabase long-running operation - completes successfully. This operation will not be created if - the restore was not successful. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.RestoreDatabaseMetadata) - ), -) -_sym_db.RegisterMessage(RestoreDatabaseMetadata) - -OptimizeRestoredDatabaseMetadata = _reflection.GeneratedProtocolMessageType( - "OptimizeRestoredDatabaseMetadata", - (_message.Message,), - dict( - DESCRIPTOR=_OPTIMIZERESTOREDDATABASEMETADATA, - __module__="google.cloud.spanner.admin.database_v1.proto.spanner_database_admin_pb2", - __doc__="""Metadata type for the long-running operation used to track - the progress of optimizations performed on a newly restored database. - This long-running operation is automatically created by the system after - the successful completion of a database restore, and cannot be - cancelled. 
- - - Attributes: - name: - Name of the restored database being optimized. - progress: - The progress of the post-restore optimizations. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.database.v1.OptimizeRestoredDatabaseMetadata) - ), -) -_sym_db.RegisterMessage(OptimizeRestoredDatabaseMetadata) - - -DESCRIPTOR._options = None -_DATABASE.fields_by_name["name"]._options = None -_DATABASE.fields_by_name["state"]._options = None -_DATABASE.fields_by_name["create_time"]._options = None -_DATABASE.fields_by_name["restore_info"]._options = None -_DATABASE._options = None -_LISTDATABASESREQUEST.fields_by_name["parent"]._options = None -_CREATEDATABASEREQUEST.fields_by_name["parent"]._options = None -_CREATEDATABASEREQUEST.fields_by_name["create_statement"]._options = None -_CREATEDATABASEREQUEST.fields_by_name["extra_statements"]._options = None -_CREATEDATABASEMETADATA.fields_by_name["database"]._options = None -_GETDATABASEREQUEST.fields_by_name["name"]._options = None -_UPDATEDATABASEDDLREQUEST.fields_by_name["database"]._options = None -_UPDATEDATABASEDDLREQUEST.fields_by_name["statements"]._options = None -_UPDATEDATABASEDDLMETADATA.fields_by_name["database"]._options = None -_DROPDATABASEREQUEST.fields_by_name["database"]._options = None -_GETDATABASEDDLREQUEST.fields_by_name["database"]._options = None -_LISTDATABASEOPERATIONSREQUEST.fields_by_name["parent"]._options = None -_RESTOREDATABASEREQUEST.fields_by_name["parent"]._options = None -_RESTOREDATABASEREQUEST.fields_by_name["database_id"]._options = None -_RESTOREDATABASEREQUEST.fields_by_name["backup"]._options = None - -_DATABASEADMIN = _descriptor.ServiceDescriptor( - name="DatabaseAdmin", - full_name="google.spanner.admin.database.v1.DatabaseAdmin", - file=DESCRIPTOR, - index=0, - serialized_options=_b( - "\312A\026spanner.googleapis.com\322A\\https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/spanner.admin" - ), - serialized_start=3100, - serialized_end=7054, - methods=[ - _descriptor.MethodDescriptor( - name="ListDatabases", - full_name="google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases", - index=0, - containing_service=None, - input_type=_LISTDATABASESREQUEST, - output_type=_LISTDATABASESRESPONSE, - serialized_options=_b( - "\202\323\344\223\002/\022-/v1/{parent=projects/*/instances/*}/databases\332A\006parent" - ), - ), - _descriptor.MethodDescriptor( - name="CreateDatabase", - full_name="google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase", - index=1, - containing_service=None, - input_type=_CREATEDATABASEREQUEST, - output_type=google_dot_longrunning_dot_operations__pb2._OPERATION, - serialized_options=_b( - '\202\323\344\223\0022"-/v1/{parent=projects/*/instances/*}/databases:\001*\332A\027parent,create_statement\312Ad\n)google.spanner.admin.database.v1.Database\0227google.spanner.admin.database.v1.CreateDatabaseMetadata' - ), - ), - _descriptor.MethodDescriptor( - name="GetDatabase", - full_name="google.spanner.admin.database.v1.DatabaseAdmin.GetDatabase", - index=2, - containing_service=None, - input_type=_GETDATABASEREQUEST, - output_type=_DATABASE, - serialized_options=_b( - "\202\323\344\223\002/\022-/v1/{name=projects/*/instances/*/databases/*}\332A\004name" - ), - ), - _descriptor.MethodDescriptor( - name="UpdateDatabaseDdl", - full_name="google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabaseDdl", - index=3, - containing_service=None, - input_type=_UPDATEDATABASEDDLREQUEST, - output_type=google_dot_longrunning_dot_operations__pb2._OPERATION, 
- serialized_options=_b( - "\202\323\344\223\002:25/v1/{database=projects/*/instances/*/databases/*}/ddl:\001*\332A\023database,statements\312AS\n\025google.protobuf.Empty\022:google.spanner.admin.database.v1.UpdateDatabaseDdlMetadata" - ), - ), - _descriptor.MethodDescriptor( - name="DropDatabase", - full_name="google.spanner.admin.database.v1.DatabaseAdmin.DropDatabase", - index=4, - containing_service=None, - input_type=_DROPDATABASEREQUEST, - output_type=google_dot_protobuf_dot_empty__pb2._EMPTY, - serialized_options=_b( - "\202\323\344\223\0023*1/v1/{database=projects/*/instances/*/databases/*}\332A\010database" - ), - ), - _descriptor.MethodDescriptor( - name="GetDatabaseDdl", - full_name="google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl", - index=5, - containing_service=None, - input_type=_GETDATABASEDDLREQUEST, - output_type=_GETDATABASEDDLRESPONSE, - serialized_options=_b( - "\202\323\344\223\0027\0225/v1/{database=projects/*/instances/*/databases/*}/ddl\332A\010database" - ), - ), - _descriptor.MethodDescriptor( - name="SetIamPolicy", - full_name="google.spanner.admin.database.v1.DatabaseAdmin.SetIamPolicy", - index=6, - containing_service=None, - input_type=google_dot_iam_dot_v1_dot_iam__policy__pb2._SETIAMPOLICYREQUEST, - output_type=google_dot_iam_dot_v1_dot_policy__pb2._POLICY, - serialized_options=_b( - '\202\323\344\223\002\206\001">/v1/{resource=projects/*/instances/*/databases/*}:setIamPolicy:\001*ZA"/v1/{resource=projects/*/instances/*/databases/*}:getIamPolicy:\001*ZA"/operations/` and - can be used to track preparation of the database. The - [metadata][google.longrunning.Operation.metadata] field type is - [CreateDatabaseMetadata][google.spanner.admin.database.v1.CreateDatabaseMetadata]. The - [response][google.longrunning.Operation.response] field type is - [Database][google.spanner.admin.database.v1.Database], if successful. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def GetDatabase(self, request, context): - """Gets the state of a Cloud Spanner database. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def UpdateDatabaseDdl(self, request, context): - """Updates the schema of a Cloud Spanner database by - creating/altering/dropping tables, columns, indexes, etc. The returned - [long-running operation][google.longrunning.Operation] will have a name of - the format `/operations/` and can be used to - track execution of the schema change(s). The - [metadata][google.longrunning.Operation.metadata] field type is - [UpdateDatabaseDdlMetadata][google.spanner.admin.database.v1.UpdateDatabaseDdlMetadata]. The operation has no response. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def DropDatabase(self, request, context): - """Drops (aka deletes) a Cloud Spanner database. - Completed backups for the database will be retained according to their - `expire_time`. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def GetDatabaseDdl(self, request, context): - """Returns the schema of a Cloud Spanner database as a list of formatted - DDL statements. 
This method does not show pending schema updates, those may - be queried using the [Operations][google.longrunning.Operations] API. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def SetIamPolicy(self, request, context): - """Sets the access control policy on a database or backup resource. - Replaces any existing policy. - - Authorization requires `spanner.databases.setIamPolicy` - permission on [resource][google.iam.v1.SetIamPolicyRequest.resource]. - For backups, authorization requires `spanner.backups.setIamPolicy` - permission on [resource][google.iam.v1.SetIamPolicyRequest.resource]. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def GetIamPolicy(self, request, context): - """Gets the access control policy for a database or backup resource. - Returns an empty policy if a database or backup exists but does not have a - policy set. - - Authorization requires `spanner.databases.getIamPolicy` permission on - [resource][google.iam.v1.GetIamPolicyRequest.resource]. - For backups, authorization requires `spanner.backups.getIamPolicy` - permission on [resource][google.iam.v1.GetIamPolicyRequest.resource]. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def TestIamPermissions(self, request, context): - """Returns permissions that the caller has on the specified database or backup - resource. - - Attempting this RPC on a non-existent Cloud Spanner database will - result in a NOT_FOUND error if the user has - `spanner.databases.list` permission on the containing Cloud - Spanner instance. Otherwise returns an empty set of permissions. - Calling this method on a backup that does not exist will - result in a NOT_FOUND error if the user has - `spanner.backups.list` permission on the containing instance. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def CreateBackup(self, request, context): - """Starts creating a new Cloud Spanner Backup. - The returned backup [long-running operation][google.longrunning.Operation] - will have a name of the format - `projects//instances//backups//operations/` - and can be used to track creation of the backup. The - [metadata][google.longrunning.Operation.metadata] field type is - [CreateBackupMetadata][google.spanner.admin.database.v1.CreateBackupMetadata]. The - [response][google.longrunning.Operation.response] field type is - [Backup][google.spanner.admin.database.v1.Backup], if successful. Cancelling the returned operation will stop the - creation and delete the backup. - There can be only one pending backup creation per database. Backup creation - of different databases can run concurrently. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def GetBackup(self, request, context): - """Gets metadata on a pending or completed [Backup][google.spanner.admin.database.v1.Backup]. 
- """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def UpdateBackup(self, request, context): - """Updates a pending or completed [Backup][google.spanner.admin.database.v1.Backup]. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def DeleteBackup(self, request, context): - """Deletes a pending or completed [Backup][google.spanner.admin.database.v1.Backup]. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def ListBackups(self, request, context): - """Lists completed and pending backups. - Backups returned are ordered by `create_time` in descending order, - starting from the most recent `create_time`. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def RestoreDatabase(self, request, context): - """Create a new database by restoring from a completed backup. The new - database must be in the same project and in an instance with the same - instance configuration as the instance containing - the backup. The returned database [long-running - operation][google.longrunning.Operation] has a name of the format - `projects//instances//databases//operations/`, - and can be used to track the progress of the operation, and to cancel it. - The [metadata][google.longrunning.Operation.metadata] field type is - [RestoreDatabaseMetadata][google.spanner.admin.database.v1.RestoreDatabaseMetadata]. - The [response][google.longrunning.Operation.response] type - is [Database][google.spanner.admin.database.v1.Database], if - successful. Cancelling the returned operation will stop the restore and - delete the database. - There can be only one database being restored into an instance at a time. - Once the restore operation completes, a new restore operation can be - initiated, without waiting for the optimize operation associated with the - first restore to complete. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def ListDatabaseOperations(self, request, context): - """Lists database [longrunning-operations][google.longrunning.Operation]. - A database operation has a name of the form - `projects//instances//databases//operations/`. - The long-running operation - [metadata][google.longrunning.Operation.metadata] field type - `metadata.type_url` describes the type of the metadata. Operations returned - include those that have completed/failed/canceled within the last 7 days, - and pending operations. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def ListBackupOperations(self, request, context): - """Lists the backup [long-running operations][google.longrunning.Operation] in - the given instance. A backup operation has a name of the form - `projects//instances//backups//operations/`. - The long-running operation - [metadata][google.longrunning.Operation.metadata] field type - `metadata.type_url` describes the type of the metadata. 
Operations returned - include those that have completed/failed/canceled within the last 7 days, - and pending operations. Operations returned are ordered by - `operation.metadata.value.progress.start_time` in descending order starting - from the most recently started operation. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - -def add_DatabaseAdminServicer_to_server(servicer, server): - rpc_method_handlers = { - "ListDatabases": grpc.unary_unary_rpc_method_handler( - servicer.ListDatabases, - request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_spanner__database__admin__pb2.ListDatabasesRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_spanner__database__admin__pb2.ListDatabasesResponse.SerializeToString, - ), - "CreateDatabase": grpc.unary_unary_rpc_method_handler( - servicer.CreateDatabase, - request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_spanner__database__admin__pb2.CreateDatabaseRequest.FromString, - response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString, - ), - "GetDatabase": grpc.unary_unary_rpc_method_handler( - servicer.GetDatabase, - request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_spanner__database__admin__pb2.GetDatabaseRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_spanner__database__admin__pb2.Database.SerializeToString, - ), - "UpdateDatabaseDdl": grpc.unary_unary_rpc_method_handler( - servicer.UpdateDatabaseDdl, - request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_spanner__database__admin__pb2.UpdateDatabaseDdlRequest.FromString, - response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString, - ), - "DropDatabase": grpc.unary_unary_rpc_method_handler( - servicer.DropDatabase, - request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_spanner__database__admin__pb2.DropDatabaseRequest.FromString, - response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString, - ), - "GetDatabaseDdl": grpc.unary_unary_rpc_method_handler( - servicer.GetDatabaseDdl, - request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_spanner__database__admin__pb2.GetDatabaseDdlRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_spanner__database__admin__pb2.GetDatabaseDdlResponse.SerializeToString, - ), - "SetIamPolicy": grpc.unary_unary_rpc_method_handler( - servicer.SetIamPolicy, - request_deserializer=google_dot_iam_dot_v1_dot_iam__policy__pb2.SetIamPolicyRequest.FromString, - response_serializer=google_dot_iam_dot_v1_dot_policy__pb2.Policy.SerializeToString, - ), - "GetIamPolicy": grpc.unary_unary_rpc_method_handler( - servicer.GetIamPolicy, - request_deserializer=google_dot_iam_dot_v1_dot_iam__policy__pb2.GetIamPolicyRequest.FromString, - response_serializer=google_dot_iam_dot_v1_dot_policy__pb2.Policy.SerializeToString, - ), - "TestIamPermissions": grpc.unary_unary_rpc_method_handler( - servicer.TestIamPermissions, - request_deserializer=google_dot_iam_dot_v1_dot_iam__policy__pb2.TestIamPermissionsRequest.FromString, - 
response_serializer=google_dot_iam_dot_v1_dot_iam__policy__pb2.TestIamPermissionsResponse.SerializeToString, - ), - "CreateBackup": grpc.unary_unary_rpc_method_handler( - servicer.CreateBackup, - request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_backup__pb2.CreateBackupRequest.FromString, - response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString, - ), - "GetBackup": grpc.unary_unary_rpc_method_handler( - servicer.GetBackup, - request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_backup__pb2.GetBackupRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_backup__pb2.Backup.SerializeToString, - ), - "UpdateBackup": grpc.unary_unary_rpc_method_handler( - servicer.UpdateBackup, - request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_backup__pb2.UpdateBackupRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_backup__pb2.Backup.SerializeToString, - ), - "DeleteBackup": grpc.unary_unary_rpc_method_handler( - servicer.DeleteBackup, - request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_backup__pb2.DeleteBackupRequest.FromString, - response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString, - ), - "ListBackups": grpc.unary_unary_rpc_method_handler( - servicer.ListBackups, - request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_backup__pb2.ListBackupsRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_backup__pb2.ListBackupsResponse.SerializeToString, - ), - "RestoreDatabase": grpc.unary_unary_rpc_method_handler( - servicer.RestoreDatabase, - request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_spanner__database__admin__pb2.RestoreDatabaseRequest.FromString, - response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString, - ), - "ListDatabaseOperations": grpc.unary_unary_rpc_method_handler( - servicer.ListDatabaseOperations, - request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_spanner__database__admin__pb2.ListDatabaseOperationsRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_spanner__database__admin__pb2.ListDatabaseOperationsResponse.SerializeToString, - ), - "ListBackupOperations": grpc.unary_unary_rpc_method_handler( - servicer.ListBackupOperations, - request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_backup__pb2.ListBackupOperationsRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_database__v1_dot_proto_dot_backup__pb2.ListBackupOperationsResponse.SerializeToString, - ), - } - generic_handler = grpc.method_handlers_generic_handler( - "google.spanner.admin.database.v1.DatabaseAdmin", rpc_method_handlers - ) - server.add_generic_rpc_handlers((generic_handler,)) diff --git a/google/cloud/spanner_admin_database_v1/py.typed b/google/cloud/spanner_admin_database_v1/py.typed new file mode 100644 index 0000000000..29f334aad6 --- /dev/null +++ b/google/cloud/spanner_admin_database_v1/py.typed @@ -0,0 +1,2 @@ +# Marker file for PEP 561. +# The google-cloud-spanner-admin-database package uses inline types. 
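The ``py.typed`` marker above opts the package into PEP 561, meaning type
checkers such as mypy will read the inline annotations shipped with the
generated clients instead of treating the package as untyped. A minimal
sketch of what that enables (hypothetical user code, not part of this
change; the resource name is a placeholder):

.. code-block:: python

    # check_types.py -- with py.typed present, running
    # ``mypy check_types.py`` validates this code against the package's
    # inline annotations.
    from google.cloud.spanner_admin_database_v1 import DatabaseAdminClient

    client = DatabaseAdminClient()
    database = client.get_database(
        name="projects/my-project/instances/my-instance/databases/my-db"
    )
    reveal_type(database)  # Revealed type is the generated Database message.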
diff --git a/google/cloud/spanner_admin_database_v1/services/__init__.py b/google/cloud/spanner_admin_database_v1/services/__init__.py new file mode 100644 index 0000000000..cbf94b283c --- /dev/null +++ b/google/cloud/spanner_admin_database_v1/services/__init__.py @@ -0,0 +1,15 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# diff --git a/google/cloud/spanner_admin_database_v1/services/database_admin/__init__.py b/google/cloud/spanner_admin_database_v1/services/database_admin/__init__.py new file mode 100644 index 0000000000..580a7ed2a2 --- /dev/null +++ b/google/cloud/spanner_admin_database_v1/services/database_admin/__init__.py @@ -0,0 +1,22 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +from .client import DatabaseAdminClient +from .async_client import DatabaseAdminAsyncClient + +__all__ = ( + "DatabaseAdminClient", + "DatabaseAdminAsyncClient", +) diff --git a/google/cloud/spanner_admin_database_v1/services/database_admin/async_client.py b/google/cloud/spanner_admin_database_v1/services/database_admin/async_client.py new file mode 100644 index 0000000000..0e08065a7d --- /dev/null +++ b/google/cloud/spanner_admin_database_v1/services/database_admin/async_client.py @@ -0,0 +1,4218 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+#
+import logging as std_logging
+from collections import OrderedDict
+import re
+from typing import (
+    Dict,
+    Callable,
+    Mapping,
+    MutableMapping,
+    MutableSequence,
+    Optional,
+    Sequence,
+    Tuple,
+    Type,
+    Union,
+)
+import uuid
+
+from google.cloud.spanner_admin_database_v1 import gapic_version as package_version
+
+from google.api_core.client_options import ClientOptions
+from google.api_core import exceptions as core_exceptions
+from google.api_core import gapic_v1
+from google.api_core import retry_async as retries
+from google.auth import credentials as ga_credentials  # type: ignore
+from google.oauth2 import service_account  # type: ignore
+import google.protobuf
+
+
+try:
+    OptionalRetry = Union[retries.AsyncRetry, gapic_v1.method._MethodDefault, None]
+except AttributeError:  # pragma: NO COVER
+    OptionalRetry = Union[retries.AsyncRetry, object, None]  # type: ignore
+
+from google.api_core import operation  # type: ignore
+from google.api_core import operation_async  # type: ignore
+from google.cloud.spanner_admin_database_v1.services.database_admin import pagers
+from google.cloud.spanner_admin_database_v1.types import backup
+from google.cloud.spanner_admin_database_v1.types import backup as gsad_backup
+from google.cloud.spanner_admin_database_v1.types import backup_schedule
+from google.cloud.spanner_admin_database_v1.types import (
+    backup_schedule as gsad_backup_schedule,
+)
+from google.cloud.spanner_admin_database_v1.types import common
+from google.cloud.spanner_admin_database_v1.types import spanner_database_admin
+from google.iam.v1 import iam_policy_pb2  # type: ignore
+from google.iam.v1 import policy_pb2  # type: ignore
+from google.longrunning import operations_pb2  # type: ignore
+from google.protobuf import duration_pb2  # type: ignore
+from google.protobuf import empty_pb2  # type: ignore
+from google.protobuf import field_mask_pb2  # type: ignore
+from google.protobuf import timestamp_pb2  # type: ignore
+from .transports.base import DatabaseAdminTransport, DEFAULT_CLIENT_INFO
+from .transports.grpc_asyncio import DatabaseAdminGrpcAsyncIOTransport
+from .client import DatabaseAdminClient
+
+try:
+    from google.api_core import client_logging  # type: ignore
+
+    CLIENT_LOGGING_SUPPORTED = True  # pragma: NO COVER
+except ImportError:  # pragma: NO COVER
+    CLIENT_LOGGING_SUPPORTED = False
+
+_LOGGER = std_logging.getLogger(__name__)
+
+
+class DatabaseAdminAsyncClient:
+    """Cloud Spanner Database Admin API
+
+    The Cloud Spanner Database Admin API can be used to:
+
+    - create, drop, and list databases
+    - update the schema of pre-existing databases
+    - create, delete, copy and list backups for a database
+    - restore a database from an existing backup
+    """
+
+    _client: DatabaseAdminClient
+
+    # Copy defaults from the synchronous client for use here.
+    # Note: DEFAULT_ENDPOINT is deprecated. Use _DEFAULT_ENDPOINT_TEMPLATE instead.
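+    # These values are copied rather than re-derived so that the async client
+    # always resolves the same endpoint as the sync client; with the default
+    # universe domain the endpoint template expands to "spanner.googleapis.com".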
+ DEFAULT_ENDPOINT = DatabaseAdminClient.DEFAULT_ENDPOINT + DEFAULT_MTLS_ENDPOINT = DatabaseAdminClient.DEFAULT_MTLS_ENDPOINT + _DEFAULT_ENDPOINT_TEMPLATE = DatabaseAdminClient._DEFAULT_ENDPOINT_TEMPLATE + _DEFAULT_UNIVERSE = DatabaseAdminClient._DEFAULT_UNIVERSE + + backup_path = staticmethod(DatabaseAdminClient.backup_path) + parse_backup_path = staticmethod(DatabaseAdminClient.parse_backup_path) + backup_schedule_path = staticmethod(DatabaseAdminClient.backup_schedule_path) + parse_backup_schedule_path = staticmethod( + DatabaseAdminClient.parse_backup_schedule_path + ) + crypto_key_path = staticmethod(DatabaseAdminClient.crypto_key_path) + parse_crypto_key_path = staticmethod(DatabaseAdminClient.parse_crypto_key_path) + crypto_key_version_path = staticmethod(DatabaseAdminClient.crypto_key_version_path) + parse_crypto_key_version_path = staticmethod( + DatabaseAdminClient.parse_crypto_key_version_path + ) + database_path = staticmethod(DatabaseAdminClient.database_path) + parse_database_path = staticmethod(DatabaseAdminClient.parse_database_path) + database_role_path = staticmethod(DatabaseAdminClient.database_role_path) + parse_database_role_path = staticmethod( + DatabaseAdminClient.parse_database_role_path + ) + instance_path = staticmethod(DatabaseAdminClient.instance_path) + parse_instance_path = staticmethod(DatabaseAdminClient.parse_instance_path) + instance_partition_path = staticmethod(DatabaseAdminClient.instance_partition_path) + parse_instance_partition_path = staticmethod( + DatabaseAdminClient.parse_instance_partition_path + ) + common_billing_account_path = staticmethod( + DatabaseAdminClient.common_billing_account_path + ) + parse_common_billing_account_path = staticmethod( + DatabaseAdminClient.parse_common_billing_account_path + ) + common_folder_path = staticmethod(DatabaseAdminClient.common_folder_path) + parse_common_folder_path = staticmethod( + DatabaseAdminClient.parse_common_folder_path + ) + common_organization_path = staticmethod( + DatabaseAdminClient.common_organization_path + ) + parse_common_organization_path = staticmethod( + DatabaseAdminClient.parse_common_organization_path + ) + common_project_path = staticmethod(DatabaseAdminClient.common_project_path) + parse_common_project_path = staticmethod( + DatabaseAdminClient.parse_common_project_path + ) + common_location_path = staticmethod(DatabaseAdminClient.common_location_path) + parse_common_location_path = staticmethod( + DatabaseAdminClient.parse_common_location_path + ) + + @classmethod + def from_service_account_info(cls, info: dict, *args, **kwargs): + """Creates an instance of this client using the provided credentials + info. + + Args: + info (dict): The service account private key info. + args: Additional arguments to pass to the constructor. + kwargs: Additional arguments to pass to the constructor. + + Returns: + DatabaseAdminAsyncClient: The constructed client. + """ + return DatabaseAdminClient.from_service_account_info.__func__(DatabaseAdminAsyncClient, info, *args, **kwargs) # type: ignore + + @classmethod + def from_service_account_file(cls, filename: str, *args, **kwargs): + """Creates an instance of this client using the provided credentials + file. + + Args: + filename (str): The path to the service account private key json + file. + args: Additional arguments to pass to the constructor. + kwargs: Additional arguments to pass to the constructor. + + Returns: + DatabaseAdminAsyncClient: The constructed client. 
+        """
+        return DatabaseAdminClient.from_service_account_file.__func__(DatabaseAdminAsyncClient, filename, *args, **kwargs)  # type: ignore
+
+    from_service_account_json = from_service_account_file
+
+    @classmethod
+    def get_mtls_endpoint_and_cert_source(
+        cls, client_options: Optional[ClientOptions] = None
+    ):
+        """Return the API endpoint and client cert source for mutual TLS.
+
+        The client cert source is determined in the following order:
+        (1) if the `GOOGLE_API_USE_CLIENT_CERTIFICATE` environment variable is not "true", the
+        client cert source is None.
+        (2) if `client_options.client_cert_source` is provided, use the provided one; if the
+        default client cert source exists, use the default one; otherwise the client cert
+        source is None.
+
+        The API endpoint is determined in the following order:
+        (1) if `client_options.api_endpoint` is provided, use the provided one.
+        (2) if the `GOOGLE_API_USE_MTLS_ENDPOINT` environment variable is "always", use the
+        default mTLS endpoint; if the environment variable is "never", use the default API
+        endpoint; otherwise if a client cert source exists, use the default mTLS endpoint, otherwise
+        use the default API endpoint.
+
+        More details can be found at https://google.aip.dev/auth/4114.
+
+        Args:
+            client_options (google.api_core.client_options.ClientOptions): Custom options for the
+                client. Only the `api_endpoint` and `client_cert_source` properties may be used
+                in this method.
+
+        Returns:
+            Tuple[str, Callable[[], Tuple[bytes, bytes]]]: returns the API endpoint and the
+                client cert source to use.
+
+        Raises:
+            google.auth.exceptions.MutualTLSChannelError: If any errors happen.
+        """
+        return DatabaseAdminClient.get_mtls_endpoint_and_cert_source(client_options)  # type: ignore
+
+    @property
+    def transport(self) -> DatabaseAdminTransport:
+        """Returns the transport used by the client instance.
+
+        Returns:
+            DatabaseAdminTransport: The transport used by the client instance.
+        """
+        return self._client.transport
+
+    @property
+    def api_endpoint(self):
+        """Return the API endpoint used by the client instance.
+
+        Returns:
+            str: The API endpoint used by the client instance.
+        """
+        return self._client._api_endpoint
+
+    @property
+    def universe_domain(self) -> str:
+        """Return the universe domain used by the client instance.
+
+        Returns:
+            str: The universe domain used by the client instance.
+        """
+        return self._client._universe_domain
+
+    get_transport_class = DatabaseAdminClient.get_transport_class
+
+    def __init__(
+        self,
+        *,
+        credentials: Optional[ga_credentials.Credentials] = None,
+        transport: Optional[
+            Union[str, DatabaseAdminTransport, Callable[..., DatabaseAdminTransport]]
+        ] = "grpc_asyncio",
+        client_options: Optional[ClientOptions] = None,
+        client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO,
+    ) -> None:
+        """Instantiates the database admin async client.
+
+        Args:
+            credentials (Optional[google.auth.credentials.Credentials]): The
+                authorization credentials to attach to requests. These
+                credentials identify the application to the service; if none
+                are specified, the client will attempt to ascertain the
+                credentials from the environment.
+            transport (Optional[Union[str,DatabaseAdminTransport,Callable[..., DatabaseAdminTransport]]]):
+                The transport to use, or a Callable that constructs and returns a new transport to use.
+                If a Callable is given, it will be called with the same set of initialization
+                arguments as used in the DatabaseAdminTransport constructor.
+                If set to None, a transport is chosen automatically.
+            client_options (Optional[Union[google.api_core.client_options.ClientOptions, dict]]):
+                Custom options for the client.
+
+                1. The ``api_endpoint`` property can be used to override the
+                default endpoint provided by the client when ``transport`` is
+                not explicitly provided. Only if this property is not set and
+                ``transport`` was not explicitly provided, the endpoint is
+                determined by the GOOGLE_API_USE_MTLS_ENDPOINT environment
+                variable, which can have one of the following values:
+                "always" (always use the default mTLS endpoint), "never" (always
+                use the default regular endpoint) and "auto" (auto-switch to the
+                default mTLS endpoint if a client certificate is present; this is
+                the default value).
+
+                2. If the GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable
+                is "true", then the ``client_cert_source`` property can be used
+                to provide a client certificate for mTLS transport. If
+                not provided, the default SSL client certificate will be used if
+                present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not
+                set, no client certificate will be used.
+
+                3. The ``universe_domain`` property can be used to override the
+                default "googleapis.com" universe. Note that the ``api_endpoint``
+                property still takes precedence; and ``universe_domain`` is
+                currently not supported for mTLS.
+
+            client_info (google.api_core.gapic_v1.client_info.ClientInfo):
+                The client info used to send a user-agent string along with
+                API requests. If ``None``, then default info will be used.
+                Generally, you only need to set this if you're developing
+                your own client library.
+
+        Raises:
+            google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport
+                creation failed for any reason.
+        """
+        self._client = DatabaseAdminClient(
+            credentials=credentials,
+            transport=transport,
+            client_options=client_options,
+            client_info=client_info,
+        )
+
+        if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor(
+            std_logging.DEBUG
+        ):  # pragma: NO COVER
+            _LOGGER.debug(
+                "Created client `google.spanner.admin.database_v1.DatabaseAdminAsyncClient`.",
+                extra={
+                    "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin",
+                    "universeDomain": getattr(
+                        self._client._transport._credentials, "universe_domain", ""
+                    ),
+                    "credentialsType": f"{type(self._client._transport._credentials).__module__}.{type(self._client._transport._credentials).__qualname__}",
+                    "credentialsInfo": getattr(
+                        self.transport._credentials, "get_cred_info", lambda: None
+                    )(),
+                }
+                if hasattr(self._client._transport, "_credentials")
+                else {
+                    "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin",
+                    "credentialsType": None,
+                },
+            )
+
+    async def list_databases(
+        self,
+        request: Optional[
+            Union[spanner_database_admin.ListDatabasesRequest, dict]
+        ] = None,
+        *,
+        parent: Optional[str] = None,
+        retry: OptionalRetry = gapic_v1.method.DEFAULT,
+        timeout: Union[float, object] = gapic_v1.method.DEFAULT,
+        metadata: Sequence[Tuple[str, Union[str, bytes]]] = (),
+    ) -> pagers.ListDatabasesAsyncPager:
+        r"""Lists Cloud Spanner databases.
+
+        .. code-block:: python
+
+            # This snippet has been automatically generated and should be regarded as a
+            # code template only.
+            # It will require modifications to work:
+            # - It may require correct/in-range values for request initialization.
+            # - It may require specifying regional endpoints when creating the service
+            #   client as shown in:
+            #   https://googleapis.dev/python/google-api-core/latest/client_options.html
+            from google.cloud import spanner_admin_database_v1
+
+            async def sample_list_databases():
+                # Create a client
+                client = spanner_admin_database_v1.DatabaseAdminAsyncClient()
+
+                # Initialize request argument(s)
+                request = spanner_admin_database_v1.ListDatabasesRequest(
+                    parent="parent_value",
+                )
+
+                # Make the request
+                page_result = await client.list_databases(request=request)
+
+                # Handle the response
+                async for response in page_result:
+                    print(response)
+
+        Args:
+            request (Optional[Union[google.cloud.spanner_admin_database_v1.types.ListDatabasesRequest, dict]]):
+                The request object. The request for
+                [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases].
+            parent (:class:`str`):
+                Required. The instance whose databases should be listed.
+                Values are of the form
+                ``projects/<project>/instances/<instance>``.
+
+                This corresponds to the ``parent`` field
+                on the ``request`` instance; if ``request`` is provided, this
+                should not be set.
+            retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any,
+                should be retried.
+            timeout (float): The timeout for this request.
+            metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be
+                sent along with the request as metadata. Normally, each value must be of type `str`,
+                but for metadata keys ending with the suffix `-bin`, the corresponding values must
+                be of type `bytes`.
+
+        Returns:
+            google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListDatabasesAsyncPager:
+                The response for
+                [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases].
+
+                Iterating over this object will yield results and
+                resolve additional pages automatically.
+
+        """
+        # Create or coerce a protobuf request object.
+        # - Quick check: If we got a request object, we should *not* have
+        #   gotten any keyword arguments that map to the request.
+        flattened_params = [parent]
+        has_flattened_params = (
+            len([param for param in flattened_params if param is not None]) > 0
+        )
+        if request is not None and has_flattened_params:
+            raise ValueError(
+                "If the `request` argument is set, then none of "
+                "the individual field arguments should be set."
+            )
+
+        # - Use the request object if provided (there's no risk of modifying the input as
+        #   there are no flattened fields), or create one.
+        if not isinstance(request, spanner_database_admin.ListDatabasesRequest):
+            request = spanner_database_admin.ListDatabasesRequest(request)
+
+        # If we have keyword arguments corresponding to fields on the
+        # request, apply these.
+        if parent is not None:
+            request.parent = parent
+
+        # Wrap the RPC method; this adds retry and timeout information,
+        # and friendly error handling.
+        rpc = self._client._transport._wrapped_methods[
+            self._client._transport.list_databases
+        ]
+
+        # Certain fields should be provided within the metadata header;
+        # add these here.
+        metadata = tuple(metadata) + (
+            gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
+        )
+
+        # Validate the universe domain.
+        self._client._validate_universe_domain()
+
+        # Send the request.
+        response = await rpc(
+            request,
+            retry=retry,
+            timeout=timeout,
+            metadata=metadata,
+        )
+
+        # This method is paged; wrap the response in a pager, which provides
+        # an `__aiter__` convenience method.
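+        # The pager keeps the wrapped RPC, the original request, and this first
+        # response, so it can fetch further pages on demand (following
+        # ``next_page_token``) while the caller iterates.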
+        response = pagers.ListDatabasesAsyncPager(
+            method=rpc,
+            request=request,
+            response=response,
+            retry=retry,
+            timeout=timeout,
+            metadata=metadata,
+        )
+
+        # Done; return the response.
+        return response
+
+    async def create_database(
+        self,
+        request: Optional[
+            Union[spanner_database_admin.CreateDatabaseRequest, dict]
+        ] = None,
+        *,
+        parent: Optional[str] = None,
+        create_statement: Optional[str] = None,
+        retry: OptionalRetry = gapic_v1.method.DEFAULT,
+        timeout: Union[float, object] = gapic_v1.method.DEFAULT,
+        metadata: Sequence[Tuple[str, Union[str, bytes]]] = (),
+    ) -> operation_async.AsyncOperation:
+        r"""Creates a new Cloud Spanner database and starts to prepare it
+        for serving. The returned [long-running
+        operation][google.longrunning.Operation] will have a name of the
+        format ``<database>/operations/<operation_id>`` and can be
+        used to track preparation of the database. The
+        [metadata][google.longrunning.Operation.metadata] field type is
+        [CreateDatabaseMetadata][google.spanner.admin.database.v1.CreateDatabaseMetadata].
+        The [response][google.longrunning.Operation.response] field type
+        is [Database][google.spanner.admin.database.v1.Database], if
+        successful.
+
+        .. code-block:: python
+
+            # This snippet has been automatically generated and should be regarded as a
+            # code template only.
+            # It will require modifications to work:
+            # - It may require correct/in-range values for request initialization.
+            # - It may require specifying regional endpoints when creating the service
+            #   client as shown in:
+            #   https://googleapis.dev/python/google-api-core/latest/client_options.html
+            from google.cloud import spanner_admin_database_v1
+
+            async def sample_create_database():
+                # Create a client
+                client = spanner_admin_database_v1.DatabaseAdminAsyncClient()
+
+                # Initialize request argument(s)
+                request = spanner_admin_database_v1.CreateDatabaseRequest(
+                    parent="parent_value",
+                    create_statement="create_statement_value",
+                )
+
+                # Make the request
+                operation = client.create_database(request=request)
+
+                print("Waiting for operation to complete...")
+
+                response = await (await operation).result()
+
+                # Handle the response
+                print(response)
+
+        Args:
+            request (Optional[Union[google.cloud.spanner_admin_database_v1.types.CreateDatabaseRequest, dict]]):
+                The request object. The request for
+                [CreateDatabase][google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase].
+            parent (:class:`str`):
+                Required. The name of the instance that will serve the
+                new database. Values are of the form
+                ``projects/<project>/instances/<instance>``.
+
+                This corresponds to the ``parent`` field
+                on the ``request`` instance; if ``request`` is provided, this
+                should not be set.
+            create_statement (:class:`str`):
+                Required. A ``CREATE DATABASE`` statement, which
+                specifies the ID of the new database. The database ID
+                must conform to the regular expression
+                ``[a-z][a-z0-9_\-]*[a-z0-9]`` and be between 2 and 30
+                characters in length. If the database ID is a reserved
+                word or if it contains a hyphen, the database ID must be
+                enclosed in backticks (:literal:`\``).
+
+                This corresponds to the ``create_statement`` field
+                on the ``request`` instance; if ``request`` is provided, this
+                should not be set.
+            retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any,
+                should be retried.
+            timeout (float): The timeout for this request.
+            metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be
+                sent along with the request as metadata.
Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation_async.AsyncOperation: + An object representing a long-running operation. + + The result type for the operation will be + :class:`google.cloud.spanner_admin_database_v1.types.Database` + A Cloud Spanner database. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent, create_statement] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_database_admin.CreateDatabaseRequest): + request = spanner_database_admin.CreateDatabaseRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + if create_statement is not None: + request.create_statement = create_statement + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.create_database + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation_async.from_gapic( + response, + self._client._transport.operations_client, + spanner_database_admin.Database, + metadata_type=spanner_database_admin.CreateDatabaseMetadata, + ) + + # Done; return the response. + return response + + async def get_database( + self, + request: Optional[ + Union[spanner_database_admin.GetDatabaseRequest, dict] + ] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_database_admin.Database: + r"""Gets the state of a Cloud Spanner database. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_get_database(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.GetDatabaseRequest( + name="name_value", + ) + + # Make the request + response = await client.get_database(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.GetDatabaseRequest, dict]]): + The request object. The request for + [GetDatabase][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabase]. + name (:class:`str`): + Required. The name of the requested database. Values are + of the form + ``projects//instances//databases/``. + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.types.Database: + A Cloud Spanner database. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_database_admin.GetDatabaseRequest): + request = spanner_database_admin.GetDatabaseRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.get_database + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. 
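+        # (The returned Database message reports the database's current
+        #  lifecycle state, e.g. CREATING or READY.)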
+ return response + + async def update_database( + self, + request: Optional[ + Union[spanner_database_admin.UpdateDatabaseRequest, dict] + ] = None, + *, + database: Optional[spanner_database_admin.Database] = None, + update_mask: Optional[field_mask_pb2.FieldMask] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation_async.AsyncOperation: + r"""Updates a Cloud Spanner database. The returned [long-running + operation][google.longrunning.Operation] can be used to track + the progress of updating the database. If the named database + does not exist, returns ``NOT_FOUND``. + + While the operation is pending: + + - The database's + [reconciling][google.spanner.admin.database.v1.Database.reconciling] + field is set to true. + - Cancelling the operation is best-effort. If the cancellation + succeeds, the operation metadata's + [cancel_time][google.spanner.admin.database.v1.UpdateDatabaseMetadata.cancel_time] + is set, the updates are reverted, and the operation terminates + with a ``CANCELLED`` status. + - New UpdateDatabase requests will return a + ``FAILED_PRECONDITION`` error until the pending operation is + done (returns successfully or with error). + - Reading the database via the API continues to give the + pre-request values. + + Upon completion of the returned operation: + + - The new values are in effect and readable via the API. + - The database's + [reconciling][google.spanner.admin.database.v1.Database.reconciling] + field becomes false. + + The returned [long-running + operation][google.longrunning.Operation] will have a name of the + format + ``projects//instances//databases//operations/`` + and can be used to track the database modification. The + [metadata][google.longrunning.Operation.metadata] field type is + [UpdateDatabaseMetadata][google.spanner.admin.database.v1.UpdateDatabaseMetadata]. + The [response][google.longrunning.Operation.response] field type + is [Database][google.spanner.admin.database.v1.Database], if + successful. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_update_database(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + database = spanner_admin_database_v1.Database() + database.name = "name_value" + + request = spanner_admin_database_v1.UpdateDatabaseRequest( + database=database, + ) + + # Make the request + operation = client.update_database(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.UpdateDatabaseRequest, dict]]): + The request object. The request for + [UpdateDatabase][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabase]. + database (:class:`google.cloud.spanner_admin_database_v1.types.Database`): + Required. The database to update. 
The ``name`` field of + the database is of the form + ``projects//instances//databases/``. + + This corresponds to the ``database`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + update_mask (:class:`google.protobuf.field_mask_pb2.FieldMask`): + Required. The list of fields to update. Currently, only + ``enable_drop_protection`` field can be updated. + + This corresponds to the ``update_mask`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation_async.AsyncOperation: + An object representing a long-running operation. + + The result type for the operation will be + :class:`google.cloud.spanner_admin_database_v1.types.Database` + A Cloud Spanner database. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [database, update_mask] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_database_admin.UpdateDatabaseRequest): + request = spanner_database_admin.UpdateDatabaseRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if database is not None: + request.database = database + if update_mask is not None: + request.update_mask = update_mask + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.update_database + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata( + (("database.name", request.database.name),) + ), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation_async.from_gapic( + response, + self._client._transport.operations_client, + spanner_database_admin.Database, + metadata_type=spanner_database_admin.UpdateDatabaseMetadata, + ) + + # Done; return the response. 
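+        # (Awaiting the returned operation's result yields the updated
+        #  Database once the server has finished reconciling the change.)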
+ return response + + async def update_database_ddl( + self, + request: Optional[ + Union[spanner_database_admin.UpdateDatabaseDdlRequest, dict] + ] = None, + *, + database: Optional[str] = None, + statements: Optional[MutableSequence[str]] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation_async.AsyncOperation: + r"""Updates the schema of a Cloud Spanner database by + creating/altering/dropping tables, columns, indexes, etc. The + returned [long-running operation][google.longrunning.Operation] + will have a name of the format + ``/operations/`` and can be used to + track execution of the schema change(s). The + [metadata][google.longrunning.Operation.metadata] field type is + [UpdateDatabaseDdlMetadata][google.spanner.admin.database.v1.UpdateDatabaseDdlMetadata]. + The operation has no response. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_update_database_ddl(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.UpdateDatabaseDdlRequest( + database="database_value", + statements=['statements_value1', 'statements_value2'], + ) + + # Make the request + operation = client.update_database_ddl(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.UpdateDatabaseDdlRequest, dict]]): + The request object. Enqueues the given DDL statements to be applied, in + order but not necessarily all at once, to the database + schema at some point (or points) in the future. The + server checks that the statements are executable + (syntactically valid, name tables that exist, etc.) + before enqueueing them, but they may still fail upon + later execution (e.g., if a statement from another batch + of statements is applied first and it conflicts in some + way, or if there is some data-related problem like a + ``NULL`` value in a column to which ``NOT NULL`` would + be added). If a statement fails, all subsequent + statements in the batch are automatically cancelled. + + Each batch of statements is assigned a name which can be + used with the + [Operations][google.longrunning.Operations] API to + monitor progress. See the + [operation_id][google.spanner.admin.database.v1.UpdateDatabaseDdlRequest.operation_id] + field for more details. + database (:class:`str`): + Required. The database to update. + This corresponds to the ``database`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + statements (:class:`MutableSequence[str]`): + Required. DDL statements to be + applied to the database. + + This corresponds to the ``statements`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. 
+ retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation_async.AsyncOperation: + An object representing a long-running operation. + + The result type for the operation will be :class:`google.protobuf.empty_pb2.Empty` A generic empty message that you can re-use to avoid defining duplicated + empty messages in your APIs. A typical example is to + use it as the request or the response type of an API + method. For instance: + + service Foo { + rpc Bar(google.protobuf.Empty) returns + (google.protobuf.Empty); + + } + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [database, statements] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_database_admin.UpdateDatabaseDdlRequest): + request = spanner_database_admin.UpdateDatabaseDdlRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if database is not None: + request.database = database + if statements: + request.statements.extend(statements) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.update_database_ddl + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("database", request.database),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation_async.from_gapic( + response, + self._client._transport.operations_client, + empty_pb2.Empty, + metadata_type=spanner_database_admin.UpdateDatabaseDdlMetadata, + ) + + # Done; return the response. + return response + + async def drop_database( + self, + request: Optional[ + Union[spanner_database_admin.DropDatabaseRequest, dict] + ] = None, + *, + database: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Drops (aka deletes) a Cloud Spanner database. Completed backups + for the database will be retained according to their + ``expire_time``. Note: Cloud Spanner might continue to accept + requests for a few seconds after the database has been deleted. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. 
+ # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_drop_database(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.DropDatabaseRequest( + database="database_value", + ) + + # Make the request + await client.drop_database(request=request) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.DropDatabaseRequest, dict]]): + The request object. The request for + [DropDatabase][google.spanner.admin.database.v1.DatabaseAdmin.DropDatabase]. + database (:class:`str`): + Required. The database to be dropped. + This corresponds to the ``database`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [database] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_database_admin.DropDatabaseRequest): + request = spanner_database_admin.DropDatabaseRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if database is not None: + request.database = database + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.drop_database + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("database", request.database),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + async def get_database_ddl( + self, + request: Optional[ + Union[spanner_database_admin.GetDatabaseDdlRequest, dict] + ] = None, + *, + database: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_database_admin.GetDatabaseDdlResponse: + r"""Returns the schema of a Cloud Spanner database as a list of + formatted DDL statements. 
This method does not show pending + schema updates, those may be queried using the + [Operations][google.longrunning.Operations] API. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_get_database_ddl(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.GetDatabaseDdlRequest( + database="database_value", + ) + + # Make the request + response = await client.get_database_ddl(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.GetDatabaseDdlRequest, dict]]): + The request object. The request for + [GetDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl]. + database (:class:`str`): + Required. The database whose schema we wish to get. + Values are of the form + ``projects//instances//databases/`` + + This corresponds to the ``database`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.types.GetDatabaseDdlResponse: + The response for + [GetDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl]. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [database] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_database_admin.GetDatabaseDdlRequest): + request = spanner_database_admin.GetDatabaseDdlRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if database is not None: + request.database = database + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.get_database_ddl + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("database", request.database),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. 
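+        # (`retry` and `timeout` may still be the gapic_v1.method.DEFAULT
+        #  sentinels at this point; the wrapped rpc resolves those sentinels
+        #  to the client's configured per-method defaults.)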
+ response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def set_iam_policy( + self, + request: Optional[Union[iam_policy_pb2.SetIamPolicyRequest, dict]] = None, + *, + resource: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> policy_pb2.Policy: + r"""Sets the access control policy on a database or backup resource. + Replaces any existing policy. + + Authorization requires ``spanner.databases.setIamPolicy`` + permission on + [resource][google.iam.v1.SetIamPolicyRequest.resource]. For + backups, authorization requires ``spanner.backups.setIamPolicy`` + permission on + [resource][google.iam.v1.SetIamPolicyRequest.resource]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + from google.iam.v1 import iam_policy_pb2 # type: ignore + + async def sample_set_iam_policy(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = iam_policy_pb2.SetIamPolicyRequest( + resource="resource_value", + ) + + # Make the request + response = await client.set_iam_policy(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.iam.v1.iam_policy_pb2.SetIamPolicyRequest, dict]]): + The request object. Request message for ``SetIamPolicy`` method. + resource (:class:`str`): + REQUIRED: The resource for which the + policy is being specified. See the + operation documentation for the + appropriate value for this field. + + This corresponds to the ``resource`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.iam.v1.policy_pb2.Policy: + An Identity and Access Management (IAM) policy, which specifies access + controls for Google Cloud resources. + + A Policy is a collection of bindings. A binding binds + one or more members, or principals, to a single role. + Principals can be user accounts, service accounts, + Google groups, and domains (such as G Suite). A role + is a named list of permissions; each role can be an + IAM predefined role or a user-created custom role. + + For some types of Google Cloud resources, a binding + can also specify a condition, which is a logical + expression that allows access to a resource only if + the expression evaluates to true. A condition can add + constraints based on attributes of the request, the + resource, or both. 
To learn which resources support + conditions in their IAM policies, see the [IAM + documentation](https://cloud.google.com/iam/help/conditions/resource-policies). + + **JSON example:** + + :literal:`` { "bindings": [ { "role": "roles/resourcemanager.organizationAdmin", "members": [ "user:mike@example.com", "group:admins@example.com", "domain:google.com", "serviceAccount:my-project-id@appspot.gserviceaccount.com" ] }, { "role": "roles/resourcemanager.organizationViewer", "members": [ "user:eve@example.com" ], "condition": { "title": "expirable access", "description": "Does not grant access after Sep 2020", "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')", } } ], "etag": "BwWWja0YfJA=", "version": 3 }`\ \` + + **YAML example:** + + :literal:`` bindings: - members: - user:mike@example.com - group:admins@example.com - domain:google.com - serviceAccount:my-project-id@appspot.gserviceaccount.com role: roles/resourcemanager.organizationAdmin - members: - user:eve@example.com role: roles/resourcemanager.organizationViewer condition: title: expirable access description: Does not grant access after Sep 2020 expression: request.time < timestamp('2020-10-01T00:00:00.000Z') etag: BwWWja0YfJA= version: 3`\ \` + + For a description of IAM and its features, see the + [IAM + documentation](https://cloud.google.com/iam/docs/). + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [resource] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = iam_policy_pb2.SetIamPolicyRequest(**request) + elif not request: + request = iam_policy_pb2.SetIamPolicyRequest(resource=resource) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.set_iam_policy + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("resource", request.resource),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def get_iam_policy( + self, + request: Optional[Union[iam_policy_pb2.GetIamPolicyRequest, dict]] = None, + *, + resource: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> policy_pb2.Policy: + r"""Gets the access control policy for a database or backup + resource. Returns an empty policy if a database or backup exists + but does not have a policy set. + + Authorization requires ``spanner.databases.getIamPolicy`` + permission on + [resource][google.iam.v1.GetIamPolicyRequest.resource]. 
For + backups, authorization requires ``spanner.backups.getIamPolicy`` + permission on + [resource][google.iam.v1.GetIamPolicyRequest.resource]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + from google.iam.v1 import iam_policy_pb2 # type: ignore + + async def sample_get_iam_policy(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = iam_policy_pb2.GetIamPolicyRequest( + resource="resource_value", + ) + + # Make the request + response = await client.get_iam_policy(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.iam.v1.iam_policy_pb2.GetIamPolicyRequest, dict]]): + The request object. Request message for ``GetIamPolicy`` method. + resource (:class:`str`): + REQUIRED: The resource for which the + policy is being requested. See the + operation documentation for the + appropriate value for this field. + + This corresponds to the ``resource`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.iam.v1.policy_pb2.Policy: + An Identity and Access Management (IAM) policy, which specifies access + controls for Google Cloud resources. + + A Policy is a collection of bindings. A binding binds + one or more members, or principals, to a single role. + Principals can be user accounts, service accounts, + Google groups, and domains (such as G Suite). A role + is a named list of permissions; each role can be an + IAM predefined role or a user-created custom role. + + For some types of Google Cloud resources, a binding + can also specify a condition, which is a logical + expression that allows access to a resource only if + the expression evaluates to true. A condition can add + constraints based on attributes of the request, the + resource, or both. To learn which resources support + conditions in their IAM policies, see the [IAM + documentation](https://cloud.google.com/iam/help/conditions/resource-policies). 
+ + **JSON example:** + + :literal:`` { "bindings": [ { "role": "roles/resourcemanager.organizationAdmin", "members": [ "user:mike@example.com", "group:admins@example.com", "domain:google.com", "serviceAccount:my-project-id@appspot.gserviceaccount.com" ] }, { "role": "roles/resourcemanager.organizationViewer", "members": [ "user:eve@example.com" ], "condition": { "title": "expirable access", "description": "Does not grant access after Sep 2020", "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')", } } ], "etag": "BwWWja0YfJA=", "version": 3 }`\ \` + + **YAML example:** + + :literal:`` bindings: - members: - user:mike@example.com - group:admins@example.com - domain:google.com - serviceAccount:my-project-id@appspot.gserviceaccount.com role: roles/resourcemanager.organizationAdmin - members: - user:eve@example.com role: roles/resourcemanager.organizationViewer condition: title: expirable access description: Does not grant access after Sep 2020 expression: request.time < timestamp('2020-10-01T00:00:00.000Z') etag: BwWWja0YfJA= version: 3`\ \` + + For a description of IAM and its features, see the + [IAM + documentation](https://cloud.google.com/iam/docs/). + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [resource] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = iam_policy_pb2.GetIamPolicyRequest(**request) + elif not request: + request = iam_policy_pb2.GetIamPolicyRequest(resource=resource) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.get_iam_policy + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("resource", request.resource),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def test_iam_permissions( + self, + request: Optional[Union[iam_policy_pb2.TestIamPermissionsRequest, dict]] = None, + *, + resource: Optional[str] = None, + permissions: Optional[MutableSequence[str]] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> iam_policy_pb2.TestIamPermissionsResponse: + r"""Returns permissions that the caller has on the specified + database or backup resource. + + Attempting this RPC on a non-existent Cloud Spanner database + will result in a NOT_FOUND error if the user has + ``spanner.databases.list`` permission on the containing Cloud + Spanner instance. Otherwise returns an empty set of permissions. 
+ Calling this method on a backup that does not exist will result + in a NOT_FOUND error if the user has ``spanner.backups.list`` + permission on the containing instance. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + from google.iam.v1 import iam_policy_pb2 # type: ignore + + async def sample_test_iam_permissions(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = iam_policy_pb2.TestIamPermissionsRequest( + resource="resource_value", + permissions=['permissions_value1', 'permissions_value2'], + ) + + # Make the request + response = await client.test_iam_permissions(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.iam.v1.iam_policy_pb2.TestIamPermissionsRequest, dict]]): + The request object. Request message for ``TestIamPermissions`` method. + resource (:class:`str`): + REQUIRED: The resource for which the + policy detail is being requested. See + the operation documentation for the + appropriate value for this field. + + This corresponds to the ``resource`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + permissions (:class:`MutableSequence[str]`): + The set of permissions to check for the ``resource``. + Permissions with wildcards (such as '*' or 'storage.*') + are not allowed. For more information see `IAM + Overview `__. + + This corresponds to the ``permissions`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.iam.v1.iam_policy_pb2.TestIamPermissionsResponse: + Response message for TestIamPermissions method. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [resource, permissions] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = iam_policy_pb2.TestIamPermissionsRequest(**request) + elif not request: + request = iam_policy_pb2.TestIamPermissionsRequest( + resource=resource, permissions=permissions + ) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. 
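+        # (The transport builds these wrappers once at construction time, so
+        #  this lookup is a plain dictionary access keyed by the bound
+        #  transport method.)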
+ rpc = self._client._transport._wrapped_methods[ + self._client._transport.test_iam_permissions + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("resource", request.resource),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def create_backup( + self, + request: Optional[Union[gsad_backup.CreateBackupRequest, dict]] = None, + *, + parent: Optional[str] = None, + backup: Optional[gsad_backup.Backup] = None, + backup_id: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation_async.AsyncOperation: + r"""Starts creating a new Cloud Spanner Backup. The returned backup + [long-running operation][google.longrunning.Operation] will have + a name of the format + ``projects//instances//backups//operations/`` + and can be used to track creation of the backup. The + [metadata][google.longrunning.Operation.metadata] field type is + [CreateBackupMetadata][google.spanner.admin.database.v1.CreateBackupMetadata]. + The [response][google.longrunning.Operation.response] field type + is [Backup][google.spanner.admin.database.v1.Backup], if + successful. Cancelling the returned operation will stop the + creation and delete the backup. There can be only one pending + backup creation per database. Backup creation of different + databases can run concurrently. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_create_backup(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.CreateBackupRequest( + parent="parent_value", + backup_id="backup_id_value", + ) + + # Make the request + operation = client.create_backup(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.CreateBackupRequest, dict]]): + The request object. The request for + [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup]. + parent (:class:`str`): + Required. The name of the instance in which the backup + will be created. This must be the same instance that + contains the database the backup will be created from. + The backup will be stored in the location(s) specified + in the instance configuration of this instance. Values + are of the form + ``projects//instances/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + backup (:class:`google.cloud.spanner_admin_database_v1.types.Backup`): + Required. The backup to create. 
+ This corresponds to the ``backup`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + backup_id (:class:`str`): + Required. The id of the backup to be created. The + ``backup_id`` appended to ``parent`` forms the full + backup name of the form + ``projects//instances//backups/``. + + This corresponds to the ``backup_id`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation_async.AsyncOperation: + An object representing a long-running operation. + + The result type for the operation will be + :class:`google.cloud.spanner_admin_database_v1.types.Backup` + A backup of a Cloud Spanner database. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent, backup, backup_id] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, gsad_backup.CreateBackupRequest): + request = gsad_backup.CreateBackupRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + if backup is not None: + request.backup = backup + if backup_id is not None: + request.backup_id = backup_id + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.create_backup + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation_async.from_gapic( + response, + self._client._transport.operations_client, + gsad_backup.Backup, + metadata_type=gsad_backup.CreateBackupMetadata, + ) + + # Done; return the response. 
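+        # (Cancelling the returned operation stops backup creation and
+        #  deletes any partially created backup, as noted in the method's
+        #  docstring above.)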
+ return response + + async def copy_backup( + self, + request: Optional[Union[backup.CopyBackupRequest, dict]] = None, + *, + parent: Optional[str] = None, + backup_id: Optional[str] = None, + source_backup: Optional[str] = None, + expire_time: Optional[timestamp_pb2.Timestamp] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation_async.AsyncOperation: + r"""Starts copying a Cloud Spanner Backup. The returned backup + [long-running operation][google.longrunning.Operation] will have + a name of the format + ``projects//instances//backups//operations/`` + and can be used to track copying of the backup. The operation is + associated with the destination backup. The + [metadata][google.longrunning.Operation.metadata] field type is + [CopyBackupMetadata][google.spanner.admin.database.v1.CopyBackupMetadata]. + The [response][google.longrunning.Operation.response] field type + is [Backup][google.spanner.admin.database.v1.Backup], if + successful. Cancelling the returned operation will stop the + copying and delete the destination backup. Concurrent CopyBackup + requests can run on the same source backup. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_copy_backup(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.CopyBackupRequest( + parent="parent_value", + backup_id="backup_id_value", + source_backup="source_backup_value", + ) + + # Make the request + operation = client.copy_backup(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.CopyBackupRequest, dict]]): + The request object. The request for + [CopyBackup][google.spanner.admin.database.v1.DatabaseAdmin.CopyBackup]. + parent (:class:`str`): + Required. The name of the destination instance that will + contain the backup copy. Values are of the form: + ``projects//instances/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + backup_id (:class:`str`): + Required. The id of the backup copy. The ``backup_id`` + appended to ``parent`` forms the full backup_uri of the + form + ``projects//instances//backups/``. + + This corresponds to the ``backup_id`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + source_backup (:class:`str`): + Required. The source backup to be copied. The source + backup needs to be in READY state for it to be copied. + Once CopyBackup is in progress, the source backup cannot + be deleted or cleaned up on expiration until CopyBackup + is finished. Values are of the form: + ``projects//instances//backups/``. 
+ + This corresponds to the ``source_backup`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + expire_time (:class:`google.protobuf.timestamp_pb2.Timestamp`): + Required. The expiration time of the backup in + microsecond granularity. The expiration time must be at + least 6 hours and at most 366 days from the + ``create_time`` of the source backup. Once the + ``expire_time`` has passed, the backup is eligible to be + automatically deleted by Cloud Spanner to free the + resources used by the backup. + + This corresponds to the ``expire_time`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation_async.AsyncOperation: + An object representing a long-running operation. + + The result type for the operation will be + :class:`google.cloud.spanner_admin_database_v1.types.Backup` + A backup of a Cloud Spanner database. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent, backup_id, source_backup, expire_time] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, backup.CopyBackupRequest): + request = backup.CopyBackupRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + if backup_id is not None: + request.backup_id = backup_id + if source_backup is not None: + request.source_backup = source_backup + if expire_time is not None: + request.expire_time = expire_time + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.copy_backup + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation_async.from_gapic( + response, + self._client._transport.operations_client, + backup.Backup, + metadata_type=backup.CopyBackupMetadata, + ) + + # Done; return the response. 
+ return response + + async def get_backup( + self, + request: Optional[Union[backup.GetBackupRequest, dict]] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> backup.Backup: + r"""Gets metadata on a pending or completed + [Backup][google.spanner.admin.database.v1.Backup]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_get_backup(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.GetBackupRequest( + name="name_value", + ) + + # Make the request + response = await client.get_backup(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.GetBackupRequest, dict]]): + The request object. The request for + [GetBackup][google.spanner.admin.database.v1.DatabaseAdmin.GetBackup]. + name (:class:`str`): + Required. Name of the backup. Values are of the form + ``projects//instances//backups/``. + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.types.Backup: + A backup of a Cloud Spanner database. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, backup.GetBackupRequest): + request = backup.GetBackupRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.get_backup + ] + + # Certain fields should be provided within the metadata header; + # add these here. 
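+        # (to_grpc_metadata renders these routing parameters as the
+        #  `x-goog-request-params` header, which the service uses to route
+        #  the request to the correct backend.)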
+ metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def update_backup( + self, + request: Optional[Union[gsad_backup.UpdateBackupRequest, dict]] = None, + *, + backup: Optional[gsad_backup.Backup] = None, + update_mask: Optional[field_mask_pb2.FieldMask] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> gsad_backup.Backup: + r"""Updates a pending or completed + [Backup][google.spanner.admin.database.v1.Backup]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_update_backup(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.UpdateBackupRequest( + ) + + # Make the request + response = await client.update_backup(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.UpdateBackupRequest, dict]]): + The request object. The request for + [UpdateBackup][google.spanner.admin.database.v1.DatabaseAdmin.UpdateBackup]. + backup (:class:`google.cloud.spanner_admin_database_v1.types.Backup`): + Required. The backup to update. ``backup.name``, and the + fields to be updated as specified by ``update_mask`` are + required. Other fields are ignored. Update is only + supported for the following fields: + + - ``backup.expire_time``. + + This corresponds to the ``backup`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + update_mask (:class:`google.protobuf.field_mask_pb2.FieldMask`): + Required. A mask specifying which fields (e.g. + ``expire_time``) in the Backup resource should be + updated. This mask is relative to the Backup resource, + not to the request message. The field mask must always + be specified; this prevents any future fields from being + erased accidentally by clients that do not know about + them. + + This corresponds to the ``update_mask`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.types.Backup: + A backup of a Cloud Spanner database. + """ + # Create or coerce a protobuf request object. 
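+        # (Callers may pass either a typed `UpdateBackupRequest` or a plain
+        # dict; a dict is coerced into the typed message below.)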
+ # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [backup, update_mask] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, gsad_backup.UpdateBackupRequest): + request = gsad_backup.UpdateBackupRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if backup is not None: + request.backup = backup + if update_mask is not None: + request.update_mask = update_mask + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.update_backup + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata( + (("backup.name", request.backup.name),) + ), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def delete_backup( + self, + request: Optional[Union[backup.DeleteBackupRequest, dict]] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Deletes a pending or completed + [Backup][google.spanner.admin.database.v1.Backup]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_delete_backup(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.DeleteBackupRequest( + name="name_value", + ) + + # Make the request + await client.delete_backup(request=request) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.DeleteBackupRequest, dict]]): + The request object. The request for + [DeleteBackup][google.spanner.admin.database.v1.DatabaseAdmin.DeleteBackup]. + name (:class:`str`): + Required. Name of the backup to delete. Values are of + the form + ``projects//instances//backups/``. + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. 
Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, backup.DeleteBackupRequest): + request = backup.DeleteBackupRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.delete_backup + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + async def list_backups( + self, + request: Optional[Union[backup.ListBackupsRequest, dict]] = None, + *, + parent: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListBackupsAsyncPager: + r"""Lists completed and pending backups. Backups returned are + ordered by ``create_time`` in descending order, starting from + the most recent ``create_time``. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_list_backups(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.ListBackupsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_backups(request=request) + + # Handle the response + async for response in page_result: + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.ListBackupsRequest, dict]]): + The request object. The request for + [ListBackups][google.spanner.admin.database.v1.DatabaseAdmin.ListBackups]. + parent (:class:`str`): + Required. The instance to list backups from. Values are + of the form ``projects//instances/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. 
+ retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListBackupsAsyncPager: + The response for + [ListBackups][google.spanner.admin.database.v1.DatabaseAdmin.ListBackups]. + + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, backup.ListBackupsRequest): + request = backup.ListBackupsRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.list_backups + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__aiter__` convenience method. + response = pagers.ListBackupsAsyncPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def restore_database( + self, + request: Optional[ + Union[spanner_database_admin.RestoreDatabaseRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + database_id: Optional[str] = None, + backup: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation_async.AsyncOperation: + r"""Create a new database by restoring from a completed backup. The + new database must be in the same project and in an instance with + the same instance configuration as the instance containing the + backup. The returned database [long-running + operation][google.longrunning.Operation] has a name of the + format + ``projects//instances//databases//operations/``, + and can be used to track the progress of the operation, and to + cancel it. 
The [metadata][google.longrunning.Operation.metadata]
+        field type is
+        [RestoreDatabaseMetadata][google.spanner.admin.database.v1.RestoreDatabaseMetadata].
+        The [response][google.longrunning.Operation.response] type is
+        [Database][google.spanner.admin.database.v1.Database], if
+        successful. Cancelling the returned operation will stop the
+        restore and delete the database. There can be only one database
+        being restored into an instance at a time. Once the restore
+        operation completes, a new restore operation can be initiated,
+        without waiting for the optimize operation associated with the
+        first restore to complete.
+
+        .. code-block:: python
+
+            # This snippet has been automatically generated and should be regarded as a
+            # code template only.
+            # It will require modifications to work:
+            # - It may require correct/in-range values for request initialization.
+            # - It may require specifying regional endpoints when creating the service
+            #   client as shown in:
+            #   https://googleapis.dev/python/google-api-core/latest/client_options.html
+            from google.cloud import spanner_admin_database_v1
+
+            async def sample_restore_database():
+                # Create a client
+                client = spanner_admin_database_v1.DatabaseAdminAsyncClient()
+
+                # Initialize request argument(s)
+                request = spanner_admin_database_v1.RestoreDatabaseRequest(
+                    backup="backup_value",
+                    parent="parent_value",
+                    database_id="database_id_value",
+                )
+
+                # Make the request
+                operation = client.restore_database(request=request)
+
+                print("Waiting for operation to complete...")
+
+                # `result()` is a coroutine on async operations, so it must be awaited.
+                response = await (await operation).result()
+
+                # Handle the response
+                print(response)
+
+        Args:
+            request (Optional[Union[google.cloud.spanner_admin_database_v1.types.RestoreDatabaseRequest, dict]]):
+                The request object. The request for
+                [RestoreDatabase][google.spanner.admin.database.v1.DatabaseAdmin.RestoreDatabase].
+            parent (:class:`str`):
+                Required. The name of the instance in which to create the
+                restored database. This instance must be in the same project
+                and have the same instance configuration as the instance
+                containing the source backup. Values are of the form
+                ``projects/<project>/instances/<instance>``.
+
+                This corresponds to the ``parent`` field
+                on the ``request`` instance; if ``request`` is provided, this
+                should not be set.
+            database_id (:class:`str`):
+                Required. The id of the database to create and restore to.
+                This database must not already exist. The ``database_id``
+                appended to ``parent`` forms the full database name of the
+                form
+                ``projects/<project>/instances/<instance>/databases/<database_id>``.
+
+                This corresponds to the ``database_id`` field
+                on the ``request`` instance; if ``request`` is provided, this
+                should not be set.
+            backup (:class:`str`):
+                Name of the backup from which to restore. Values are of the
+                form
+                ``projects/<project>/instances/<instance>/backups/<backup>``.
+
+                This corresponds to the ``backup`` field
+                on the ``request`` instance; if ``request`` is provided, this
+                should not be set.
+            retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any,
+                should be retried.
+            timeout (float): The timeout for this request.
+            metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be
+                sent along with the request as metadata. Normally, each value must be of type `str`,
+                but for metadata keys ending with the suffix `-bin`, the corresponding values must
+                be of type `bytes`.
+
+        Returns:
+            google.api_core.operation_async.AsyncOperation:
+                An object representing a long-running operation.
+ + The result type for the operation will be + :class:`google.cloud.spanner_admin_database_v1.types.Database` + A Cloud Spanner database. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent, database_id, backup] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_database_admin.RestoreDatabaseRequest): + request = spanner_database_admin.RestoreDatabaseRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + if database_id is not None: + request.database_id = database_id + if backup is not None: + request.backup = backup + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.restore_database + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation_async.from_gapic( + response, + self._client._transport.operations_client, + spanner_database_admin.Database, + metadata_type=spanner_database_admin.RestoreDatabaseMetadata, + ) + + # Done; return the response. + return response + + async def list_database_operations( + self, + request: Optional[ + Union[spanner_database_admin.ListDatabaseOperationsRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListDatabaseOperationsAsyncPager: + r"""Lists database + [longrunning-operations][google.longrunning.Operation]. A + database operation has a name of the form + ``projects//instances//databases//operations/``. + The long-running operation + [metadata][google.longrunning.Operation.metadata] field type + ``metadata.type_url`` describes the type of the metadata. + Operations returned include those that have + completed/failed/canceled within the last 7 days, and pending + operations. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_list_database_operations(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.ListDatabaseOperationsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_database_operations(request=request) + + # Handle the response + async for response in page_result: + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsRequest, dict]]): + The request object. The request for + [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations]. + parent (:class:`str`): + Required. The instance of the database operations. + Values are of the form + ``projects//instances/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListDatabaseOperationsAsyncPager: + The response for + [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations]. + + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance( + request, spanner_database_admin.ListDatabaseOperationsRequest + ): + request = spanner_database_admin.ListDatabaseOperationsRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.list_database_operations + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. 
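+        # (`retry`, `timeout`, and `metadata` apply both to this first page
+        # request and, via the pager constructed below, to any follow-up
+        # page requests.)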
+ response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__aiter__` convenience method. + response = pagers.ListDatabaseOperationsAsyncPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def list_backup_operations( + self, + request: Optional[Union[backup.ListBackupOperationsRequest, dict]] = None, + *, + parent: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListBackupOperationsAsyncPager: + r"""Lists the backup [long-running + operations][google.longrunning.Operation] in the given instance. + A backup operation has a name of the form + ``projects//instances//backups//operations/``. + The long-running operation + [metadata][google.longrunning.Operation.metadata] field type + ``metadata.type_url`` describes the type of the metadata. + Operations returned include those that have + completed/failed/canceled within the last 7 days, and pending + operations. Operations returned are ordered by + ``operation.metadata.value.progress.start_time`` in descending + order starting from the most recently started operation. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_list_backup_operations(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.ListBackupOperationsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_backup_operations(request=request) + + # Handle the response + async for response in page_result: + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.ListBackupOperationsRequest, dict]]): + The request object. The request for + [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations]. + parent (:class:`str`): + Required. The instance of the backup operations. Values + are of the form + ``projects//instances/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListBackupOperationsAsyncPager: + The response for + [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations]. 
+ + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, backup.ListBackupOperationsRequest): + request = backup.ListBackupOperationsRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.list_backup_operations + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__aiter__` convenience method. + response = pagers.ListBackupOperationsAsyncPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def list_database_roles( + self, + request: Optional[ + Union[spanner_database_admin.ListDatabaseRolesRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListDatabaseRolesAsyncPager: + r"""Lists Cloud Spanner database roles. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_list_database_roles(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.ListDatabaseRolesRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_database_roles(request=request) + + # Handle the response + async for response in page_result: + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.ListDatabaseRolesRequest, dict]]): + The request object. The request for + [ListDatabaseRoles][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseRoles]. + parent (:class:`str`): + Required. 
The database whose roles should be listed. + Values are of the form + ``projects//instances//databases/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListDatabaseRolesAsyncPager: + The response for + [ListDatabaseRoles][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseRoles]. + + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_database_admin.ListDatabaseRolesRequest): + request = spanner_database_admin.ListDatabaseRolesRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.list_database_roles + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__aiter__` convenience method. + response = pagers.ListDatabaseRolesAsyncPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def add_split_points( + self, + request: Optional[ + Union[spanner_database_admin.AddSplitPointsRequest, dict] + ] = None, + *, + database: Optional[str] = None, + split_points: Optional[ + MutableSequence[spanner_database_admin.SplitPoints] + ] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_database_admin.AddSplitPointsResponse: + r"""Adds split points to specified tables, indexes of a + database. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. 
+ # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_add_split_points(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.AddSplitPointsRequest( + database="database_value", + ) + + # Make the request + response = await client.add_split_points(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.AddSplitPointsRequest, dict]]): + The request object. The request for + [AddSplitPoints][google.spanner.admin.database.v1.DatabaseAdmin.AddSplitPoints]. + database (:class:`str`): + Required. The database on whose tables/indexes split + points are to be added. Values are of the form + ``projects//instances//databases/``. + + This corresponds to the ``database`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + split_points (:class:`MutableSequence[google.cloud.spanner_admin_database_v1.types.SplitPoints]`): + Required. The split points to add. + This corresponds to the ``split_points`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.types.AddSplitPointsResponse: + The response for + [AddSplitPoints][google.spanner.admin.database.v1.DatabaseAdmin.AddSplitPoints]. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [database, split_points] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_database_admin.AddSplitPointsRequest): + request = spanner_database_admin.AddSplitPointsRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if database is not None: + request.database = database + if split_points: + request.split_points.extend(split_points) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.add_split_points + ] + + # Certain fields should be provided within the metadata header; + # add these here. 
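+        # (For this RPC the routing key is the `database` resource, since
+        # split points are applied to a specific database.)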
+ metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("database", request.database),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def create_backup_schedule( + self, + request: Optional[ + Union[gsad_backup_schedule.CreateBackupScheduleRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + backup_schedule: Optional[gsad_backup_schedule.BackupSchedule] = None, + backup_schedule_id: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> gsad_backup_schedule.BackupSchedule: + r"""Creates a new backup schedule. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_create_backup_schedule(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.CreateBackupScheduleRequest( + parent="parent_value", + backup_schedule_id="backup_schedule_id_value", + ) + + # Make the request + response = await client.create_backup_schedule(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.CreateBackupScheduleRequest, dict]]): + The request object. The request for + [CreateBackupSchedule][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackupSchedule]. + parent (:class:`str`): + Required. The name of the database + that this backup schedule applies to. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + backup_schedule (:class:`google.cloud.spanner_admin_database_v1.types.BackupSchedule`): + Required. The backup schedule to + create. + + This corresponds to the ``backup_schedule`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + backup_schedule_id (:class:`str`): + Required. The Id to use for the backup schedule. The + ``backup_schedule_id`` appended to ``parent`` forms the + full backup schedule name of the form + ``projects//instances//databases//backupSchedules/``. + + This corresponds to the ``backup_schedule_id`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ + Returns: + google.cloud.spanner_admin_database_v1.types.BackupSchedule: + BackupSchedule expresses the + automated backup creation specification + for a Spanner database. Next ID: 10 + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent, backup_schedule, backup_schedule_id] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, gsad_backup_schedule.CreateBackupScheduleRequest): + request = gsad_backup_schedule.CreateBackupScheduleRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + if backup_schedule is not None: + request.backup_schedule = backup_schedule + if backup_schedule_id is not None: + request.backup_schedule_id = backup_schedule_id + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.create_backup_schedule + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def get_backup_schedule( + self, + request: Optional[Union[backup_schedule.GetBackupScheduleRequest, dict]] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> backup_schedule.BackupSchedule: + r"""Gets backup schedule for the input schedule name. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_get_backup_schedule(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.GetBackupScheduleRequest( + name="name_value", + ) + + # Make the request + response = await client.get_backup_schedule(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.GetBackupScheduleRequest, dict]]): + The request object. The request for + [GetBackupSchedule][google.spanner.admin.database.v1.DatabaseAdmin.GetBackupSchedule]. + name (:class:`str`): + Required. 
The name of the schedule to retrieve. Values + are of the form + ``projects//instances//databases//backupSchedules/``. + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.types.BackupSchedule: + BackupSchedule expresses the + automated backup creation specification + for a Spanner database. Next ID: 10 + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, backup_schedule.GetBackupScheduleRequest): + request = backup_schedule.GetBackupScheduleRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.get_backup_schedule + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def update_backup_schedule( + self, + request: Optional[ + Union[gsad_backup_schedule.UpdateBackupScheduleRequest, dict] + ] = None, + *, + backup_schedule: Optional[gsad_backup_schedule.BackupSchedule] = None, + update_mask: Optional[field_mask_pb2.FieldMask] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> gsad_backup_schedule.BackupSchedule: + r"""Updates a backup schedule. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_update_backup_schedule(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.UpdateBackupScheduleRequest( + ) + + # Make the request + response = await client.update_backup_schedule(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.UpdateBackupScheduleRequest, dict]]): + The request object. The request for + [UpdateBackupScheduleRequest][google.spanner.admin.database.v1.DatabaseAdmin.UpdateBackupSchedule]. + backup_schedule (:class:`google.cloud.spanner_admin_database_v1.types.BackupSchedule`): + Required. The backup schedule to update. + ``backup_schedule.name``, and the fields to be updated + as specified by ``update_mask`` are required. Other + fields are ignored. + + This corresponds to the ``backup_schedule`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + update_mask (:class:`google.protobuf.field_mask_pb2.FieldMask`): + Required. A mask specifying which + fields in the BackupSchedule resource + should be updated. This mask is relative + to the BackupSchedule resource, not to + the request message. The field mask must + always be specified; this prevents any + future fields from being erased + accidentally. + + This corresponds to the ``update_mask`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.types.BackupSchedule: + BackupSchedule expresses the + automated backup creation specification + for a Spanner database. Next ID: 10 + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [backup_schedule, update_mask] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, gsad_backup_schedule.UpdateBackupScheduleRequest): + request = gsad_backup_schedule.UpdateBackupScheduleRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if backup_schedule is not None: + request.backup_schedule = backup_schedule + if update_mask is not None: + request.update_mask = update_mask + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. 
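+        # (The wrapped method carries the client's default retry and timeout
+        # policy; the `gapic_v1.method.DEFAULT` sentinels above mean "use
+        # those defaults" unless the caller overrides them.)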
+ rpc = self._client._transport._wrapped_methods[ + self._client._transport.update_backup_schedule + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata( + (("backup_schedule.name", request.backup_schedule.name),) + ), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def delete_backup_schedule( + self, + request: Optional[ + Union[backup_schedule.DeleteBackupScheduleRequest, dict] + ] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Deletes a backup schedule. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_delete_backup_schedule(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.DeleteBackupScheduleRequest( + name="name_value", + ) + + # Make the request + await client.delete_backup_schedule(request=request) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.DeleteBackupScheduleRequest, dict]]): + The request object. The request for + [DeleteBackupSchedule][google.spanner.admin.database.v1.DatabaseAdmin.DeleteBackupSchedule]. + name (:class:`str`): + Required. The name of the schedule to delete. Values are + of the form + ``projects//instances//databases//backupSchedules/``. + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. 
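+        # (As in the other methods, a plain dict is accepted here and
+        # coerced into a typed `DeleteBackupScheduleRequest`.)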
+ if not isinstance(request, backup_schedule.DeleteBackupScheduleRequest): + request = backup_schedule.DeleteBackupScheduleRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.delete_backup_schedule + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + async def list_backup_schedules( + self, + request: Optional[ + Union[backup_schedule.ListBackupSchedulesRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListBackupSchedulesAsyncPager: + r"""Lists all the backup schedules for the database. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_list_backup_schedules(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.ListBackupSchedulesRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_backup_schedules(request=request) + + # Handle the response + async for response in page_result: + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.ListBackupSchedulesRequest, dict]]): + The request object. The request for + [ListBackupSchedules][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupSchedules]. + parent (:class:`str`): + Required. Database is the parent + resource whose backup schedules should + be listed. Values are of the form + projects//instances//databases/ + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListBackupSchedulesAsyncPager: + The response for + [ListBackupSchedules][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupSchedules]. + + Iterating over this object will yield results and + resolve additional pages automatically. 
+ + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, backup_schedule.ListBackupSchedulesRequest): + request = backup_schedule.ListBackupSchedulesRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.list_backup_schedules + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__aiter__` convenience method. + response = pagers.ListBackupSchedulesAsyncPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def internal_update_graph_operation( + self, + request: Optional[ + Union[spanner_database_admin.InternalUpdateGraphOperationRequest, dict] + ] = None, + *, + database: Optional[str] = None, + operation_id: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_database_admin.InternalUpdateGraphOperationResponse: + r"""This is an internal API called by Spanner Graph jobs. + You should never need to call this API directly. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + async def sample_internal_update_graph_operation(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.InternalUpdateGraphOperationRequest( + database="database_value", + operation_id="operation_id_value", + vm_identity_token="vm_identity_token_value", + ) + + # Make the request + response = await client.internal_update_graph_operation(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_database_v1.types.InternalUpdateGraphOperationRequest, dict]]): + The request object. Internal request proto, do not use + directly. + database (:class:`str`): + Internal field, do not use directly. + This corresponds to the ``database`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + operation_id (:class:`str`): + Internal field, do not use directly. + This corresponds to the ``operation_id`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.types.InternalUpdateGraphOperationResponse: + Internal response proto, do not use + directly. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [database, operation_id] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance( + request, spanner_database_admin.InternalUpdateGraphOperationRequest + ): + request = spanner_database_admin.InternalUpdateGraphOperationRequest( + request + ) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if database is not None: + request.database = database + if operation_id is not None: + request.operation_id = operation_id + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.internal_update_graph_operation + ] + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. 
+ return response + + async def list_operations( + self, + request: Optional[operations_pb2.ListOperationsRequest] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.ListOperationsResponse: + r"""Lists operations that match the specified filter in the request. + + Args: + request (:class:`~.operations_pb2.ListOperationsRequest`): + The request object. Request message for + `ListOperations` method. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + Returns: + ~.operations_pb2.ListOperationsResponse: + Response message for ``ListOperations`` method. + """ + # Create or coerce a protobuf request object. + # The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = operations_pb2.ListOperationsRequest(**request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self.transport._wrapped_methods[self._client._transport.list_operations] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def get_operation( + self, + request: Optional[operations_pb2.GetOperationRequest] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.Operation: + r"""Gets the latest state of a long-running operation. + + Args: + request (:class:`~.operations_pb2.GetOperationRequest`): + The request object. Request message for + `GetOperation` method. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + Returns: + ~.operations_pb2.Operation: + An ``Operation`` object. + """ + # Create or coerce a protobuf request object. + # The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = operations_pb2.GetOperationRequest(**request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self.transport._wrapped_methods[self._client._transport.get_operation] + + # Certain fields should be provided within the metadata header; + # add these here. 
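+ # ``to_grpc_metadata`` serializes these fields into the
+ # ``x-goog-request-params`` metadata entry, which the service uses to
+ # route the request to the resource named in the operation.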
+ metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def delete_operation( + self, + request: Optional[operations_pb2.DeleteOperationRequest] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Deletes a long-running operation. + + This method indicates that the client is no longer interested + in the operation result. It does not cancel the operation. + If the server doesn't support this method, it returns + `google.rpc.Code.UNIMPLEMENTED`. + + Args: + request (:class:`~.operations_pb2.DeleteOperationRequest`): + The request object. Request message for + `DeleteOperation` method. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + Returns: + None + """ + # Create or coerce a protobuf request object. + # The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = operations_pb2.DeleteOperationRequest(**request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self.transport._wrapped_methods[self._client._transport.delete_operation] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + async def cancel_operation( + self, + request: Optional[operations_pb2.CancelOperationRequest] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Starts asynchronous cancellation on a long-running operation. + + The server makes a best effort to cancel the operation, but success + is not guaranteed. If the server doesn't support this method, it returns + `google.rpc.Code.UNIMPLEMENTED`. + + Args: + request (:class:`~.operations_pb2.CancelOperationRequest`): + The request object. Request message for + `CancelOperation` method. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + Returns: + None + """ + # Create or coerce a protobuf request object. 
+ # The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = operations_pb2.CancelOperationRequest(**request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self.transport._wrapped_methods[self._client._transport.cancel_operation] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + async def __aenter__(self) -> "DatabaseAdminAsyncClient": + return self + + async def __aexit__(self, exc_type, exc, tb): + await self.transport.close() + + +DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=package_version.__version__ +) + +if hasattr(DEFAULT_CLIENT_INFO, "protobuf_runtime_version"): # pragma: NO COVER + DEFAULT_CLIENT_INFO.protobuf_runtime_version = google.protobuf.__version__ + + +__all__ = ("DatabaseAdminAsyncClient",) diff --git a/google/cloud/spanner_admin_database_v1/services/database_admin/client.py b/google/cloud/spanner_admin_database_v1/services/database_admin/client.py new file mode 100644 index 0000000000..5f85aa39b1 --- /dev/null +++ b/google/cloud/spanner_admin_database_v1/services/database_admin/client.py @@ -0,0 +1,4722 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +from collections import OrderedDict +from http import HTTPStatus +import json +import logging as std_logging +import os +import re +from typing import ( + Dict, + Callable, + Mapping, + MutableMapping, + MutableSequence, + Optional, + Sequence, + Tuple, + Type, + Union, + cast, +) +import uuid +import warnings + +from google.cloud.spanner_admin_database_v1 import gapic_version as package_version + +from google.api_core import client_options as client_options_lib +from google.api_core import exceptions as core_exceptions +from google.api_core import gapic_v1 +from google.api_core import retry as retries +from google.auth import credentials as ga_credentials # type: ignore +from google.auth.transport import mtls # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore +from google.auth.exceptions import MutualTLSChannelError # type: ignore +from google.oauth2 import service_account # type: ignore +import google.protobuf + +try: + OptionalRetry = Union[retries.Retry, gapic_v1.method._MethodDefault, None] +except AttributeError: # pragma: NO COVER + OptionalRetry = Union[retries.Retry, object, None] # type: ignore + +try: + from google.api_core import client_logging # type: ignore + + CLIENT_LOGGING_SUPPORTED = True # pragma: NO COVER +except ImportError: # pragma: NO COVER + CLIENT_LOGGING_SUPPORTED = False + +_LOGGER = std_logging.getLogger(__name__) + +from google.api_core import operation # type: ignore +from google.api_core import operation_async # type: ignore +from google.cloud.spanner_admin_database_v1.services.database_admin import pagers +from google.cloud.spanner_admin_database_v1.types import backup +from google.cloud.spanner_admin_database_v1.types import backup as gsad_backup +from google.cloud.spanner_admin_database_v1.types import backup_schedule +from google.cloud.spanner_admin_database_v1.types import ( + backup_schedule as gsad_backup_schedule, +) +from google.cloud.spanner_admin_database_v1.types import common +from google.cloud.spanner_admin_database_v1.types import spanner_database_admin +from google.iam.v1 import iam_policy_pb2 # type: ignore +from google.iam.v1 import policy_pb2 # type: ignore +from google.longrunning import operations_pb2 # type: ignore +from google.longrunning import operations_pb2 # type: ignore +from google.protobuf import duration_pb2 # type: ignore +from google.protobuf import empty_pb2 # type: ignore +from google.protobuf import field_mask_pb2 # type: ignore +from google.protobuf import timestamp_pb2 # type: ignore +from .transports.base import DatabaseAdminTransport, DEFAULT_CLIENT_INFO +from .transports.grpc import DatabaseAdminGrpcTransport +from .transports.grpc_asyncio import DatabaseAdminGrpcAsyncIOTransport +from .transports.rest import DatabaseAdminRestTransport + + +class DatabaseAdminClientMeta(type): + """Metaclass for the DatabaseAdmin client. + + This provides class-level methods for building and retrieving + support objects (e.g. transport) without polluting the client instance + objects. + """ + + _transport_registry = OrderedDict() # type: Dict[str, Type[DatabaseAdminTransport]] + _transport_registry["grpc"] = DatabaseAdminGrpcTransport + _transport_registry["grpc_asyncio"] = DatabaseAdminGrpcAsyncIOTransport + _transport_registry["rest"] = DatabaseAdminRestTransport + + def get_transport_class( + cls, + label: Optional[str] = None, + ) -> Type[DatabaseAdminTransport]: + """Returns an appropriate transport class. + + Args: + label: The name of the desired transport. 
If none is
+ provided, then the first transport in the registry is used.
+
+ Returns:
+ The transport class to use.
+ """
+ # If a specific transport is requested, return that one.
+ if label:
+ return cls._transport_registry[label]
+
+ # No transport is requested; return the default (that is, the first one
+ # in the dictionary).
+ return next(iter(cls._transport_registry.values()))
+
+
+class DatabaseAdminClient(metaclass=DatabaseAdminClientMeta):
+ """Cloud Spanner Database Admin API
+
+ The Cloud Spanner Database Admin API can be used to:
+
+ - create, drop, and list databases
+ - update the schema of pre-existing databases
+ - create, delete, copy and list backups for a database
+ - restore a database from an existing backup
+ """
+
+ @staticmethod
+ def _get_default_mtls_endpoint(api_endpoint):
+ """Converts api endpoint to mTLS endpoint.
+
+ Convert "*.sandbox.googleapis.com" and "*.googleapis.com" to
+ "*.mtls.sandbox.googleapis.com" and "*.mtls.googleapis.com" respectively.
+ Args:
+ api_endpoint (Optional[str]): the api endpoint to convert.
+ Returns:
+ str: converted mTLS api endpoint.
+ """
+ if not api_endpoint:
+ return api_endpoint
+
+ mtls_endpoint_re = re.compile(
+ r"(?P<name>[^.]+)(?P<mtls>\.mtls)?(?P<sandbox>\.sandbox)?(?P<googledomain>\.googleapis\.com)?"
+ )
+
+ m = mtls_endpoint_re.match(api_endpoint)
+ name, mtls, sandbox, googledomain = m.groups()
+ if mtls or not googledomain:
+ return api_endpoint
+
+ if sandbox:
+ return api_endpoint.replace(
+ "sandbox.googleapis.com", "mtls.sandbox.googleapis.com"
+ )
+
+ return api_endpoint.replace(".googleapis.com", ".mtls.googleapis.com")
+
+ # Note: DEFAULT_ENDPOINT is deprecated. Use _DEFAULT_ENDPOINT_TEMPLATE instead.
+ DEFAULT_ENDPOINT = "spanner.googleapis.com"
+ DEFAULT_MTLS_ENDPOINT = _get_default_mtls_endpoint.__func__(  # type: ignore
+ DEFAULT_ENDPOINT
+ )
+
+ _DEFAULT_ENDPOINT_TEMPLATE = "spanner.{UNIVERSE_DOMAIN}"
+ _DEFAULT_UNIVERSE = "googleapis.com"
+
+ @classmethod
+ def from_service_account_info(cls, info: dict, *args, **kwargs):
+ """Creates an instance of this client using the provided credentials
+ info.
+
+ Args:
+ info (dict): The service account private key info.
+ args: Additional arguments to pass to the constructor.
+ kwargs: Additional arguments to pass to the constructor.
+
+ Returns:
+ DatabaseAdminClient: The constructed client.
+ """
+ credentials = service_account.Credentials.from_service_account_info(info)
+ kwargs["credentials"] = credentials
+ return cls(*args, **kwargs)
+
+ @classmethod
+ def from_service_account_file(cls, filename: str, *args, **kwargs):
+ """Creates an instance of this client using the provided credentials
+ file.
+
+ Args:
+ filename (str): The path to the service account private key json
+ file.
+ args: Additional arguments to pass to the constructor.
+ kwargs: Additional arguments to pass to the constructor.
+
+ Returns:
+ DatabaseAdminClient: The constructed client.
+ """
+ credentials = service_account.Credentials.from_service_account_file(filename)
+ kwargs["credentials"] = credentials
+ return cls(*args, **kwargs)
+
+ from_service_account_json = from_service_account_file
+
+ @property
+ def transport(self) -> DatabaseAdminTransport:
+ """Returns the transport used by the client instance.
+
+ Returns:
+ DatabaseAdminTransport: The transport used by the client
+ instance.
+ """ + return self._transport + + @staticmethod + def backup_path( + project: str, + instance: str, + backup: str, + ) -> str: + """Returns a fully-qualified backup string.""" + return "projects/{project}/instances/{instance}/backups/{backup}".format( + project=project, + instance=instance, + backup=backup, + ) + + @staticmethod + def parse_backup_path(path: str) -> Dict[str, str]: + """Parses a backup path into its component segments.""" + m = re.match( + r"^projects/(?P.+?)/instances/(?P.+?)/backups/(?P.+?)$", + path, + ) + return m.groupdict() if m else {} + + @staticmethod + def backup_schedule_path( + project: str, + instance: str, + database: str, + schedule: str, + ) -> str: + """Returns a fully-qualified backup_schedule string.""" + return "projects/{project}/instances/{instance}/databases/{database}/backupSchedules/{schedule}".format( + project=project, + instance=instance, + database=database, + schedule=schedule, + ) + + @staticmethod + def parse_backup_schedule_path(path: str) -> Dict[str, str]: + """Parses a backup_schedule path into its component segments.""" + m = re.match( + r"^projects/(?P.+?)/instances/(?P.+?)/databases/(?P.+?)/backupSchedules/(?P.+?)$", + path, + ) + return m.groupdict() if m else {} + + @staticmethod + def crypto_key_path( + project: str, + location: str, + key_ring: str, + crypto_key: str, + ) -> str: + """Returns a fully-qualified crypto_key string.""" + return "projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}".format( + project=project, + location=location, + key_ring=key_ring, + crypto_key=crypto_key, + ) + + @staticmethod + def parse_crypto_key_path(path: str) -> Dict[str, str]: + """Parses a crypto_key path into its component segments.""" + m = re.match( + r"^projects/(?P.+?)/locations/(?P.+?)/keyRings/(?P.+?)/cryptoKeys/(?P.+?)$", + path, + ) + return m.groupdict() if m else {} + + @staticmethod + def crypto_key_version_path( + project: str, + location: str, + key_ring: str, + crypto_key: str, + crypto_key_version: str, + ) -> str: + """Returns a fully-qualified crypto_key_version string.""" + return "projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}/cryptoKeyVersions/{crypto_key_version}".format( + project=project, + location=location, + key_ring=key_ring, + crypto_key=crypto_key, + crypto_key_version=crypto_key_version, + ) + + @staticmethod + def parse_crypto_key_version_path(path: str) -> Dict[str, str]: + """Parses a crypto_key_version path into its component segments.""" + m = re.match( + r"^projects/(?P.+?)/locations/(?P.+?)/keyRings/(?P.+?)/cryptoKeys/(?P.+?)/cryptoKeyVersions/(?P.+?)$", + path, + ) + return m.groupdict() if m else {} + + @staticmethod + def database_path( + project: str, + instance: str, + database: str, + ) -> str: + """Returns a fully-qualified database string.""" + return "projects/{project}/instances/{instance}/databases/{database}".format( + project=project, + instance=instance, + database=database, + ) + + @staticmethod + def parse_database_path(path: str) -> Dict[str, str]: + """Parses a database path into its component segments.""" + m = re.match( + r"^projects/(?P.+?)/instances/(?P.+?)/databases/(?P.+?)$", + path, + ) + return m.groupdict() if m else {} + + @staticmethod + def database_role_path( + project: str, + instance: str, + database: str, + role: str, + ) -> str: + """Returns a fully-qualified database_role string.""" + return "projects/{project}/instances/{instance}/databases/{database}/databaseRoles/{role}".format( + project=project, 
+ instance=instance,
+ database=database,
+ role=role,
+ )
+
+ @staticmethod
+ def parse_database_role_path(path: str) -> Dict[str, str]:
+ """Parses a database_role path into its component segments."""
+ m = re.match(
+ r"^projects/(?P<project>.+?)/instances/(?P<instance>.+?)/databases/(?P<database>.+?)/databaseRoles/(?P<role>.+?)$",
+ path,
+ )
+ return m.groupdict() if m else {}
+
+ @staticmethod
+ def instance_path(
+ project: str,
+ instance: str,
+ ) -> str:
+ """Returns a fully-qualified instance string."""
+ return "projects/{project}/instances/{instance}".format(
+ project=project,
+ instance=instance,
+ )
+
+ @staticmethod
+ def parse_instance_path(path: str) -> Dict[str, str]:
+ """Parses an instance path into its component segments."""
+ m = re.match(r"^projects/(?P<project>.+?)/instances/(?P<instance>.+?)$", path)
+ return m.groupdict() if m else {}
+
+ @staticmethod
+ def instance_partition_path(
+ project: str,
+ instance: str,
+ instance_partition: str,
+ ) -> str:
+ """Returns a fully-qualified instance_partition string."""
+ return "projects/{project}/instances/{instance}/instancePartitions/{instance_partition}".format(
+ project=project,
+ instance=instance,
+ instance_partition=instance_partition,
+ )
+
+ @staticmethod
+ def parse_instance_partition_path(path: str) -> Dict[str, str]:
+ """Parses an instance_partition path into its component segments."""
+ m = re.match(
+ r"^projects/(?P<project>.+?)/instances/(?P<instance>.+?)/instancePartitions/(?P<instance_partition>.+?)$",
+ path,
+ )
+ return m.groupdict() if m else {}
+
+ @staticmethod
+ def common_billing_account_path(
+ billing_account: str,
+ ) -> str:
+ """Returns a fully-qualified billing_account string."""
+ return "billingAccounts/{billing_account}".format(
+ billing_account=billing_account,
+ )
+
+ @staticmethod
+ def parse_common_billing_account_path(path: str) -> Dict[str, str]:
+ """Parse a billing_account path into its component segments."""
+ m = re.match(r"^billingAccounts/(?P<billing_account>.+?)$", path)
+ return m.groupdict() if m else {}
+
+ @staticmethod
+ def common_folder_path(
+ folder: str,
+ ) -> str:
+ """Returns a fully-qualified folder string."""
+ return "folders/{folder}".format(
+ folder=folder,
+ )
+
+ @staticmethod
+ def parse_common_folder_path(path: str) -> Dict[str, str]:
+ """Parse a folder path into its component segments."""
+ m = re.match(r"^folders/(?P<folder>.+?)$", path)
+ return m.groupdict() if m else {}
+
+ @staticmethod
+ def common_organization_path(
+ organization: str,
+ ) -> str:
+ """Returns a fully-qualified organization string."""
+ return "organizations/{organization}".format(
+ organization=organization,
+ )
+
+ @staticmethod
+ def parse_common_organization_path(path: str) -> Dict[str, str]:
+ """Parse an organization path into its component segments."""
+ m = re.match(r"^organizations/(?P<organization>.+?)$", path)
+ return m.groupdict() if m else {}
+
+ @staticmethod
+ def common_project_path(
+ project: str,
+ ) -> str:
+ """Returns a fully-qualified project string."""
+ return "projects/{project}".format(
+ project=project,
+ )
+
+ @staticmethod
+ def parse_common_project_path(path: str) -> Dict[str, str]:
+ """Parse a project path into its component segments."""
+ m = re.match(r"^projects/(?P<project>.+?)$", path)
+ return m.groupdict() if m else {}
+
+ @staticmethod
+ def common_location_path(
+ project: str,
+ location: str,
+ ) -> str:
+ """Returns a fully-qualified location string."""
+ return "projects/{project}/locations/{location}".format(
+ project=project,
+ location=location,
+ )
+
+ @staticmethod
+ def parse_common_location_path(path: str) -> Dict[str, str]:
+ """Parse a location path into its component segments."""
+ m = re.match(r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)$", path)
+ return m.groupdict() if m else {}
+
+ @classmethod
+ def get_mtls_endpoint_and_cert_source(
+ cls, client_options: Optional[client_options_lib.ClientOptions] = None
+ ):
+ """Deprecated. Return the API endpoint and client cert source for mutual TLS.
+
+ The client cert source is determined in the following order:
+ (1) if `GOOGLE_API_USE_CLIENT_CERTIFICATE` environment variable is not "true", the
+ client cert source is None.
+ (2) if `client_options.client_cert_source` is provided, use the provided one; if the
+ default client cert source exists, use the default one; otherwise the client cert
+ source is None.
+
+ The API endpoint is determined in the following order:
+ (1) if `client_options.api_endpoint` is provided, use the provided one.
+ (2) if `GOOGLE_API_USE_MTLS_ENDPOINT` environment variable is "always", use the
+ default mTLS endpoint; if the environment variable is "never", use the default API
+ endpoint; otherwise if client cert source exists, use the default mTLS endpoint, otherwise
+ use the default API endpoint.
+
+ More details can be found at https://google.aip.dev/auth/4114.
+
+ Args:
+ client_options (google.api_core.client_options.ClientOptions): Custom options for the
+ client. Only the `api_endpoint` and `client_cert_source` properties may be used
+ in this method.
+
+ Returns:
+ Tuple[str, Callable[[], Tuple[bytes, bytes]]]: returns the API endpoint and the
+ client cert source to use.
+
+ Raises:
+ google.auth.exceptions.MutualTLSChannelError: If any errors happen.
+ """
+
+ warnings.warn(
+ "get_mtls_endpoint_and_cert_source is deprecated. Use the api_endpoint property instead.",
+ DeprecationWarning,
+ )
+ if client_options is None:
+ client_options = client_options_lib.ClientOptions()
+ use_client_cert = os.getenv("GOOGLE_API_USE_CLIENT_CERTIFICATE", "false")
+ use_mtls_endpoint = os.getenv("GOOGLE_API_USE_MTLS_ENDPOINT", "auto")
+ if use_client_cert not in ("true", "false"):
+ raise ValueError(
+ "Environment variable `GOOGLE_API_USE_CLIENT_CERTIFICATE` must be either `true` or `false`"
+ )
+ if use_mtls_endpoint not in ("auto", "never", "always"):
+ raise MutualTLSChannelError(
+ "Environment variable `GOOGLE_API_USE_MTLS_ENDPOINT` must be `never`, `auto` or `always`"
+ )
+
+ # Figure out the client cert source to use.
+ client_cert_source = None
+ if use_client_cert == "true":
+ if client_options.client_cert_source:
+ client_cert_source = client_options.client_cert_source
+ elif mtls.has_default_client_cert_source():
+ client_cert_source = mtls.default_client_cert_source()
+
+ # Figure out which api endpoint to use.
+ if client_options.api_endpoint is not None:
+ api_endpoint = client_options.api_endpoint
+ elif use_mtls_endpoint == "always" or (
+ use_mtls_endpoint == "auto" and client_cert_source
+ ):
+ api_endpoint = cls.DEFAULT_MTLS_ENDPOINT
+ else:
+ api_endpoint = cls.DEFAULT_ENDPOINT
+
+ return api_endpoint, client_cert_source
+
+ @staticmethod
+ def _read_environment_variables():
+ """Returns the environment variables used by the client.
+
+ Returns:
+ Tuple[bool, str, str]: returns the GOOGLE_API_USE_CLIENT_CERTIFICATE,
+ GOOGLE_API_USE_MTLS_ENDPOINT, and GOOGLE_CLOUD_UNIVERSE_DOMAIN environment variables.
+
+ Raises:
+ ValueError: If GOOGLE_API_USE_CLIENT_CERTIFICATE is not
+ any of ["true", "false"].
+ google.auth.exceptions.MutualTLSChannelError: If GOOGLE_API_USE_MTLS_ENDPOINT
+ is not any of ["auto", "never", "always"].
+ """ + use_client_cert = os.getenv( + "GOOGLE_API_USE_CLIENT_CERTIFICATE", "false" + ).lower() + use_mtls_endpoint = os.getenv("GOOGLE_API_USE_MTLS_ENDPOINT", "auto").lower() + universe_domain_env = os.getenv("GOOGLE_CLOUD_UNIVERSE_DOMAIN") + if use_client_cert not in ("true", "false"): + raise ValueError( + "Environment variable `GOOGLE_API_USE_CLIENT_CERTIFICATE` must be either `true` or `false`" + ) + if use_mtls_endpoint not in ("auto", "never", "always"): + raise MutualTLSChannelError( + "Environment variable `GOOGLE_API_USE_MTLS_ENDPOINT` must be `never`, `auto` or `always`" + ) + return use_client_cert == "true", use_mtls_endpoint, universe_domain_env + + @staticmethod + def _get_client_cert_source(provided_cert_source, use_cert_flag): + """Return the client cert source to be used by the client. + + Args: + provided_cert_source (bytes): The client certificate source provided. + use_cert_flag (bool): A flag indicating whether to use the client certificate. + + Returns: + bytes or None: The client cert source to be used by the client. + """ + client_cert_source = None + if use_cert_flag: + if provided_cert_source: + client_cert_source = provided_cert_source + elif mtls.has_default_client_cert_source(): + client_cert_source = mtls.default_client_cert_source() + return client_cert_source + + @staticmethod + def _get_api_endpoint( + api_override, client_cert_source, universe_domain, use_mtls_endpoint + ): + """Return the API endpoint used by the client. + + Args: + api_override (str): The API endpoint override. If specified, this is always + the return value of this function and the other arguments are not used. + client_cert_source (bytes): The client certificate source used by the client. + universe_domain (str): The universe domain used by the client. + use_mtls_endpoint (str): How to use the mTLS endpoint, which depends also on the other parameters. + Possible values are "always", "auto", or "never". + + Returns: + str: The API endpoint to be used by the client. + """ + if api_override is not None: + api_endpoint = api_override + elif use_mtls_endpoint == "always" or ( + use_mtls_endpoint == "auto" and client_cert_source + ): + _default_universe = DatabaseAdminClient._DEFAULT_UNIVERSE + if universe_domain != _default_universe: + raise MutualTLSChannelError( + f"mTLS is not supported in any universe other than {_default_universe}." + ) + api_endpoint = DatabaseAdminClient.DEFAULT_MTLS_ENDPOINT + else: + api_endpoint = DatabaseAdminClient._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=universe_domain + ) + return api_endpoint + + @staticmethod + def _get_universe_domain( + client_universe_domain: Optional[str], universe_domain_env: Optional[str] + ) -> str: + """Return the universe domain used by the client. + + Args: + client_universe_domain (Optional[str]): The universe domain configured via the client options. + universe_domain_env (Optional[str]): The universe domain configured via the "GOOGLE_CLOUD_UNIVERSE_DOMAIN" environment variable. + + Returns: + str: The universe domain to be used by the client. + + Raises: + ValueError: If the universe domain is an empty string. 
+ """ + universe_domain = DatabaseAdminClient._DEFAULT_UNIVERSE + if client_universe_domain is not None: + universe_domain = client_universe_domain + elif universe_domain_env is not None: + universe_domain = universe_domain_env + if len(universe_domain.strip()) == 0: + raise ValueError("Universe Domain cannot be an empty string.") + return universe_domain + + def _validate_universe_domain(self): + """Validates client's and credentials' universe domains are consistent. + + Returns: + bool: True iff the configured universe domain is valid. + + Raises: + ValueError: If the configured universe domain is not valid. + """ + + # NOTE (b/349488459): universe validation is disabled until further notice. + return True + + def _add_cred_info_for_auth_errors( + self, error: core_exceptions.GoogleAPICallError + ) -> None: + """Adds credential info string to error details for 401/403/404 errors. + + Args: + error (google.api_core.exceptions.GoogleAPICallError): The error to add the cred info. + """ + if error.code not in [ + HTTPStatus.UNAUTHORIZED, + HTTPStatus.FORBIDDEN, + HTTPStatus.NOT_FOUND, + ]: + return + + cred = self._transport._credentials + + # get_cred_info is only available in google-auth>=2.35.0 + if not hasattr(cred, "get_cred_info"): + return + + # ignore the type check since pypy test fails when get_cred_info + # is not available + cred_info = cred.get_cred_info() # type: ignore + if cred_info and hasattr(error._details, "append"): + error._details.append(json.dumps(cred_info)) + + @property + def api_endpoint(self): + """Return the API endpoint used by the client instance. + + Returns: + str: The API endpoint used by the client instance. + """ + return self._api_endpoint + + @property + def universe_domain(self) -> str: + """Return the universe domain used by the client instance. + + Returns: + str: The universe domain used by the client instance. + """ + return self._universe_domain + + def __init__( + self, + *, + credentials: Optional[ga_credentials.Credentials] = None, + transport: Optional[ + Union[str, DatabaseAdminTransport, Callable[..., DatabaseAdminTransport]] + ] = None, + client_options: Optional[Union[client_options_lib.ClientOptions, dict]] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiates the database admin client. + + Args: + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + transport (Optional[Union[str,DatabaseAdminTransport,Callable[..., DatabaseAdminTransport]]]): + The transport to use, or a Callable that constructs and returns a new transport. + If a Callable is given, it will be called with the same set of initialization + arguments as used in the DatabaseAdminTransport constructor. + If set to None, a transport is chosen automatically. + client_options (Optional[Union[google.api_core.client_options.ClientOptions, dict]]): + Custom options for the client. + + 1. The ``api_endpoint`` property can be used to override the + default endpoint provided by the client when ``transport`` is + not explicitly provided. 
Only if this property is not set and + ``transport`` was not explicitly provided, the endpoint is + determined by the GOOGLE_API_USE_MTLS_ENDPOINT environment + variable, which have one of the following values: + "always" (always use the default mTLS endpoint), "never" (always + use the default regular endpoint) and "auto" (auto-switch to the + default mTLS endpoint if client certificate is present; this is + the default value). + + 2. If the GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable + is "true", then the ``client_cert_source`` property can be used + to provide a client certificate for mTLS transport. If + not provided, the default SSL client certificate will be used if + present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not + set, no client certificate will be used. + + 3. The ``universe_domain`` property can be used to override the + default "googleapis.com" universe. Note that the ``api_endpoint`` + property still takes precedence; and ``universe_domain`` is + currently not supported for mTLS. + + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport + creation failed for any reason. + """ + self._client_options = client_options + if isinstance(self._client_options, dict): + self._client_options = client_options_lib.from_dict(self._client_options) + if self._client_options is None: + self._client_options = client_options_lib.ClientOptions() + self._client_options = cast( + client_options_lib.ClientOptions, self._client_options + ) + + universe_domain_opt = getattr(self._client_options, "universe_domain", None) + + ( + self._use_client_cert, + self._use_mtls_endpoint, + self._universe_domain_env, + ) = DatabaseAdminClient._read_environment_variables() + self._client_cert_source = DatabaseAdminClient._get_client_cert_source( + self._client_options.client_cert_source, self._use_client_cert + ) + self._universe_domain = DatabaseAdminClient._get_universe_domain( + universe_domain_opt, self._universe_domain_env + ) + self._api_endpoint = None # updated below, depending on `transport` + + # Initialize the universe domain validation. + self._is_universe_domain_valid = False + + if CLIENT_LOGGING_SUPPORTED: # pragma: NO COVER + # Setup logging. + client_logging.initialize_logging() + + api_key_value = getattr(self._client_options, "api_key", None) + if api_key_value and credentials: + raise ValueError( + "client_options.api_key and credentials are mutually exclusive" + ) + + # Save or instantiate the transport. + # Ordinarily, we provide the transport, but allowing a custom transport + # instance provides an extensibility point for unusual situations. + transport_provided = isinstance(transport, DatabaseAdminTransport) + if transport_provided: + # transport is a DatabaseAdminTransport instance. + if credentials or self._client_options.credentials_file or api_key_value: + raise ValueError( + "When providing a transport instance, " + "provide its credentials directly." + ) + if self._client_options.scopes: + raise ValueError( + "When providing a transport instance, provide its scopes " + "directly." 
+ ) + self._transport = cast(DatabaseAdminTransport, transport) + self._api_endpoint = self._transport.host + + self._api_endpoint = ( + self._api_endpoint + or DatabaseAdminClient._get_api_endpoint( + self._client_options.api_endpoint, + self._client_cert_source, + self._universe_domain, + self._use_mtls_endpoint, + ) + ) + + if not transport_provided: + import google.auth._default # type: ignore + + if api_key_value and hasattr( + google.auth._default, "get_api_key_credentials" + ): + credentials = google.auth._default.get_api_key_credentials( + api_key_value + ) + + transport_init: Union[ + Type[DatabaseAdminTransport], Callable[..., DatabaseAdminTransport] + ] = ( + DatabaseAdminClient.get_transport_class(transport) + if isinstance(transport, str) or transport is None + else cast(Callable[..., DatabaseAdminTransport], transport) + ) + # initialize with the provided callable or the passed in class + self._transport = transport_init( + credentials=credentials, + credentials_file=self._client_options.credentials_file, + host=self._api_endpoint, + scopes=self._client_options.scopes, + client_cert_source_for_mtls=self._client_cert_source, + quota_project_id=self._client_options.quota_project_id, + client_info=client_info, + always_use_jwt_access=True, + api_audience=self._client_options.api_audience, + ) + + if "async" not in str(self._transport): + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + std_logging.DEBUG + ): # pragma: NO COVER + _LOGGER.debug( + "Created client `google.spanner.admin.database_v1.DatabaseAdminClient`.", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "universeDomain": getattr( + self._transport._credentials, "universe_domain", "" + ), + "credentialsType": f"{type(self._transport._credentials).__module__}.{type(self._transport._credentials).__qualname__}", + "credentialsInfo": getattr( + self.transport._credentials, "get_cred_info", lambda: None + )(), + } + if hasattr(self._transport, "_credentials") + else { + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "credentialsType": None, + }, + ) + + def list_databases( + self, + request: Optional[ + Union[spanner_database_admin.ListDatabasesRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListDatabasesPager: + r"""Lists Cloud Spanner databases. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_list_databases(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.ListDatabasesRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_databases(request=request) + + # Handle the response + for response in page_result: + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.ListDatabasesRequest, dict]): + The request object. 
The request for + [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases]. + parent (str): + Required. The instance whose databases should be listed. + Values are of the form + ``projects//instances/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListDatabasesPager: + The response for + [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases]. + + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_database_admin.ListDatabasesRequest): + request = spanner_database_admin.ListDatabasesRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.list_databases] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__iter__` convenience method. + response = pagers.ListDatabasesPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def create_database( + self, + request: Optional[ + Union[spanner_database_admin.CreateDatabaseRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + create_statement: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation.Operation: + r"""Creates a new Cloud Spanner database and starts to prepare it + for serving. The returned [long-running + operation][google.longrunning.Operation] will have a name of the + format ``/operations/`` and can be + used to track preparation of the database. 
The + [metadata][google.longrunning.Operation.metadata] field type is + [CreateDatabaseMetadata][google.spanner.admin.database.v1.CreateDatabaseMetadata]. + The [response][google.longrunning.Operation.response] field type + is [Database][google.spanner.admin.database.v1.Database], if + successful. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_create_database(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.CreateDatabaseRequest( + parent="parent_value", + create_statement="create_statement_value", + ) + + # Make the request + operation = client.create_database(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.CreateDatabaseRequest, dict]): + The request object. The request for + [CreateDatabase][google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase]. + parent (str): + Required. The name of the instance that will serve the + new database. Values are of the form + ``projects//instances/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + create_statement (str): + Required. A ``CREATE DATABASE`` statement, which + specifies the ID of the new database. The database ID + must conform to the regular expression + ``[a-z][a-z0-9_\-]*[a-z0-9]`` and be between 2 and 30 + characters in length. If the database ID is a reserved + word or if it contains a hyphen, the database ID must be + enclosed in backticks (:literal:`\``). + + This corresponds to the ``create_statement`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation.Operation: + An object representing a long-running operation. + + The result type for the operation will be + :class:`google.cloud.spanner_admin_database_v1.types.Database` + A Cloud Spanner database. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent, create_statement] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." 
+ ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_database_admin.CreateDatabaseRequest): + request = spanner_database_admin.CreateDatabaseRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + if create_statement is not None: + request.create_statement = create_statement + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.create_database] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation.from_gapic( + response, + self._transport.operations_client, + spanner_database_admin.Database, + metadata_type=spanner_database_admin.CreateDatabaseMetadata, + ) + + # Done; return the response. + return response + + def get_database( + self, + request: Optional[ + Union[spanner_database_admin.GetDatabaseRequest, dict] + ] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_database_admin.Database: + r"""Gets the state of a Cloud Spanner database. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_get_database(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.GetDatabaseRequest( + name="name_value", + ) + + # Make the request + response = client.get_database(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.GetDatabaseRequest, dict]): + The request object. The request for + [GetDatabase][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabase]. + name (str): + Required. The name of the requested database. Values are + of the form + ``projects//instances//databases/``. + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ + Returns: + google.cloud.spanner_admin_database_v1.types.Database: + A Cloud Spanner database. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_database_admin.GetDatabaseRequest): + request = spanner_database_admin.GetDatabaseRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.get_database] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def update_database( + self, + request: Optional[ + Union[spanner_database_admin.UpdateDatabaseRequest, dict] + ] = None, + *, + database: Optional[spanner_database_admin.Database] = None, + update_mask: Optional[field_mask_pb2.FieldMask] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation.Operation: + r"""Updates a Cloud Spanner database. The returned [long-running + operation][google.longrunning.Operation] can be used to track + the progress of updating the database. If the named database + does not exist, returns ``NOT_FOUND``. + + While the operation is pending: + + - The database's + [reconciling][google.spanner.admin.database.v1.Database.reconciling] + field is set to true. + - Cancelling the operation is best-effort. If the cancellation + succeeds, the operation metadata's + [cancel_time][google.spanner.admin.database.v1.UpdateDatabaseMetadata.cancel_time] + is set, the updates are reverted, and the operation terminates + with a ``CANCELLED`` status. + - New UpdateDatabase requests will return a + ``FAILED_PRECONDITION`` error until the pending operation is + done (returns successfully or with error). + - Reading the database via the API continues to give the + pre-request values. + + Upon completion of the returned operation: + + - The new values are in effect and readable via the API. + - The database's + [reconciling][google.spanner.admin.database.v1.Database.reconciling] + field becomes false. + + The returned [long-running + operation][google.longrunning.Operation] will have a name of the + format + ``projects//instances//databases//operations/`` + and can be used to track the database modification. The + [metadata][google.longrunning.Operation.metadata] field type is + [UpdateDatabaseMetadata][google.spanner.admin.database.v1.UpdateDatabaseMetadata]. 
+ The [response][google.longrunning.Operation.response] field type + is [Database][google.spanner.admin.database.v1.Database], if + successful. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_update_database(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + database = spanner_admin_database_v1.Database() + database.name = "name_value" + + request = spanner_admin_database_v1.UpdateDatabaseRequest( + database=database, + ) + + # Make the request + operation = client.update_database(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.UpdateDatabaseRequest, dict]): + The request object. The request for + [UpdateDatabase][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabase]. + database (google.cloud.spanner_admin_database_v1.types.Database): + Required. The database to update. The ``name`` field of + the database is of the form + ``projects//instances//databases/``. + + This corresponds to the ``database`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + update_mask (google.protobuf.field_mask_pb2.FieldMask): + Required. The list of fields to update. Currently, only + ``enable_drop_protection`` field can be updated. + + This corresponds to the ``update_mask`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation.Operation: + An object representing a long-running operation. + + The result type for the operation will be + :class:`google.cloud.spanner_admin_database_v1.types.Database` + A Cloud Spanner database. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [database, update_mask] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_database_admin.UpdateDatabaseRequest): + request = spanner_database_admin.UpdateDatabaseRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. 
+ if database is not None: + request.database = database + if update_mask is not None: + request.update_mask = update_mask + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.update_database] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata( + (("database.name", request.database.name),) + ), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation.from_gapic( + response, + self._transport.operations_client, + spanner_database_admin.Database, + metadata_type=spanner_database_admin.UpdateDatabaseMetadata, + ) + + # Done; return the response. + return response + + def update_database_ddl( + self, + request: Optional[ + Union[spanner_database_admin.UpdateDatabaseDdlRequest, dict] + ] = None, + *, + database: Optional[str] = None, + statements: Optional[MutableSequence[str]] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation.Operation: + r"""Updates the schema of a Cloud Spanner database by + creating/altering/dropping tables, columns, indexes, etc. The + returned [long-running operation][google.longrunning.Operation] + will have a name of the format + ``/operations/`` and can be used to + track execution of the schema change(s). The + [metadata][google.longrunning.Operation.metadata] field type is + [UpdateDatabaseDdlMetadata][google.spanner.admin.database.v1.UpdateDatabaseDdlMetadata]. + The operation has no response. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_update_database_ddl(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.UpdateDatabaseDdlRequest( + database="database_value", + statements=['statements_value1', 'statements_value2'], + ) + + # Make the request + operation = client.update_database_ddl(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.UpdateDatabaseDdlRequest, dict]): + The request object. Enqueues the given DDL statements to be applied, in + order but not necessarily all at once, to the database + schema at some point (or points) in the future. The + server checks that the statements are executable + (syntactically valid, name tables that exist, etc.) 
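As the ``update_mask`` documentation above notes, only ``enable_drop_protection`` is currently updatable through ``update_database``. A sketch that toggles it, assuming a placeholder database name:

.. code-block:: python

    from google.cloud import spanner_admin_database_v1
    from google.protobuf import field_mask_pb2

    client = spanner_admin_database_v1.DatabaseAdminClient()

    database = spanner_admin_database_v1.Database(
        name="projects/my-project/instances/my-instance/databases/example-db",
        enable_drop_protection=True,
    )

    operation = client.update_database(
        database=database,
        # The mask is required; it limits the update to the named field.
        update_mask=field_mask_pb2.FieldMask(paths=["enable_drop_protection"]),
    )
    updated = operation.result()  # Database, with reconciling back to False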
+ before enqueueing them, but they may still fail upon + later execution (e.g., if a statement from another batch + of statements is applied first and it conflicts in some + way, or if there is some data-related problem like a + ``NULL`` value in a column to which ``NOT NULL`` would + be added). If a statement fails, all subsequent + statements in the batch are automatically cancelled. + + Each batch of statements is assigned a name which can be + used with the + [Operations][google.longrunning.Operations] API to + monitor progress. See the + [operation_id][google.spanner.admin.database.v1.UpdateDatabaseDdlRequest.operation_id] + field for more details. + database (str): + Required. The database to update. + This corresponds to the ``database`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + statements (MutableSequence[str]): + Required. DDL statements to be + applied to the database. + + This corresponds to the ``statements`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation.Operation: + An object representing a long-running operation. + + The result type for the operation will be :class:`google.protobuf.empty_pb2.Empty` A generic empty message that you can re-use to avoid defining duplicated + empty messages in your APIs. A typical example is to + use it as the request or the response type of an API + method. For instance: + + service Foo { + rpc Bar(google.protobuf.Empty) returns + (google.protobuf.Empty); + + } + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [database, statements] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_database_admin.UpdateDatabaseDdlRequest): + request = spanner_database_admin.UpdateDatabaseDdlRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if database is not None: + request.database = database + if statements is not None: + request.statements = statements + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.update_database_ddl] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("database", request.database),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. 
+ response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation.from_gapic( + response, + self._transport.operations_client, + empty_pb2.Empty, + metadata_type=spanner_database_admin.UpdateDatabaseDdlMetadata, + ) + + # Done; return the response. + return response + + def drop_database( + self, + request: Optional[ + Union[spanner_database_admin.DropDatabaseRequest, dict] + ] = None, + *, + database: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Drops (aka deletes) a Cloud Spanner database. Completed backups + for the database will be retained according to their + ``expire_time``. Note: Cloud Spanner might continue to accept + requests for a few seconds after the database has been deleted. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_drop_database(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.DropDatabaseRequest( + database="database_value", + ) + + # Make the request + client.drop_database(request=request) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.DropDatabaseRequest, dict]): + The request object. The request for + [DropDatabase][google.spanner.admin.database.v1.DatabaseAdmin.DropDatabase]. + database (str): + Required. The database to be dropped. + This corresponds to the ``database`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [database] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_database_admin.DropDatabaseRequest): + request = spanner_database_admin.DropDatabaseRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. 
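Because the DDL operation's response type is ``Empty`` (only ``UpdateDatabaseDdlMetadata`` carries information, as the wrapping above shows), callers typically wait on the future purely for completion. A sketch with a placeholder database and a single statement:

.. code-block:: python

    from google.cloud import spanner_admin_database_v1

    client = spanner_admin_database_v1.DatabaseAdminClient()

    operation = client.update_database_ddl(
        database="projects/my-project/instances/my-instance/databases/example-db",
        statements=[
            "CREATE TABLE Singers ("
            "  SingerId INT64 NOT NULL,"
            "  FirstName STRING(1024)"
            ") PRIMARY KEY (SingerId)"
        ],
    )

    # The operation has no meaningful response; result() returns an
    # empty message once every statement has been applied.
    operation.result()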
+ if database is not None: + request.database = database + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.drop_database] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("database", request.database),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + def get_database_ddl( + self, + request: Optional[ + Union[spanner_database_admin.GetDatabaseDdlRequest, dict] + ] = None, + *, + database: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_database_admin.GetDatabaseDdlResponse: + r"""Returns the schema of a Cloud Spanner database as a list of + formatted DDL statements. This method does not show pending + schema updates, those may be queried using the + [Operations][google.longrunning.Operations] API. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_get_database_ddl(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.GetDatabaseDdlRequest( + database="database_value", + ) + + # Make the request + response = client.get_database_ddl(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.GetDatabaseDdlRequest, dict]): + The request object. The request for + [GetDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl]. + database (str): + Required. The database whose schema we wish to get. + Values are of the form + ``projects//instances//databases/`` + + This corresponds to the ``database`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.types.GetDatabaseDdlResponse: + The response for + [GetDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl]. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. 
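``drop_database`` above returns ``None``: the drop is synchronous from the caller's point of view, subject to the caveat that completed backups outlive the database according to their ``expire_time``. A one-call sketch with a placeholder name:

.. code-block:: python

    from google.cloud import spanner_admin_database_v1

    client = spanner_admin_database_v1.DatabaseAdminClient()

    # Irreversible apart from any retained backups; placeholder name below.
    client.drop_database(
        database="projects/my-project/instances/my-instance/databases/example-db"
    )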
+ flattened_params = [database] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_database_admin.GetDatabaseDdlRequest): + request = spanner_database_admin.GetDatabaseDdlRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if database is not None: + request.database = database + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.get_database_ddl] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("database", request.database),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def set_iam_policy( + self, + request: Optional[Union[iam_policy_pb2.SetIamPolicyRequest, dict]] = None, + *, + resource: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> policy_pb2.Policy: + r"""Sets the access control policy on a database or backup resource. + Replaces any existing policy. + + Authorization requires ``spanner.databases.setIamPolicy`` + permission on + [resource][google.iam.v1.SetIamPolicyRequest.resource]. For + backups, authorization requires ``spanner.backups.setIamPolicy`` + permission on + [resource][google.iam.v1.SetIamPolicyRequest.resource]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + from google.iam.v1 import iam_policy_pb2 # type: ignore + + def sample_set_iam_policy(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = iam_policy_pb2.SetIamPolicyRequest( + resource="resource_value", + ) + + # Make the request + response = client.set_iam_policy(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.iam.v1.iam_policy_pb2.SetIamPolicyRequest, dict]): + The request object. Request message for ``SetIamPolicy`` method. + resource (str): + REQUIRED: The resource for which the + policy is being specified. See the + operation documentation for the + appropriate value for this field. + + This corresponds to the ``resource`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. 
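The ``GetDatabaseDdlResponse`` returned above carries the current schema as a list of formatted statements, so dumping the DDL is a short loop. A sketch with a placeholder database name:

.. code-block:: python

    from google.cloud import spanner_admin_database_v1

    client = spanner_admin_database_v1.DatabaseAdminClient()

    response = client.get_database_ddl(
        database="projects/my-project/instances/my-instance/databases/example-db"
    )
    for statement in response.statements:
        print(statement)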
+ metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.iam.v1.policy_pb2.Policy: + An Identity and Access Management (IAM) policy, which specifies access + controls for Google Cloud resources. + + A Policy is a collection of bindings. A binding binds + one or more members, or principals, to a single role. + Principals can be user accounts, service accounts, + Google groups, and domains (such as G Suite). A role + is a named list of permissions; each role can be an + IAM predefined role or a user-created custom role. + + For some types of Google Cloud resources, a binding + can also specify a condition, which is a logical + expression that allows access to a resource only if + the expression evaluates to true. A condition can add + constraints based on attributes of the request, the + resource, or both. To learn which resources support + conditions in their IAM policies, see the [IAM + documentation](https://cloud.google.com/iam/help/conditions/resource-policies). + + **JSON example:** + + :literal:`` { "bindings": [ { "role": "roles/resourcemanager.organizationAdmin", "members": [ "user:mike@example.com", "group:admins@example.com", "domain:google.com", "serviceAccount:my-project-id@appspot.gserviceaccount.com" ] }, { "role": "roles/resourcemanager.organizationViewer", "members": [ "user:eve@example.com" ], "condition": { "title": "expirable access", "description": "Does not grant access after Sep 2020", "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')", } } ], "etag": "BwWWja0YfJA=", "version": 3 }`\ \` + + **YAML example:** + + :literal:`` bindings: - members: - user:mike@example.com - group:admins@example.com - domain:google.com - serviceAccount:my-project-id@appspot.gserviceaccount.com role: roles/resourcemanager.organizationAdmin - members: - user:eve@example.com role: roles/resourcemanager.organizationViewer condition: title: expirable access description: Does not grant access after Sep 2020 expression: request.time < timestamp('2020-10-01T00:00:00.000Z') etag: BwWWja0YfJA= version: 3`\ \` + + For a description of IAM and its features, see the + [IAM + documentation](https://cloud.google.com/iam/docs/). + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [resource] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + if isinstance(request, dict): + # - The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + request = iam_policy_pb2.SetIamPolicyRequest(**request) + elif not request: + # Null request, just make one. + request = iam_policy_pb2.SetIamPolicyRequest() + if resource is not None: + request.resource = resource + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.set_iam_policy] + + # Certain fields should be provided within the metadata header; + # add these here. 
+ metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("resource", request.resource),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def get_iam_policy( + self, + request: Optional[Union[iam_policy_pb2.GetIamPolicyRequest, dict]] = None, + *, + resource: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> policy_pb2.Policy: + r"""Gets the access control policy for a database or backup + resource. Returns an empty policy if a database or backup exists + but does not have a policy set. + + Authorization requires ``spanner.databases.getIamPolicy`` + permission on + [resource][google.iam.v1.GetIamPolicyRequest.resource]. For + backups, authorization requires ``spanner.backups.getIamPolicy`` + permission on + [resource][google.iam.v1.GetIamPolicyRequest.resource]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + from google.iam.v1 import iam_policy_pb2 # type: ignore + + def sample_get_iam_policy(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = iam_policy_pb2.GetIamPolicyRequest( + resource="resource_value", + ) + + # Make the request + response = client.get_iam_policy(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.iam.v1.iam_policy_pb2.GetIamPolicyRequest, dict]): + The request object. Request message for ``GetIamPolicy`` method. + resource (str): + REQUIRED: The resource for which the + policy is being requested. See the + operation documentation for the + appropriate value for this field. + + This corresponds to the ``resource`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.iam.v1.policy_pb2.Policy: + An Identity and Access Management (IAM) policy, which specifies access + controls for Google Cloud resources. + + A Policy is a collection of bindings. A binding binds + one or more members, or principals, to a single role. + Principals can be user accounts, service accounts, + Google groups, and domains (such as G Suite). A role + is a named list of permissions; each role can be an + IAM predefined role or a user-created custom role. 
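Note that only ``resource`` is flattened on ``set_iam_policy`` (``flattened_params = [resource]`` above); the policy itself must travel in the request object. A sketch granting a single role, where the principal and resource name are placeholders:

.. code-block:: python

    from google.cloud import spanner_admin_database_v1
    from google.iam.v1 import iam_policy_pb2, policy_pb2

    client = spanner_admin_database_v1.DatabaseAdminClient()

    policy = policy_pb2.Policy(
        bindings=[
            policy_pb2.Binding(
                role="roles/spanner.databaseReader",
                members=["user:alice@example.com"],
            )
        ]
    )
    response = client.set_iam_policy(
        request=iam_policy_pb2.SetIamPolicyRequest(
            resource="projects/my-project/instances/my-instance/databases/example-db",
            policy=policy,
        )
    )

Since ``set_iam_policy`` replaces any existing policy, production code usually reads the current policy first, modifies it, and writes it back so the ``etag`` guards against concurrent edits.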
+ + For some types of Google Cloud resources, a binding + can also specify a condition, which is a logical + expression that allows access to a resource only if + the expression evaluates to true. A condition can add + constraints based on attributes of the request, the + resource, or both. To learn which resources support + conditions in their IAM policies, see the [IAM + documentation](https://cloud.google.com/iam/help/conditions/resource-policies). + + **JSON example:** + + :literal:`` { "bindings": [ { "role": "roles/resourcemanager.organizationAdmin", "members": [ "user:mike@example.com", "group:admins@example.com", "domain:google.com", "serviceAccount:my-project-id@appspot.gserviceaccount.com" ] }, { "role": "roles/resourcemanager.organizationViewer", "members": [ "user:eve@example.com" ], "condition": { "title": "expirable access", "description": "Does not grant access after Sep 2020", "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')", } } ], "etag": "BwWWja0YfJA=", "version": 3 }`\ \` + + **YAML example:** + + :literal:`` bindings: - members: - user:mike@example.com - group:admins@example.com - domain:google.com - serviceAccount:my-project-id@appspot.gserviceaccount.com role: roles/resourcemanager.organizationAdmin - members: - user:eve@example.com role: roles/resourcemanager.organizationViewer condition: title: expirable access description: Does not grant access after Sep 2020 expression: request.time < timestamp('2020-10-01T00:00:00.000Z') etag: BwWWja0YfJA= version: 3`\ \` + + For a description of IAM and its features, see the + [IAM + documentation](https://cloud.google.com/iam/docs/). + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [resource] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + if isinstance(request, dict): + # - The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + request = iam_policy_pb2.GetIamPolicyRequest(**request) + elif not request: + # Null request, just make one. + request = iam_policy_pb2.GetIamPolicyRequest() + if resource is not None: + request.resource = resource + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.get_iam_policy] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("resource", request.resource),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. 
+ return response + + def test_iam_permissions( + self, + request: Optional[Union[iam_policy_pb2.TestIamPermissionsRequest, dict]] = None, + *, + resource: Optional[str] = None, + permissions: Optional[MutableSequence[str]] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> iam_policy_pb2.TestIamPermissionsResponse: + r"""Returns permissions that the caller has on the specified + database or backup resource. + + Attempting this RPC on a non-existent Cloud Spanner database + will result in a NOT_FOUND error if the user has + ``spanner.databases.list`` permission on the containing Cloud + Spanner instance. Otherwise returns an empty set of permissions. + Calling this method on a backup that does not exist will result + in a NOT_FOUND error if the user has ``spanner.backups.list`` + permission on the containing instance. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + from google.iam.v1 import iam_policy_pb2 # type: ignore + + def sample_test_iam_permissions(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = iam_policy_pb2.TestIamPermissionsRequest( + resource="resource_value", + permissions=['permissions_value1', 'permissions_value2'], + ) + + # Make the request + response = client.test_iam_permissions(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.iam.v1.iam_policy_pb2.TestIamPermissionsRequest, dict]): + The request object. Request message for ``TestIamPermissions`` method. + resource (str): + REQUIRED: The resource for which the + policy detail is being requested. See + the operation documentation for the + appropriate value for this field. + + This corresponds to the ``resource`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + permissions (MutableSequence[str]): + The set of permissions to check for the ``resource``. + Permissions with wildcards (such as '*' or 'storage.*') + are not allowed. For more information see `IAM + Overview `__. + + This corresponds to the ``permissions`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.iam.v1.iam_policy_pb2.TestIamPermissionsResponse: + Response message for TestIamPermissions method. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. 
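Completing the pattern from ``get_iam_policy`` above: ``resource`` is flattened there too, so reading a policy back is a single call. A sketch with a placeholder resource name:

.. code-block:: python

    from google.cloud import spanner_admin_database_v1

    client = spanner_admin_database_v1.DatabaseAdminClient()

    policy = client.get_iam_policy(
        resource="projects/my-project/instances/my-instance/databases/example-db"
    )
    for binding in policy.bindings:
        print(binding.role, list(binding.members))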
+ flattened_params = [resource, permissions] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + if isinstance(request, dict): + # - The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + request = iam_policy_pb2.TestIamPermissionsRequest(**request) + elif not request: + # Null request, just make one. + request = iam_policy_pb2.TestIamPermissionsRequest() + if resource is not None: + request.resource = resource + if permissions: + request.permissions.extend(permissions) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.test_iam_permissions] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("resource", request.resource),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def create_backup( + self, + request: Optional[Union[gsad_backup.CreateBackupRequest, dict]] = None, + *, + parent: Optional[str] = None, + backup: Optional[gsad_backup.Backup] = None, + backup_id: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation.Operation: + r"""Starts creating a new Cloud Spanner Backup. The returned backup + [long-running operation][google.longrunning.Operation] will have + a name of the format + ``projects//instances//backups//operations/`` + and can be used to track creation of the backup. The + [metadata][google.longrunning.Operation.metadata] field type is + [CreateBackupMetadata][google.spanner.admin.database.v1.CreateBackupMetadata]. + The [response][google.longrunning.Operation.response] field type + is [Backup][google.spanner.admin.database.v1.Backup], if + successful. Cancelling the returned operation will stop the + creation and delete the backup. There can be only one pending + backup creation per database. Backup creation of different + databases can run concurrently. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
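``test_iam_permissions`` is the usual way to probe access without attempting the operation; as documented above, for an existing resource it does not error on missing permissions but simply returns the subset the caller holds. A sketch using two illustrative permission strings:

.. code-block:: python

    from google.cloud import spanner_admin_database_v1

    client = spanner_admin_database_v1.DatabaseAdminClient()

    response = client.test_iam_permissions(
        resource="projects/my-project/instances/my-instance/databases/example-db",
        permissions=["spanner.databases.read", "spanner.databases.write"],
    )
    # Only the permissions the caller actually holds come back.
    print(list(response.permissions))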
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_create_backup(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.CreateBackupRequest( + parent="parent_value", + backup_id="backup_id_value", + ) + + # Make the request + operation = client.create_backup(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.CreateBackupRequest, dict]): + The request object. The request for + [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup]. + parent (str): + Required. The name of the instance in which the backup + will be created. This must be the same instance that + contains the database the backup will be created from. + The backup will be stored in the location(s) specified + in the instance configuration of this instance. Values + are of the form + ``projects//instances/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + backup (google.cloud.spanner_admin_database_v1.types.Backup): + Required. The backup to create. + This corresponds to the ``backup`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + backup_id (str): + Required. The id of the backup to be created. The + ``backup_id`` appended to ``parent`` forms the full + backup name of the form + ``projects//instances//backups/``. + + This corresponds to the ``backup_id`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation.Operation: + An object representing a long-running operation. + + The result type for the operation will be + :class:`google.cloud.spanner_admin_database_v1.types.Backup` + A backup of a Cloud Spanner database. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent, backup, backup_id] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, gsad_backup.CreateBackupRequest): + request = gsad_backup.CreateBackupRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. 
+ if parent is not None: + request.parent = parent + if backup is not None: + request.backup = backup + if backup_id is not None: + request.backup_id = backup_id + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.create_backup] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation.from_gapic( + response, + self._transport.operations_client, + gsad_backup.Backup, + metadata_type=gsad_backup.CreateBackupMetadata, + ) + + # Done; return the response. + return response + + def copy_backup( + self, + request: Optional[Union[backup.CopyBackupRequest, dict]] = None, + *, + parent: Optional[str] = None, + backup_id: Optional[str] = None, + source_backup: Optional[str] = None, + expire_time: Optional[timestamp_pb2.Timestamp] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation.Operation: + r"""Starts copying a Cloud Spanner Backup. The returned backup + [long-running operation][google.longrunning.Operation] will have + a name of the format + ``projects//instances//backups//operations/`` + and can be used to track copying of the backup. The operation is + associated with the destination backup. The + [metadata][google.longrunning.Operation.metadata] field type is + [CopyBackupMetadata][google.spanner.admin.database.v1.CopyBackupMetadata]. + The [response][google.longrunning.Operation.response] field type + is [Backup][google.spanner.admin.database.v1.Backup], if + successful. Cancelling the returned operation will stop the + copying and delete the destination backup. Concurrent CopyBackup + requests can run on the same source backup. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_copy_backup(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.CopyBackupRequest( + parent="parent_value", + backup_id="backup_id_value", + source_backup="source_backup_value", + ) + + # Make the request + operation = client.copy_backup(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.CopyBackupRequest, dict]): + The request object. The request for + [CopyBackup][google.spanner.admin.database.v1.DatabaseAdmin.CopyBackup]. + parent (str): + Required. The name of the destination instance that will + contain the backup copy. Values are of the form: + ``projects//instances/``. 
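A backup needs at least the source ``database`` and an ``expire_time`` on the ``Backup`` message, with ``parent`` and ``backup_id`` passed alongside as the flattened arguments above show. A sketch where the identifiers are placeholders and the seven-day expiry is arbitrary (proto-plus accepts a timezone-aware ``datetime`` for the ``Timestamp`` field):

.. code-block:: python

    import datetime

    from google.cloud import spanner_admin_database_v1

    client = spanner_admin_database_v1.DatabaseAdminClient()

    backup = spanner_admin_database_v1.Backup(
        database="projects/my-project/instances/my-instance/databases/example-db",
        expire_time=datetime.datetime.now(datetime.timezone.utc)
        + datetime.timedelta(days=7),
    )

    operation = client.create_backup(
        parent="projects/my-project/instances/my-instance",
        backup=backup,
        backup_id="example-backup",
    )
    backup = operation.result()  # blocks until the backup is READY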
+ + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + backup_id (str): + Required. The id of the backup copy. The ``backup_id`` + appended to ``parent`` forms the full backup_uri of the + form + ``projects//instances//backups/``. + + This corresponds to the ``backup_id`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + source_backup (str): + Required. The source backup to be copied. The source + backup needs to be in READY state for it to be copied. + Once CopyBackup is in progress, the source backup cannot + be deleted or cleaned up on expiration until CopyBackup + is finished. Values are of the form: + ``projects//instances//backups/``. + + This corresponds to the ``source_backup`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + expire_time (google.protobuf.timestamp_pb2.Timestamp): + Required. The expiration time of the backup in + microsecond granularity. The expiration time must be at + least 6 hours and at most 366 days from the + ``create_time`` of the source backup. Once the + ``expire_time`` has passed, the backup is eligible to be + automatically deleted by Cloud Spanner to free the + resources used by the backup. + + This corresponds to the ``expire_time`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation.Operation: + An object representing a long-running operation. + + The result type for the operation will be + :class:`google.cloud.spanner_admin_database_v1.types.Backup` + A backup of a Cloud Spanner database. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent, backup_id, source_backup, expire_time] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, backup.CopyBackupRequest): + request = backup.CopyBackupRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + if backup_id is not None: + request.backup_id = backup_id + if source_backup is not None: + request.source_backup = source_backup + if expire_time is not None: + request.expire_time = expire_time + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.copy_backup] + + # Certain fields should be provided within the metadata header; + # add these here. 
+ metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation.from_gapic( + response, + self._transport.operations_client, + backup.Backup, + metadata_type=backup.CopyBackupMetadata, + ) + + # Done; return the response. + return response + + def get_backup( + self, + request: Optional[Union[backup.GetBackupRequest, dict]] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> backup.Backup: + r"""Gets metadata on a pending or completed + [Backup][google.spanner.admin.database.v1.Backup]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_get_backup(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.GetBackupRequest( + name="name_value", + ) + + # Make the request + response = client.get_backup(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.GetBackupRequest, dict]): + The request object. The request for + [GetBackup][google.spanner.admin.database.v1.DatabaseAdmin.GetBackup]. + name (str): + Required. Name of the backup. Values are of the form + ``projects//instances//backups/``. + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.types.Backup: + A backup of a Cloud Spanner database. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. 
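``copy_backup`` takes the destination ``parent`` and ``backup_id`` plus the ``source_backup`` name and a fresh ``expire_time``, mirroring the argument list above. A sketch copying a backup to another instance in the same project (all names are placeholders):

.. code-block:: python

    import datetime

    from google.cloud import spanner_admin_database_v1

    client = spanner_admin_database_v1.DatabaseAdminClient()

    operation = client.copy_backup(
        parent="projects/my-project/instances/other-instance",
        backup_id="example-backup-copy",
        source_backup=(
            "projects/my-project/instances/my-instance/backups/example-backup"
        ),
        expire_time=datetime.datetime.now(datetime.timezone.utc)
        + datetime.timedelta(days=30),
    )
    copied = operation.result()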
+ if not isinstance(request, backup.GetBackupRequest): + request = backup.GetBackupRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.get_backup] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def update_backup( + self, + request: Optional[Union[gsad_backup.UpdateBackupRequest, dict]] = None, + *, + backup: Optional[gsad_backup.Backup] = None, + update_mask: Optional[field_mask_pb2.FieldMask] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> gsad_backup.Backup: + r"""Updates a pending or completed + [Backup][google.spanner.admin.database.v1.Backup]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_update_backup(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.UpdateBackupRequest( + ) + + # Make the request + response = client.update_backup(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.UpdateBackupRequest, dict]): + The request object. The request for + [UpdateBackup][google.spanner.admin.database.v1.DatabaseAdmin.UpdateBackup]. + backup (google.cloud.spanner_admin_database_v1.types.Backup): + Required. The backup to update. ``backup.name``, and the + fields to be updated as specified by ``update_mask`` are + required. Other fields are ignored. Update is only + supported for the following fields: + + - ``backup.expire_time``. + + This corresponds to the ``backup`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + update_mask (google.protobuf.field_mask_pb2.FieldMask): + Required. A mask specifying which fields (e.g. + ``expire_time``) in the Backup resource should be + updated. This mask is relative to the Backup resource, + not to the request message. The field mask must always + be specified; this prevents any future fields from being + erased accidentally by clients that do not know about + them. + + This corresponds to the ``update_mask`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. 
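Reading a single backup back is symmetric with ``get_database``: the flattened ``name`` argument above is all that is required. A placeholder sketch:

.. code-block:: python

    from google.cloud import spanner_admin_database_v1

    client = spanner_admin_database_v1.DatabaseAdminClient()

    backup = client.get_backup(
        name="projects/my-project/instances/my-instance/backups/example-backup"
    )
    print(backup.state, backup.size_bytes)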
+ metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.types.Backup: + A backup of a Cloud Spanner database. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [backup, update_mask] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, gsad_backup.UpdateBackupRequest): + request = gsad_backup.UpdateBackupRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if backup is not None: + request.backup = backup + if update_mask is not None: + request.update_mask = update_mask + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.update_backup] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata( + (("backup.name", request.backup.name),) + ), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def delete_backup( + self, + request: Optional[Union[backup.DeleteBackupRequest, dict]] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Deletes a pending or completed + [Backup][google.spanner.admin.database.v1.Backup]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_delete_backup(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.DeleteBackupRequest( + name="name_value", + ) + + # Make the request + client.delete_backup(request=request) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.DeleteBackupRequest, dict]): + The request object. The request for + [DeleteBackup][google.spanner.admin.database.v1.DatabaseAdmin.DeleteBackup]. + name (str): + Required. Name of the backup to delete. Values are of + the form + ``projects//instances//backups/``. 
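Since ``expire_time`` is the one mutable field called out in the ``update_backup`` documentation above, the typical call extends a backup's retention. A sketch with placeholder names and an arbitrary new expiry:

.. code-block:: python

    import datetime

    from google.cloud import spanner_admin_database_v1
    from google.protobuf import field_mask_pb2

    client = spanner_admin_database_v1.DatabaseAdminClient()

    backup = spanner_admin_database_v1.Backup(
        name="projects/my-project/instances/my-instance/backups/example-backup",
        expire_time=datetime.datetime.now(datetime.timezone.utc)
        + datetime.timedelta(days=30),
    )
    updated = client.update_backup(
        backup=backup,
        # The mask is required and scopes the update to expire_time.
        update_mask=field_mask_pb2.FieldMask(paths=["expire_time"]),
    )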
+ + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, backup.DeleteBackupRequest): + request = backup.DeleteBackupRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.delete_backup] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + def list_backups( + self, + request: Optional[Union[backup.ListBackupsRequest, dict]] = None, + *, + parent: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListBackupsPager: + r"""Lists completed and pending backups. Backups returned are + ordered by ``create_time`` in descending order, starting from + the most recent ``create_time``. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_list_backups(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.ListBackupsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_backups(request=request) + + # Handle the response + for response in page_result: + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.ListBackupsRequest, dict]): + The request object. The request for + [ListBackups][google.spanner.admin.database.v1.DatabaseAdmin.ListBackups]. 
+ parent (str): + Required. The instance to list backups from. Values are + of the form ``projects//instances/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListBackupsPager: + The response for + [ListBackups][google.spanner.admin.database.v1.DatabaseAdmin.ListBackups]. + + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, backup.ListBackupsRequest): + request = backup.ListBackupsRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.list_backups] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__iter__` convenience method. + response = pagers.ListBackupsPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def restore_database( + self, + request: Optional[ + Union[spanner_database_admin.RestoreDatabaseRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + database_id: Optional[str] = None, + backup: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation.Operation: + r"""Create a new database by restoring from a completed backup. The + new database must be in the same project and in an instance with + the same instance configuration as the instance containing the + backup. 
The returned database [long-running + operation][google.longrunning.Operation] has a name of the + format + ``projects//instances//databases//operations/``, + and can be used to track the progress of the operation, and to + cancel it. The [metadata][google.longrunning.Operation.metadata] + field type is + [RestoreDatabaseMetadata][google.spanner.admin.database.v1.RestoreDatabaseMetadata]. + The [response][google.longrunning.Operation.response] type is + [Database][google.spanner.admin.database.v1.Database], if + successful. Cancelling the returned operation will stop the + restore and delete the database. There can be only one database + being restored into an instance at a time. Once the restore + operation completes, a new restore operation can be initiated, + without waiting for the optimize operation associated with the + first restore to complete. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_restore_database(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.RestoreDatabaseRequest( + backup="backup_value", + parent="parent_value", + database_id="database_id_value", + ) + + # Make the request + operation = client.restore_database(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.RestoreDatabaseRequest, dict]): + The request object. The request for + [RestoreDatabase][google.spanner.admin.database.v1.DatabaseAdmin.RestoreDatabase]. + parent (str): + Required. The name of the instance in which to create + the restored database. This instance must be in the same + project and have the same instance configuration as the + instance containing the source backup. Values are of the + form ``projects//instances/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + database_id (str): + Required. The id of the database to create and restore + to. This database must not already exist. The + ``database_id`` appended to ``parent`` forms the full + database name of the form + ``projects//instances//databases/``. + + This corresponds to the ``database_id`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + backup (str): + Name of the backup from which to restore. Values are of + the form + ``projects//instances//backups/``. + + This corresponds to the ``backup`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ + Returns: + google.api_core.operation.Operation: + An object representing a long-running operation. + + The result type for the operation will be + :class:`google.cloud.spanner_admin_database_v1.types.Database` + A Cloud Spanner database. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent, database_id, backup] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_database_admin.RestoreDatabaseRequest): + request = spanner_database_admin.RestoreDatabaseRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + if database_id is not None: + request.database_id = database_id + if backup is not None: + request.backup = backup + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.restore_database] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation.from_gapic( + response, + self._transport.operations_client, + spanner_database_admin.Database, + metadata_type=spanner_database_admin.RestoreDatabaseMetadata, + ) + + # Done; return the response. + return response + + def list_database_operations( + self, + request: Optional[ + Union[spanner_database_admin.ListDatabaseOperationsRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListDatabaseOperationsPager: + r"""Lists database + [longrunning-operations][google.longrunning.Operation]. A + database operation has a name of the form + ``projects//instances//databases//operations/``. + The long-running operation + [metadata][google.longrunning.Operation.metadata] field type + ``metadata.type_url`` describes the type of the metadata. + Operations returned include those that have + completed/failed/canceled within the last 7 days, and pending + operations. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_list_database_operations(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.ListDatabaseOperationsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_database_operations(request=request) + + # Handle the response + for response in page_result: + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsRequest, dict]): + The request object. The request for + [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations]. + parent (str): + Required. The instance of the database operations. + Values are of the form + ``projects//instances/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListDatabaseOperationsPager: + The response for + [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations]. + + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance( + request, spanner_database_admin.ListDatabaseOperationsRequest + ): + request = spanner_database_admin.ListDatabaseOperationsRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.list_database_operations] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__iter__` convenience method. 
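+ # Each further page is fetched lazily: advancing the iterator past the
+ # current page re-issues the wrapped RPC with that page's token, using
+ # the same retry, timeout, and metadata. Pages can also be walked
+ # explicitly via the pager's ``pages`` property.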
+ response = pagers.ListDatabaseOperationsPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def list_backup_operations( + self, + request: Optional[Union[backup.ListBackupOperationsRequest, dict]] = None, + *, + parent: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListBackupOperationsPager: + r"""Lists the backup [long-running + operations][google.longrunning.Operation] in the given instance. + A backup operation has a name of the form + ``projects//instances//backups//operations/``. + The long-running operation + [metadata][google.longrunning.Operation.metadata] field type + ``metadata.type_url`` describes the type of the metadata. + Operations returned include those that have + completed/failed/canceled within the last 7 days, and pending + operations. Operations returned are ordered by + ``operation.metadata.value.progress.start_time`` in descending + order starting from the most recently started operation. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_list_backup_operations(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.ListBackupOperationsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_backup_operations(request=request) + + # Handle the response + for response in page_result: + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.ListBackupOperationsRequest, dict]): + The request object. The request for + [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations]. + parent (str): + Required. The instance of the backup operations. Values + are of the form + ``projects//instances/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListBackupOperationsPager: + The response for + [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations]. + + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. 
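+ # Supplying both ``request`` and the flattened ``parent`` argument is
+ # ambiguous, so that combination is rejected below with a ValueError.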
+ flattened_params = [parent] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, backup.ListBackupOperationsRequest): + request = backup.ListBackupOperationsRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.list_backup_operations] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__iter__` convenience method. + response = pagers.ListBackupOperationsPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def list_database_roles( + self, + request: Optional[ + Union[spanner_database_admin.ListDatabaseRolesRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListDatabaseRolesPager: + r"""Lists Cloud Spanner database roles. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_list_database_roles(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.ListDatabaseRolesRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_database_roles(request=request) + + # Handle the response + for response in page_result: + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.ListDatabaseRolesRequest, dict]): + The request object. The request for + [ListDatabaseRoles][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseRoles]. + parent (str): + Required. The database whose roles should be listed. + Values are of the form + ``projects//instances//databases/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. 
+ metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListDatabaseRolesPager: + The response for + [ListDatabaseRoles][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseRoles]. + + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_database_admin.ListDatabaseRolesRequest): + request = spanner_database_admin.ListDatabaseRolesRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.list_database_roles] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__iter__` convenience method. + response = pagers.ListDatabaseRolesPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def add_split_points( + self, + request: Optional[ + Union[spanner_database_admin.AddSplitPointsRequest, dict] + ] = None, + *, + database: Optional[str] = None, + split_points: Optional[ + MutableSequence[spanner_database_admin.SplitPoints] + ] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_database_admin.AddSplitPointsResponse: + r"""Adds split points to specified tables, indexes of a + database. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_add_split_points(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.AddSplitPointsRequest( + database="database_value", + ) + + # Make the request + response = client.add_split_points(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.AddSplitPointsRequest, dict]): + The request object. The request for + [AddSplitPoints][google.spanner.admin.database.v1.DatabaseAdmin.AddSplitPoints]. + database (str): + Required. The database on whose tables/indexes split + points are to be added. Values are of the form + ``projects//instances//databases/``. + + This corresponds to the ``database`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + split_points (MutableSequence[google.cloud.spanner_admin_database_v1.types.SplitPoints]): + Required. The split points to add. + This corresponds to the ``split_points`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.types.AddSplitPointsResponse: + The response for + [AddSplitPoints][google.spanner.admin.database.v1.DatabaseAdmin.AddSplitPoints]. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [database, split_points] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_database_admin.AddSplitPointsRequest): + request = spanner_database_admin.AddSplitPointsRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if database is not None: + request.database = database + if split_points is not None: + request.split_points = split_points + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.add_split_points] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("database", request.database),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. 
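+ # With ``retry`` and ``timeout`` left as gapic_v1.method.DEFAULT, the
+ # per-method defaults configured on the transport's wrapped method apply.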
+ response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def create_backup_schedule( + self, + request: Optional[ + Union[gsad_backup_schedule.CreateBackupScheduleRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + backup_schedule: Optional[gsad_backup_schedule.BackupSchedule] = None, + backup_schedule_id: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> gsad_backup_schedule.BackupSchedule: + r"""Creates a new backup schedule. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_create_backup_schedule(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.CreateBackupScheduleRequest( + parent="parent_value", + backup_schedule_id="backup_schedule_id_value", + ) + + # Make the request + response = client.create_backup_schedule(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.CreateBackupScheduleRequest, dict]): + The request object. The request for + [CreateBackupSchedule][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackupSchedule]. + parent (str): + Required. The name of the database + that this backup schedule applies to. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + backup_schedule (google.cloud.spanner_admin_database_v1.types.BackupSchedule): + Required. The backup schedule to + create. + + This corresponds to the ``backup_schedule`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + backup_schedule_id (str): + Required. The Id to use for the backup schedule. The + ``backup_schedule_id`` appended to ``parent`` forms the + full backup schedule name of the form + ``projects//instances//databases//backupSchedules/``. + + This corresponds to the ``backup_schedule_id`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.types.BackupSchedule: + BackupSchedule expresses the + automated backup creation specification + for a Spanner database. Next ID: 10 + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. 
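+ # (Only arguments explicitly passed as non-None are counted as
+ # flattened parameters by this check.)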
+ flattened_params = [parent, backup_schedule, backup_schedule_id] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, gsad_backup_schedule.CreateBackupScheduleRequest): + request = gsad_backup_schedule.CreateBackupScheduleRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + if backup_schedule is not None: + request.backup_schedule = backup_schedule + if backup_schedule_id is not None: + request.backup_schedule_id = backup_schedule_id + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.create_backup_schedule] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def get_backup_schedule( + self, + request: Optional[Union[backup_schedule.GetBackupScheduleRequest, dict]] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> backup_schedule.BackupSchedule: + r"""Gets backup schedule for the input schedule name. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_get_backup_schedule(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.GetBackupScheduleRequest( + name="name_value", + ) + + # Make the request + response = client.get_backup_schedule(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.GetBackupScheduleRequest, dict]): + The request object. The request for + [GetBackupSchedule][google.spanner.admin.database.v1.DatabaseAdmin.GetBackupSchedule]. + name (str): + Required. The name of the schedule to retrieve. Values + are of the form + ``projects//instances//databases//backupSchedules/``. + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. 
+ metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.types.BackupSchedule: + BackupSchedule expresses the + automated backup creation specification + for a Spanner database. Next ID: 10 + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, backup_schedule.GetBackupScheduleRequest): + request = backup_schedule.GetBackupScheduleRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.get_backup_schedule] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def update_backup_schedule( + self, + request: Optional[ + Union[gsad_backup_schedule.UpdateBackupScheduleRequest, dict] + ] = None, + *, + backup_schedule: Optional[gsad_backup_schedule.BackupSchedule] = None, + update_mask: Optional[field_mask_pb2.FieldMask] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> gsad_backup_schedule.BackupSchedule: + r"""Updates a backup schedule. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_update_backup_schedule(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.UpdateBackupScheduleRequest( + ) + + # Make the request + response = client.update_backup_schedule(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.UpdateBackupScheduleRequest, dict]): + The request object. 
The request for + [UpdateBackupScheduleRequest][google.spanner.admin.database.v1.DatabaseAdmin.UpdateBackupSchedule]. + backup_schedule (google.cloud.spanner_admin_database_v1.types.BackupSchedule): + Required. The backup schedule to update. + ``backup_schedule.name``, and the fields to be updated + as specified by ``update_mask`` are required. Other + fields are ignored. + + This corresponds to the ``backup_schedule`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + update_mask (google.protobuf.field_mask_pb2.FieldMask): + Required. A mask specifying which + fields in the BackupSchedule resource + should be updated. This mask is relative + to the BackupSchedule resource, not to + the request message. The field mask must + always be specified; this prevents any + future fields from being erased + accidentally. + + This corresponds to the ``update_mask`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.types.BackupSchedule: + BackupSchedule expresses the + automated backup creation specification + for a Spanner database. Next ID: 10 + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [backup_schedule, update_mask] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, gsad_backup_schedule.UpdateBackupScheduleRequest): + request = gsad_backup_schedule.UpdateBackupScheduleRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if backup_schedule is not None: + request.backup_schedule = backup_schedule + if update_mask is not None: + request.update_mask = update_mask + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.update_backup_schedule] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata( + (("backup_schedule.name", request.backup_schedule.name),) + ), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. 
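+ # The returned BackupSchedule reflects the resource after the server
+ # has applied the fields selected by ``update_mask``.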
+ return response + + def delete_backup_schedule( + self, + request: Optional[ + Union[backup_schedule.DeleteBackupScheduleRequest, dict] + ] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Deletes a backup schedule. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_delete_backup_schedule(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.DeleteBackupScheduleRequest( + name="name_value", + ) + + # Make the request + client.delete_backup_schedule(request=request) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.DeleteBackupScheduleRequest, dict]): + The request object. The request for + [DeleteBackupSchedule][google.spanner.admin.database.v1.DatabaseAdmin.DeleteBackupSchedule]. + name (str): + Required. The name of the schedule to delete. Values are + of the form + ``projects//instances//databases//backupSchedules/``. + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, backup_schedule.DeleteBackupScheduleRequest): + request = backup_schedule.DeleteBackupScheduleRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.delete_backup_schedule] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. 
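+ # DeleteBackupSchedule returns google.protobuf.Empty, so the result is
+ # deliberately discarded and this method returns None.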
+ rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + def list_backup_schedules( + self, + request: Optional[ + Union[backup_schedule.ListBackupSchedulesRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListBackupSchedulesPager: + r"""Lists all the backup schedules for the database. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_list_backup_schedules(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.ListBackupSchedulesRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_backup_schedules(request=request) + + # Handle the response + for response in page_result: + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.ListBackupSchedulesRequest, dict]): + The request object. The request for + [ListBackupSchedules][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupSchedules]. + parent (str): + Required. Database is the parent + resource whose backup schedules should + be listed. Values are of the form + projects//instances//databases/ + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListBackupSchedulesPager: + The response for + [ListBackupSchedules][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupSchedules]. + + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, backup_schedule.ListBackupSchedulesRequest): + request = backup_schedule.ListBackupSchedulesRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. 
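+ # Only ``parent`` is flattened for this method; paging fields such as
+ # ``page_size`` and ``page_token`` must be set on the request itself.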
+ if parent is not None: + request.parent = parent + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.list_backup_schedules] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__iter__` convenience method. + response = pagers.ListBackupSchedulesPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def internal_update_graph_operation( + self, + request: Optional[ + Union[spanner_database_admin.InternalUpdateGraphOperationRequest, dict] + ] = None, + *, + database: Optional[str] = None, + operation_id: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_database_admin.InternalUpdateGraphOperationResponse: + r"""This is an internal API called by Spanner Graph jobs. + You should never need to call this API directly. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_database_v1 + + def sample_internal_update_graph_operation(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.InternalUpdateGraphOperationRequest( + database="database_value", + operation_id="operation_id_value", + vm_identity_token="vm_identity_token_value", + ) + + # Make the request + response = client.internal_update_graph_operation(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_database_v1.types.InternalUpdateGraphOperationRequest, dict]): + The request object. Internal request proto, do not use + directly. + database (str): + Internal field, do not use directly. + This corresponds to the ``database`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + operation_id (str): + Internal field, do not use directly. + This corresponds to the ``operation_id`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ + Returns: + google.cloud.spanner_admin_database_v1.types.InternalUpdateGraphOperationResponse: + Internal response proto, do not use + directly. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [database, operation_id] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance( + request, spanner_database_admin.InternalUpdateGraphOperationRequest + ): + request = spanner_database_admin.InternalUpdateGraphOperationRequest( + request + ) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if database is not None: + request.database = database + if operation_id is not None: + request.operation_id = operation_id + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[ + self._transport.internal_update_graph_operation + ] + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def __enter__(self) -> "DatabaseAdminClient": + return self + + def __exit__(self, type, value, traceback): + """Releases underlying transport's resources. + + .. warning:: + ONLY use as a context manager if the transport is NOT shared + with other clients! Exiting the with block will CLOSE the transport + and may cause errors in other clients! + """ + self.transport.close() + + def list_operations( + self, + request: Optional[operations_pb2.ListOperationsRequest] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.ListOperationsResponse: + r"""Lists operations that match the specified filter in the request. + + Args: + request (:class:`~.operations_pb2.ListOperationsRequest`): + The request object. Request message for + `ListOperations` method. + retry (google.api_core.retry.Retry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + Returns: + ~.operations_pb2.ListOperationsResponse: + Response message for ``ListOperations`` method. + """ + # Create or coerce a protobuf request object. + # The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = operations_pb2.ListOperationsRequest(**request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. 
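+ # This mixin method takes a raw ``operations_pb2`` proto (or a dict
+ # expanded into one) rather than a proto-plus type. A minimal usage
+ # sketch, with a hypothetical operations collection name:
+ #
+ #     response = client.list_operations(
+ #         {"name": "projects/my-project/instances/my-instance/databases/my-db/operations"}
+ #     )
+ #     for op in response.operations:
+ #         print(op.name, op.done)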
+ rpc = self._transport._wrapped_methods[self._transport.list_operations] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + try: + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + except core_exceptions.GoogleAPICallError as e: + self._add_cred_info_for_auth_errors(e) + raise e + + def get_operation( + self, + request: Optional[operations_pb2.GetOperationRequest] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.Operation: + r"""Gets the latest state of a long-running operation. + + Args: + request (:class:`~.operations_pb2.GetOperationRequest`): + The request object. Request message for + `GetOperation` method. + retry (google.api_core.retry.Retry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + Returns: + ~.operations_pb2.Operation: + An ``Operation`` object. + """ + # Create or coerce a protobuf request object. + # The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = operations_pb2.GetOperationRequest(**request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.get_operation] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + try: + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + except core_exceptions.GoogleAPICallError as e: + self._add_cred_info_for_auth_errors(e) + raise e + + def delete_operation( + self, + request: Optional[operations_pb2.DeleteOperationRequest] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Deletes a long-running operation. + + This method indicates that the client is no longer interested + in the operation result. It does not cancel the operation. + If the server doesn't support this method, it returns + `google.rpc.Code.UNIMPLEMENTED`. + + Args: + request (:class:`~.operations_pb2.DeleteOperationRequest`): + The request object. Request message for + `DeleteOperation` method. + retry (google.api_core.retry.Retry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. 
Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + Returns: + None + """ + # Create or coerce a protobuf request object. + # The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = operations_pb2.DeleteOperationRequest(**request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.delete_operation] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + def cancel_operation( + self, + request: Optional[operations_pb2.CancelOperationRequest] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Starts asynchronous cancellation on a long-running operation. + + The server makes a best effort to cancel the operation, but success + is not guaranteed. If the server doesn't support this method, it returns + `google.rpc.Code.UNIMPLEMENTED`. + + Args: + request (:class:`~.operations_pb2.CancelOperationRequest`): + The request object. Request message for + `CancelOperation` method. + retry (google.api_core.retry.Retry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + Returns: + None + """ + # Create or coerce a protobuf request object. + # The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = operations_pb2.CancelOperationRequest(**request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.cancel_operation] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. 
+ rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + +DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=package_version.__version__ +) + +if hasattr(DEFAULT_CLIENT_INFO, "protobuf_runtime_version"): # pragma: NO COVER + DEFAULT_CLIENT_INFO.protobuf_runtime_version = google.protobuf.__version__ + +__all__ = ("DatabaseAdminClient",) diff --git a/google/cloud/spanner_admin_database_v1/services/database_admin/pagers.py b/google/cloud/spanner_admin_database_v1/services/database_admin/pagers.py new file mode 100644 index 0000000000..c9e2e14d52 --- /dev/null +++ b/google/cloud/spanner_admin_database_v1/services/database_admin/pagers.py @@ -0,0 +1,989 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +from google.api_core import gapic_v1 +from google.api_core import retry as retries +from google.api_core import retry_async as retries_async +from typing import ( + Any, + AsyncIterator, + Awaitable, + Callable, + Sequence, + Tuple, + Optional, + Iterator, + Union, +) + +try: + OptionalRetry = Union[retries.Retry, gapic_v1.method._MethodDefault, None] + OptionalAsyncRetry = Union[ + retries_async.AsyncRetry, gapic_v1.method._MethodDefault, None + ] +except AttributeError: # pragma: NO COVER + OptionalRetry = Union[retries.Retry, object, None] # type: ignore + OptionalAsyncRetry = Union[retries_async.AsyncRetry, object, None] # type: ignore + +from google.cloud.spanner_admin_database_v1.types import backup +from google.cloud.spanner_admin_database_v1.types import backup_schedule +from google.cloud.spanner_admin_database_v1.types import spanner_database_admin +from google.longrunning import operations_pb2 # type: ignore + + +class ListDatabasesPager: + """A pager for iterating through ``list_databases`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_database_v1.types.ListDatabasesResponse` object, and + provides an ``__iter__`` method to iterate through its + ``databases`` field. + + If there are more pages, the ``__iter__`` method will make additional + ``ListDatabases`` requests and continue to iterate + through the ``databases`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_database_v1.types.ListDatabasesResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[..., spanner_database_admin.ListDatabasesResponse], + request: spanner_database_admin.ListDatabasesRequest, + response: spanner_database_admin.ListDatabasesResponse, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiate the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. 
+ request (google.cloud.spanner_admin_database_v1.types.ListDatabasesRequest): + The initial request object. + response (google.cloud.spanner_admin_database_v1.types.ListDatabasesResponse): + The initial response object. + retry (google.api_core.retry.Retry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + self._method = method + self._request = spanner_database_admin.ListDatabasesRequest(request) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + def pages(self) -> Iterator[spanner_database_admin.ListDatabasesResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __iter__(self) -> Iterator[spanner_database_admin.Database]: + for page in self.pages: + yield from page.databases + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListDatabasesAsyncPager: + """A pager for iterating through ``list_databases`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_database_v1.types.ListDatabasesResponse` object, and + provides an ``__aiter__`` method to iterate through its + ``databases`` field. + + If there are more pages, the ``__aiter__`` method will make additional + ``ListDatabases`` requests and continue to iterate + through the ``databases`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_database_v1.types.ListDatabasesResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[..., Awaitable[spanner_database_admin.ListDatabasesResponse]], + request: spanner_database_admin.ListDatabasesRequest, + response: spanner_database_admin.ListDatabasesResponse, + *, + retry: OptionalAsyncRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiates the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_database_v1.types.ListDatabasesRequest): + The initial request object. + response (google.cloud.spanner_admin_database_v1.types.ListDatabasesResponse): + The initial response object. + retry (google.api_core.retry.AsyncRetry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
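+
+        Example (an illustrative sketch, not part of the generated
+        docstring; it assumes an initialized ``DatabaseAdminAsyncClient``
+        named ``client`` and an instance path ``instance_name``):
+
+        .. code-block:: python
+
+            # ``list_databases`` returns this pager; ``async for`` walks
+            # every page transparently, fetching more as needed.
+            async for database in await client.list_databases(parent=instance_name):
+                print(database.name)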
+ """ + self._method = method + self._request = spanner_database_admin.ListDatabasesRequest(request) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + async def pages( + self, + ) -> AsyncIterator[spanner_database_admin.ListDatabasesResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = await self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __aiter__(self) -> AsyncIterator[spanner_database_admin.Database]: + async def async_generator(): + async for page in self.pages: + for response in page.databases: + yield response + + return async_generator() + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListBackupsPager: + """A pager for iterating through ``list_backups`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_database_v1.types.ListBackupsResponse` object, and + provides an ``__iter__`` method to iterate through its + ``backups`` field. + + If there are more pages, the ``__iter__`` method will make additional + ``ListBackups`` requests and continue to iterate + through the ``backups`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_database_v1.types.ListBackupsResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[..., backup.ListBackupsResponse], + request: backup.ListBackupsRequest, + response: backup.ListBackupsResponse, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiate the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_database_v1.types.ListBackupsRequest): + The initial request object. + response (google.cloud.spanner_admin_database_v1.types.ListBackupsResponse): + The initial response object. + retry (google.api_core.retry.Retry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
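+
+        Example (an illustrative sketch, not part of the generated
+        docstring; it assumes an initialized ``DatabaseAdminClient`` named
+        ``client`` and an instance path ``instance_name``):
+
+        .. code-block:: python
+
+            # Plain iteration hides page tokens entirely; use the ``pages``
+            # property instead to inspect one response page at a time.
+            for backup_item in client.list_backups(parent=instance_name):
+                print(backup_item.name)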
+ """ + self._method = method + self._request = backup.ListBackupsRequest(request) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + def pages(self) -> Iterator[backup.ListBackupsResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __iter__(self) -> Iterator[backup.Backup]: + for page in self.pages: + yield from page.backups + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListBackupsAsyncPager: + """A pager for iterating through ``list_backups`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_database_v1.types.ListBackupsResponse` object, and + provides an ``__aiter__`` method to iterate through its + ``backups`` field. + + If there are more pages, the ``__aiter__`` method will make additional + ``ListBackups`` requests and continue to iterate + through the ``backups`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_database_v1.types.ListBackupsResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[..., Awaitable[backup.ListBackupsResponse]], + request: backup.ListBackupsRequest, + response: backup.ListBackupsResponse, + *, + retry: OptionalAsyncRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiates the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_database_v1.types.ListBackupsRequest): + The initial request object. + response (google.cloud.spanner_admin_database_v1.types.ListBackupsResponse): + The initial response object. + retry (google.api_core.retry.AsyncRetry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + self._method = method + self._request = backup.ListBackupsRequest(request) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + async def pages(self) -> AsyncIterator[backup.ListBackupsResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = await self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __aiter__(self) -> AsyncIterator[backup.Backup]: + async def async_generator(): + async for page in self.pages: + for response in page.backups: + yield response + + return async_generator() + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListDatabaseOperationsPager: + """A pager for iterating through ``list_database_operations`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsResponse` object, and + provides an ``__iter__`` method to iterate through its + ``operations`` field. + + If there are more pages, the ``__iter__`` method will make additional + ``ListDatabaseOperations`` requests and continue to iterate + through the ``operations`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[..., spanner_database_admin.ListDatabaseOperationsResponse], + request: spanner_database_admin.ListDatabaseOperationsRequest, + response: spanner_database_admin.ListDatabaseOperationsResponse, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiate the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsRequest): + The initial request object. + response (google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsResponse): + The initial response object. + retry (google.api_core.retry.Retry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + self._method = method + self._request = spanner_database_admin.ListDatabaseOperationsRequest(request) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + def pages(self) -> Iterator[spanner_database_admin.ListDatabaseOperationsResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __iter__(self) -> Iterator[operations_pb2.Operation]: + for page in self.pages: + yield from page.operations + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListDatabaseOperationsAsyncPager: + """A pager for iterating through ``list_database_operations`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsResponse` object, and + provides an ``__aiter__`` method to iterate through its + ``operations`` field. + + If there are more pages, the ``__aiter__`` method will make additional + ``ListDatabaseOperations`` requests and continue to iterate + through the ``operations`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[ + ..., Awaitable[spanner_database_admin.ListDatabaseOperationsResponse] + ], + request: spanner_database_admin.ListDatabaseOperationsRequest, + response: spanner_database_admin.ListDatabaseOperationsResponse, + *, + retry: OptionalAsyncRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiates the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsRequest): + The initial request object. + response (google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsResponse): + The initial response object. + retry (google.api_core.retry.AsyncRetry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + self._method = method + self._request = spanner_database_admin.ListDatabaseOperationsRequest(request) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + async def pages( + self, + ) -> AsyncIterator[spanner_database_admin.ListDatabaseOperationsResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = await self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __aiter__(self) -> AsyncIterator[operations_pb2.Operation]: + async def async_generator(): + async for page in self.pages: + for response in page.operations: + yield response + + return async_generator() + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListBackupOperationsPager: + """A pager for iterating through ``list_backup_operations`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_database_v1.types.ListBackupOperationsResponse` object, and + provides an ``__iter__`` method to iterate through its + ``operations`` field. + + If there are more pages, the ``__iter__`` method will make additional + ``ListBackupOperations`` requests and continue to iterate + through the ``operations`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_database_v1.types.ListBackupOperationsResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[..., backup.ListBackupOperationsResponse], + request: backup.ListBackupOperationsRequest, + response: backup.ListBackupOperationsResponse, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiate the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_database_v1.types.ListBackupOperationsRequest): + The initial request object. + response (google.cloud.spanner_admin_database_v1.types.ListBackupOperationsResponse): + The initial response object. + retry (google.api_core.retry.Retry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + self._method = method + self._request = backup.ListBackupOperationsRequest(request) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + def pages(self) -> Iterator[backup.ListBackupOperationsResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __iter__(self) -> Iterator[operations_pb2.Operation]: + for page in self.pages: + yield from page.operations + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListBackupOperationsAsyncPager: + """A pager for iterating through ``list_backup_operations`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_database_v1.types.ListBackupOperationsResponse` object, and + provides an ``__aiter__`` method to iterate through its + ``operations`` field. + + If there are more pages, the ``__aiter__`` method will make additional + ``ListBackupOperations`` requests and continue to iterate + through the ``operations`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_database_v1.types.ListBackupOperationsResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[..., Awaitable[backup.ListBackupOperationsResponse]], + request: backup.ListBackupOperationsRequest, + response: backup.ListBackupOperationsResponse, + *, + retry: OptionalAsyncRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiates the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_database_v1.types.ListBackupOperationsRequest): + The initial request object. + response (google.cloud.spanner_admin_database_v1.types.ListBackupOperationsResponse): + The initial response object. + retry (google.api_core.retry.AsyncRetry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + self._method = method + self._request = backup.ListBackupOperationsRequest(request) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + async def pages(self) -> AsyncIterator[backup.ListBackupOperationsResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = await self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __aiter__(self) -> AsyncIterator[operations_pb2.Operation]: + async def async_generator(): + async for page in self.pages: + for response in page.operations: + yield response + + return async_generator() + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListDatabaseRolesPager: + """A pager for iterating through ``list_database_roles`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_database_v1.types.ListDatabaseRolesResponse` object, and + provides an ``__iter__`` method to iterate through its + ``database_roles`` field. + + If there are more pages, the ``__iter__`` method will make additional + ``ListDatabaseRoles`` requests and continue to iterate + through the ``database_roles`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_database_v1.types.ListDatabaseRolesResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[..., spanner_database_admin.ListDatabaseRolesResponse], + request: spanner_database_admin.ListDatabaseRolesRequest, + response: spanner_database_admin.ListDatabaseRolesResponse, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiate the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_database_v1.types.ListDatabaseRolesRequest): + The initial request object. + response (google.cloud.spanner_admin_database_v1.types.ListDatabaseRolesResponse): + The initial response object. + retry (google.api_core.retry.Retry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + self._method = method + self._request = spanner_database_admin.ListDatabaseRolesRequest(request) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + def pages(self) -> Iterator[spanner_database_admin.ListDatabaseRolesResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __iter__(self) -> Iterator[spanner_database_admin.DatabaseRole]: + for page in self.pages: + yield from page.database_roles + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListDatabaseRolesAsyncPager: + """A pager for iterating through ``list_database_roles`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_database_v1.types.ListDatabaseRolesResponse` object, and + provides an ``__aiter__`` method to iterate through its + ``database_roles`` field. + + If there are more pages, the ``__aiter__`` method will make additional + ``ListDatabaseRoles`` requests and continue to iterate + through the ``database_roles`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_database_v1.types.ListDatabaseRolesResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[ + ..., Awaitable[spanner_database_admin.ListDatabaseRolesResponse] + ], + request: spanner_database_admin.ListDatabaseRolesRequest, + response: spanner_database_admin.ListDatabaseRolesResponse, + *, + retry: OptionalAsyncRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiates the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_database_v1.types.ListDatabaseRolesRequest): + The initial request object. + response (google.cloud.spanner_admin_database_v1.types.ListDatabaseRolesResponse): + The initial response object. + retry (google.api_core.retry.AsyncRetry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + self._method = method + self._request = spanner_database_admin.ListDatabaseRolesRequest(request) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + async def pages( + self, + ) -> AsyncIterator[spanner_database_admin.ListDatabaseRolesResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = await self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __aiter__(self) -> AsyncIterator[spanner_database_admin.DatabaseRole]: + async def async_generator(): + async for page in self.pages: + for response in page.database_roles: + yield response + + return async_generator() + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListBackupSchedulesPager: + """A pager for iterating through ``list_backup_schedules`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_database_v1.types.ListBackupSchedulesResponse` object, and + provides an ``__iter__`` method to iterate through its + ``backup_schedules`` field. + + If there are more pages, the ``__iter__`` method will make additional + ``ListBackupSchedules`` requests and continue to iterate + through the ``backup_schedules`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_database_v1.types.ListBackupSchedulesResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[..., backup_schedule.ListBackupSchedulesResponse], + request: backup_schedule.ListBackupSchedulesRequest, + response: backup_schedule.ListBackupSchedulesResponse, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiate the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_database_v1.types.ListBackupSchedulesRequest): + The initial request object. + response (google.cloud.spanner_admin_database_v1.types.ListBackupSchedulesResponse): + The initial response object. + retry (google.api_core.retry.Retry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + self._method = method + self._request = backup_schedule.ListBackupSchedulesRequest(request) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + def pages(self) -> Iterator[backup_schedule.ListBackupSchedulesResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __iter__(self) -> Iterator[backup_schedule.BackupSchedule]: + for page in self.pages: + yield from page.backup_schedules + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListBackupSchedulesAsyncPager: + """A pager for iterating through ``list_backup_schedules`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_database_v1.types.ListBackupSchedulesResponse` object, and + provides an ``__aiter__`` method to iterate through its + ``backup_schedules`` field. + + If there are more pages, the ``__aiter__`` method will make additional + ``ListBackupSchedules`` requests and continue to iterate + through the ``backup_schedules`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_database_v1.types.ListBackupSchedulesResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[..., Awaitable[backup_schedule.ListBackupSchedulesResponse]], + request: backup_schedule.ListBackupSchedulesRequest, + response: backup_schedule.ListBackupSchedulesResponse, + *, + retry: OptionalAsyncRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiates the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_database_v1.types.ListBackupSchedulesRequest): + The initial request object. + response (google.cloud.spanner_admin_database_v1.types.ListBackupSchedulesResponse): + The initial response object. + retry (google.api_core.retry.AsyncRetry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + self._method = method + self._request = backup_schedule.ListBackupSchedulesRequest(request) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + async def pages(self) -> AsyncIterator[backup_schedule.ListBackupSchedulesResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = await self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __aiter__(self) -> AsyncIterator[backup_schedule.BackupSchedule]: + async def async_generator(): + async for page in self.pages: + for response in page.backup_schedules: + yield response + + return async_generator() + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) diff --git a/google/cloud/spanner_admin_database_v1/services/database_admin/transports/README.rst b/google/cloud/spanner_admin_database_v1/services/database_admin/transports/README.rst new file mode 100644 index 0000000000..f70c023a98 --- /dev/null +++ b/google/cloud/spanner_admin_database_v1/services/database_admin/transports/README.rst @@ -0,0 +1,9 @@ + +transport inheritance structure +_______________________________ + +`DatabaseAdminTransport` is the ABC for all transports. +- public child `DatabaseAdminGrpcTransport` for sync gRPC transport (defined in `grpc.py`). +- public child `DatabaseAdminGrpcAsyncIOTransport` for async gRPC transport (defined in `grpc_asyncio.py`). +- private child `_BaseDatabaseAdminRestTransport` for base REST transport with inner classes `_BaseMETHOD` (defined in `rest_base.py`). +- public child `DatabaseAdminRestTransport` for sync REST transport with inner classes `METHOD` derived from the parent's corresponding `_BaseMETHOD` classes (defined in `rest.py`). diff --git a/google/cloud/spanner_admin_database_v1/services/database_admin/transports/__init__.py b/google/cloud/spanner_admin_database_v1/services/database_admin/transports/__init__.py new file mode 100644 index 0000000000..23ba04ea21 --- /dev/null +++ b/google/cloud/spanner_admin_database_v1/services/database_admin/transports/__init__.py @@ -0,0 +1,38 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +from collections import OrderedDict +from typing import Dict, Type + +from .base import DatabaseAdminTransport +from .grpc import DatabaseAdminGrpcTransport +from .grpc_asyncio import DatabaseAdminGrpcAsyncIOTransport +from .rest import DatabaseAdminRestTransport +from .rest import DatabaseAdminRestInterceptor + + +# Compile a registry of transports. 
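+# The string keys ("grpc", "grpc_asyncio", "rest") are the transport names a
+# caller can select; each maps to the class that implements that transport.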
+_transport_registry = OrderedDict() # type: Dict[str, Type[DatabaseAdminTransport]] +_transport_registry["grpc"] = DatabaseAdminGrpcTransport +_transport_registry["grpc_asyncio"] = DatabaseAdminGrpcAsyncIOTransport +_transport_registry["rest"] = DatabaseAdminRestTransport + +__all__ = ( + "DatabaseAdminTransport", + "DatabaseAdminGrpcTransport", + "DatabaseAdminGrpcAsyncIOTransport", + "DatabaseAdminRestTransport", + "DatabaseAdminRestInterceptor", +) diff --git a/google/cloud/spanner_admin_database_v1/services/database_admin/transports/base.py b/google/cloud/spanner_admin_database_v1/services/database_admin/transports/base.py new file mode 100644 index 0000000000..689f6afe96 --- /dev/null +++ b/google/cloud/spanner_admin_database_v1/services/database_admin/transports/base.py @@ -0,0 +1,837 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +import abc +from typing import Awaitable, Callable, Dict, Optional, Sequence, Union + +from google.cloud.spanner_admin_database_v1 import gapic_version as package_version + +import google.auth # type: ignore +import google.api_core +from google.api_core import exceptions as core_exceptions +from google.api_core import gapic_v1 +from google.api_core import retry as retries +from google.api_core import operations_v1 +from google.auth import credentials as ga_credentials # type: ignore +from google.oauth2 import service_account # type: ignore +import google.protobuf + +from google.cloud.spanner_admin_database_v1.types import backup +from google.cloud.spanner_admin_database_v1.types import backup as gsad_backup +from google.cloud.spanner_admin_database_v1.types import backup_schedule +from google.cloud.spanner_admin_database_v1.types import ( + backup_schedule as gsad_backup_schedule, +) +from google.cloud.spanner_admin_database_v1.types import spanner_database_admin +from google.iam.v1 import iam_policy_pb2 # type: ignore +from google.iam.v1 import policy_pb2 # type: ignore +from google.longrunning import operations_pb2 # type: ignore +from google.protobuf import empty_pb2 # type: ignore + +DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=package_version.__version__ +) + +if hasattr(DEFAULT_CLIENT_INFO, "protobuf_runtime_version"): # pragma: NO COVER + DEFAULT_CLIENT_INFO.protobuf_runtime_version = google.protobuf.__version__ + + +class DatabaseAdminTransport(abc.ABC): + """Abstract transport class for DatabaseAdmin.""" + + AUTH_SCOPES = ( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.admin", + ) + + DEFAULT_HOST: str = "spanner.googleapis.com" + + def __init__( + self, + *, + host: str = DEFAULT_HOST, + credentials: Optional[ga_credentials.Credentials] = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + quota_project_id: Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + always_use_jwt_access: Optional[bool] = False, + api_audience: Optional[str] = 
None, + **kwargs, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): + The hostname to connect to (default: 'spanner.googleapis.com'). + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is mutually exclusive with credentials. + scopes (Optional[Sequence[str]]): A list of scopes. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + always_use_jwt_access (Optional[bool]): Whether self signed JWT should + be used for service account credentials. + """ + + scopes_kwargs = {"scopes": scopes, "default_scopes": self.AUTH_SCOPES} + + # Save the scopes. + self._scopes = scopes + if not hasattr(self, "_ignore_credentials"): + self._ignore_credentials: bool = False + + # If no credentials are provided, then determine the appropriate + # defaults. + if credentials and credentials_file: + raise core_exceptions.DuplicateCredentialArgs( + "'credentials_file' and 'credentials' are mutually exclusive" + ) + + if credentials_file is not None: + credentials, _ = google.auth.load_credentials_from_file( + credentials_file, **scopes_kwargs, quota_project_id=quota_project_id + ) + elif credentials is None and not self._ignore_credentials: + credentials, _ = google.auth.default( + **scopes_kwargs, quota_project_id=quota_project_id + ) + # Don't apply audience if the credentials file passed from user. + if hasattr(credentials, "with_gdch_audience"): + credentials = credentials.with_gdch_audience( + api_audience if api_audience else host + ) + + # If the credentials are service account credentials, then always try to use self signed JWT. + if ( + always_use_jwt_access + and isinstance(credentials, service_account.Credentials) + and hasattr(service_account.Credentials, "with_always_use_jwt_access") + ): + credentials = credentials.with_always_use_jwt_access(True) + + # Save the credentials. + self._credentials = credentials + + # Save the hostname. Default to port 443 (HTTPS) if none is specified. + if ":" not in host: + host += ":443" + self._host = host + + @property + def host(self): + return self._host + + def _prep_wrapped_messages(self, client_info): + # Precompute the wrapped methods. 
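+        # A note on the retry defaults below: ``retries.Retry`` applies
+        # exponential backoff, starting at ``initial`` seconds and growing by
+        # ``multiplier`` per attempt up to ``maximum``, with ``deadline``
+        # bounding the total time spent retrying; only the exception types
+        # listed in ``predicate`` are considered retryable.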
+ self._wrapped_methods = { + self.list_databases: gapic_v1.method.wrap_method( + self.list_databases, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.create_database: gapic_v1.method.wrap_method( + self.create_database, + default_timeout=3600.0, + client_info=client_info, + ), + self.get_database: gapic_v1.method.wrap_method( + self.get_database, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.update_database: gapic_v1.method.wrap_method( + self.update_database, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.update_database_ddl: gapic_v1.method.wrap_method( + self.update_database_ddl, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.drop_database: gapic_v1.method.wrap_method( + self.drop_database, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.get_database_ddl: gapic_v1.method.wrap_method( + self.get_database_ddl, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.set_iam_policy: gapic_v1.method.wrap_method( + self.set_iam_policy, + default_timeout=30.0, + client_info=client_info, + ), + self.get_iam_policy: gapic_v1.method.wrap_method( + self.get_iam_policy, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.test_iam_permissions: gapic_v1.method.wrap_method( + self.test_iam_permissions, + default_timeout=30.0, + client_info=client_info, + ), + self.create_backup: gapic_v1.method.wrap_method( + self.create_backup, + default_timeout=3600.0, + client_info=client_info, + ), + self.copy_backup: gapic_v1.method.wrap_method( + self.copy_backup, + default_timeout=3600.0, + client_info=client_info, + ), + self.get_backup: gapic_v1.method.wrap_method( + self.get_backup, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.update_backup: 
gapic_v1.method.wrap_method( + self.update_backup, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.delete_backup: gapic_v1.method.wrap_method( + self.delete_backup, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.list_backups: gapic_v1.method.wrap_method( + self.list_backups, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.restore_database: gapic_v1.method.wrap_method( + self.restore_database, + default_timeout=3600.0, + client_info=client_info, + ), + self.list_database_operations: gapic_v1.method.wrap_method( + self.list_database_operations, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.list_backup_operations: gapic_v1.method.wrap_method( + self.list_backup_operations, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.list_database_roles: gapic_v1.method.wrap_method( + self.list_database_roles, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.add_split_points: gapic_v1.method.wrap_method( + self.add_split_points, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.create_backup_schedule: gapic_v1.method.wrap_method( + self.create_backup_schedule, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.get_backup_schedule: gapic_v1.method.wrap_method( + self.get_backup_schedule, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.update_backup_schedule: gapic_v1.method.wrap_method( + self.update_backup_schedule, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + 
predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.delete_backup_schedule: gapic_v1.method.wrap_method( + self.delete_backup_schedule, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.list_backup_schedules: gapic_v1.method.wrap_method( + self.list_backup_schedules, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.internal_update_graph_operation: gapic_v1.method.wrap_method( + self.internal_update_graph_operation, + default_timeout=None, + client_info=client_info, + ), + self.cancel_operation: gapic_v1.method.wrap_method( + self.cancel_operation, + default_timeout=None, + client_info=client_info, + ), + self.delete_operation: gapic_v1.method.wrap_method( + self.delete_operation, + default_timeout=None, + client_info=client_info, + ), + self.get_operation: gapic_v1.method.wrap_method( + self.get_operation, + default_timeout=None, + client_info=client_info, + ), + self.list_operations: gapic_v1.method.wrap_method( + self.list_operations, + default_timeout=None, + client_info=client_info, + ), + } + + def close(self): + """Closes resources associated with the transport. + + .. warning:: + Only call this method if the transport is NOT shared + with other clients - this may cause errors in other clients! 
+ """ + raise NotImplementedError() + + @property + def operations_client(self): + """Return the client designed to process long-running operations.""" + raise NotImplementedError() + + @property + def list_databases( + self, + ) -> Callable[ + [spanner_database_admin.ListDatabasesRequest], + Union[ + spanner_database_admin.ListDatabasesResponse, + Awaitable[spanner_database_admin.ListDatabasesResponse], + ], + ]: + raise NotImplementedError() + + @property + def create_database( + self, + ) -> Callable[ + [spanner_database_admin.CreateDatabaseRequest], + Union[operations_pb2.Operation, Awaitable[operations_pb2.Operation]], + ]: + raise NotImplementedError() + + @property + def get_database( + self, + ) -> Callable[ + [spanner_database_admin.GetDatabaseRequest], + Union[ + spanner_database_admin.Database, Awaitable[spanner_database_admin.Database] + ], + ]: + raise NotImplementedError() + + @property + def update_database( + self, + ) -> Callable[ + [spanner_database_admin.UpdateDatabaseRequest], + Union[operations_pb2.Operation, Awaitable[operations_pb2.Operation]], + ]: + raise NotImplementedError() + + @property + def update_database_ddl( + self, + ) -> Callable[ + [spanner_database_admin.UpdateDatabaseDdlRequest], + Union[operations_pb2.Operation, Awaitable[operations_pb2.Operation]], + ]: + raise NotImplementedError() + + @property + def drop_database( + self, + ) -> Callable[ + [spanner_database_admin.DropDatabaseRequest], + Union[empty_pb2.Empty, Awaitable[empty_pb2.Empty]], + ]: + raise NotImplementedError() + + @property + def get_database_ddl( + self, + ) -> Callable[ + [spanner_database_admin.GetDatabaseDdlRequest], + Union[ + spanner_database_admin.GetDatabaseDdlResponse, + Awaitable[spanner_database_admin.GetDatabaseDdlResponse], + ], + ]: + raise NotImplementedError() + + @property + def set_iam_policy( + self, + ) -> Callable[ + [iam_policy_pb2.SetIamPolicyRequest], + Union[policy_pb2.Policy, Awaitable[policy_pb2.Policy]], + ]: + raise NotImplementedError() + + @property + def get_iam_policy( + self, + ) -> Callable[ + [iam_policy_pb2.GetIamPolicyRequest], + Union[policy_pb2.Policy, Awaitable[policy_pb2.Policy]], + ]: + raise NotImplementedError() + + @property + def test_iam_permissions( + self, + ) -> Callable[ + [iam_policy_pb2.TestIamPermissionsRequest], + Union[ + iam_policy_pb2.TestIamPermissionsResponse, + Awaitable[iam_policy_pb2.TestIamPermissionsResponse], + ], + ]: + raise NotImplementedError() + + @property + def create_backup( + self, + ) -> Callable[ + [gsad_backup.CreateBackupRequest], + Union[operations_pb2.Operation, Awaitable[operations_pb2.Operation]], + ]: + raise NotImplementedError() + + @property + def copy_backup( + self, + ) -> Callable[ + [backup.CopyBackupRequest], + Union[operations_pb2.Operation, Awaitable[operations_pb2.Operation]], + ]: + raise NotImplementedError() + + @property + def get_backup( + self, + ) -> Callable[ + [backup.GetBackupRequest], Union[backup.Backup, Awaitable[backup.Backup]] + ]: + raise NotImplementedError() + + @property + def update_backup( + self, + ) -> Callable[ + [gsad_backup.UpdateBackupRequest], + Union[gsad_backup.Backup, Awaitable[gsad_backup.Backup]], + ]: + raise NotImplementedError() + + @property + def delete_backup( + self, + ) -> Callable[ + [backup.DeleteBackupRequest], Union[empty_pb2.Empty, Awaitable[empty_pb2.Empty]] + ]: + raise NotImplementedError() + + @property + def list_backups( + self, + ) -> Callable[ + [backup.ListBackupsRequest], + Union[backup.ListBackupsResponse, 
Awaitable[backup.ListBackupsResponse]], + ]: + raise NotImplementedError() + + @property + def restore_database( + self, + ) -> Callable[ + [spanner_database_admin.RestoreDatabaseRequest], + Union[operations_pb2.Operation, Awaitable[operations_pb2.Operation]], + ]: + raise NotImplementedError() + + @property + def list_database_operations( + self, + ) -> Callable[ + [spanner_database_admin.ListDatabaseOperationsRequest], + Union[ + spanner_database_admin.ListDatabaseOperationsResponse, + Awaitable[spanner_database_admin.ListDatabaseOperationsResponse], + ], + ]: + raise NotImplementedError() + + @property + def list_backup_operations( + self, + ) -> Callable[ + [backup.ListBackupOperationsRequest], + Union[ + backup.ListBackupOperationsResponse, + Awaitable[backup.ListBackupOperationsResponse], + ], + ]: + raise NotImplementedError() + + @property + def list_database_roles( + self, + ) -> Callable[ + [spanner_database_admin.ListDatabaseRolesRequest], + Union[ + spanner_database_admin.ListDatabaseRolesResponse, + Awaitable[spanner_database_admin.ListDatabaseRolesResponse], + ], + ]: + raise NotImplementedError() + + @property + def add_split_points( + self, + ) -> Callable[ + [spanner_database_admin.AddSplitPointsRequest], + Union[ + spanner_database_admin.AddSplitPointsResponse, + Awaitable[spanner_database_admin.AddSplitPointsResponse], + ], + ]: + raise NotImplementedError() + + @property + def create_backup_schedule( + self, + ) -> Callable[ + [gsad_backup_schedule.CreateBackupScheduleRequest], + Union[ + gsad_backup_schedule.BackupSchedule, + Awaitable[gsad_backup_schedule.BackupSchedule], + ], + ]: + raise NotImplementedError() + + @property + def get_backup_schedule( + self, + ) -> Callable[ + [backup_schedule.GetBackupScheduleRequest], + Union[ + backup_schedule.BackupSchedule, Awaitable[backup_schedule.BackupSchedule] + ], + ]: + raise NotImplementedError() + + @property + def update_backup_schedule( + self, + ) -> Callable[ + [gsad_backup_schedule.UpdateBackupScheduleRequest], + Union[ + gsad_backup_schedule.BackupSchedule, + Awaitable[gsad_backup_schedule.BackupSchedule], + ], + ]: + raise NotImplementedError() + + @property + def delete_backup_schedule( + self, + ) -> Callable[ + [backup_schedule.DeleteBackupScheduleRequest], + Union[empty_pb2.Empty, Awaitable[empty_pb2.Empty]], + ]: + raise NotImplementedError() + + @property + def list_backup_schedules( + self, + ) -> Callable[ + [backup_schedule.ListBackupSchedulesRequest], + Union[ + backup_schedule.ListBackupSchedulesResponse, + Awaitable[backup_schedule.ListBackupSchedulesResponse], + ], + ]: + raise NotImplementedError() + + @property + def internal_update_graph_operation( + self, + ) -> Callable[ + [spanner_database_admin.InternalUpdateGraphOperationRequest], + Union[ + spanner_database_admin.InternalUpdateGraphOperationResponse, + Awaitable[spanner_database_admin.InternalUpdateGraphOperationResponse], + ], + ]: + raise NotImplementedError() + + @property + def list_operations( + self, + ) -> Callable[ + [operations_pb2.ListOperationsRequest], + Union[ + operations_pb2.ListOperationsResponse, + Awaitable[operations_pb2.ListOperationsResponse], + ], + ]: + raise NotImplementedError() + + @property + def get_operation( + self, + ) -> Callable[ + [operations_pb2.GetOperationRequest], + Union[operations_pb2.Operation, Awaitable[operations_pb2.Operation]], + ]: + raise NotImplementedError() + + @property + def cancel_operation( + self, + ) -> Callable[[operations_pb2.CancelOperationRequest], None,]: + raise 
NotImplementedError() + + @property + def delete_operation( + self, + ) -> Callable[[operations_pb2.DeleteOperationRequest], None,]: + raise NotImplementedError() + + @property + def kind(self) -> str: + raise NotImplementedError() + + +__all__ = ("DatabaseAdminTransport",) diff --git a/google/cloud/spanner_admin_database_v1/services/database_admin/transports/grpc.py b/google/cloud/spanner_admin_database_v1/services/database_admin/transports/grpc.py new file mode 100644 index 0000000000..8f31a1fb98 --- /dev/null +++ b/google/cloud/spanner_admin_database_v1/services/database_admin/transports/grpc.py @@ -0,0 +1,1336 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +import json +import logging as std_logging +import pickle +import warnings +from typing import Callable, Dict, Optional, Sequence, Tuple, Union + +from google.api_core import grpc_helpers +from google.api_core import operations_v1 +from google.api_core import gapic_v1 +import google.auth # type: ignore +from google.auth import credentials as ga_credentials # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore +from google.protobuf.json_format import MessageToJson +import google.protobuf.message + +import grpc # type: ignore +import proto # type: ignore + +from google.cloud.spanner_admin_database_v1.types import backup +from google.cloud.spanner_admin_database_v1.types import backup as gsad_backup +from google.cloud.spanner_admin_database_v1.types import backup_schedule +from google.cloud.spanner_admin_database_v1.types import ( + backup_schedule as gsad_backup_schedule, +) +from google.cloud.spanner_admin_database_v1.types import spanner_database_admin +from google.iam.v1 import iam_policy_pb2 # type: ignore +from google.iam.v1 import policy_pb2 # type: ignore +from google.longrunning import operations_pb2 # type: ignore +from google.protobuf import empty_pb2 # type: ignore +from .base import DatabaseAdminTransport, DEFAULT_CLIENT_INFO + +try: + from google.api_core import client_logging # type: ignore + + CLIENT_LOGGING_SUPPORTED = True # pragma: NO COVER +except ImportError: # pragma: NO COVER + CLIENT_LOGGING_SUPPORTED = False + +_LOGGER = std_logging.getLogger(__name__) + + +class _LoggingClientInterceptor(grpc.UnaryUnaryClientInterceptor): # pragma: NO COVER + def intercept_unary_unary(self, continuation, client_call_details, request): + logging_enabled = CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + std_logging.DEBUG + ) + if logging_enabled: # pragma: NO COVER + request_metadata = client_call_details.metadata + if isinstance(request, proto.Message): + request_payload = type(request).to_json(request) + elif isinstance(request, google.protobuf.message.Message): + request_payload = MessageToJson(request) + else: + request_payload = f"{type(request).__name__}: {pickle.dumps(request)}" + + request_metadata = { + key: value.decode("utf-8") if isinstance(value, bytes) else value + for key, value in request_metadata + } + grpc_request = { + 
"payload": request_payload, + "requestMethod": "grpc", + "metadata": dict(request_metadata), + } + _LOGGER.debug( + f"Sending request for {client_call_details.method}", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": str(client_call_details.method), + "request": grpc_request, + "metadata": grpc_request["metadata"], + }, + ) + response = continuation(client_call_details, request) + if logging_enabled: # pragma: NO COVER + response_metadata = response.trailing_metadata() + # Convert gRPC metadata `` to list of tuples + metadata = ( + dict([(k, str(v)) for k, v in response_metadata]) + if response_metadata + else None + ) + result = response.result() + if isinstance(result, proto.Message): + response_payload = type(result).to_json(result) + elif isinstance(result, google.protobuf.message.Message): + response_payload = MessageToJson(result) + else: + response_payload = f"{type(result).__name__}: {pickle.dumps(result)}" + grpc_response = { + "payload": response_payload, + "metadata": metadata, + "status": "OK", + } + _LOGGER.debug( + f"Received response for {client_call_details.method}.", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": client_call_details.method, + "response": grpc_response, + "metadata": grpc_response["metadata"], + }, + ) + return response + + +class DatabaseAdminGrpcTransport(DatabaseAdminTransport): + """gRPC backend transport for DatabaseAdmin. + + Cloud Spanner Database Admin API + + The Cloud Spanner Database Admin API can be used to: + + - create, drop, and list databases + - update the schema of pre-existing databases + - create, delete, copy and list backups for a database + - restore a database from an existing backup + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. + + It sends protocol buffers over the wire using gRPC (which is built on + top of HTTP/2); the ``grpcio`` package must be installed. + """ + + _stubs: Dict[str, Callable] + + def __init__( + self, + *, + host: str = "spanner.googleapis.com", + credentials: Optional[ga_credentials.Credentials] = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + channel: Optional[Union[grpc.Channel, Callable[..., grpc.Channel]]] = None, + api_mtls_endpoint: Optional[str] = None, + client_cert_source: Optional[Callable[[], Tuple[bytes, bytes]]] = None, + ssl_channel_credentials: Optional[grpc.ChannelCredentials] = None, + client_cert_source_for_mtls: Optional[Callable[[], Tuple[bytes, bytes]]] = None, + quota_project_id: Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + always_use_jwt_access: Optional[bool] = False, + api_audience: Optional[str] = None, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): + The hostname to connect to (default: 'spanner.googleapis.com'). + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + This argument is ignored if a ``channel`` instance is provided. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is ignored if a ``channel`` instance is provided. 
+ scopes (Optional(Sequence[str])): A list of scopes. This argument is + ignored if a ``channel`` instance is provided. + channel (Optional[Union[grpc.Channel, Callable[..., grpc.Channel]]]): + A ``Channel`` instance through which to make calls, or a Callable + that constructs and returns one. If set to None, ``self.create_channel`` + is used to create the channel. If a Callable is given, it will be called + with the same arguments as used in ``self.create_channel``. + api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint. + If provided, it overrides the ``host`` argument and tries to create + a mutual TLS channel with client SSL credentials from + ``client_cert_source`` or application default SSL credentials. + client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]): + Deprecated. A callback to provide client SSL certificate bytes and + private key bytes, both in PEM format. It is ignored if + ``api_mtls_endpoint`` is None. + ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials + for the grpc channel. It is ignored if a ``channel`` instance is provided. + client_cert_source_for_mtls (Optional[Callable[[], Tuple[bytes, bytes]]]): + A callback to provide client certificate bytes and private key bytes, + both in PEM format. It is used to configure a mutual TLS channel. It is + ignored if a ``channel`` instance or ``ssl_channel_credentials`` is provided. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + always_use_jwt_access (Optional[bool]): Whether self signed JWT should + be used for service account credentials. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport + creation failed for any reason. + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + self._grpc_channel = None + self._ssl_channel_credentials = ssl_channel_credentials + self._stubs: Dict[str, Callable] = {} + self._operations_client: Optional[operations_v1.OperationsClient] = None + + if api_mtls_endpoint: + warnings.warn("api_mtls_endpoint is deprecated", DeprecationWarning) + if client_cert_source: + warnings.warn("client_cert_source is deprecated", DeprecationWarning) + + if isinstance(channel, grpc.Channel): + # Ignore credentials if a channel was passed. + credentials = None + self._ignore_credentials = True + # If a channel was explicitly provided, set it. + self._grpc_channel = channel + self._ssl_channel_credentials = None + + else: + if api_mtls_endpoint: + host = api_mtls_endpoint + + # Create SSL credentials with client_cert_source or application + # default SSL credentials. 
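# --- Editor's sketch (not part of this diff): the branch just below builds
# grpc.ssl_channel_credentials() from a client-certificate callback. A caller
# can exercise the same mutual-TLS path by passing client_cert_source_for_mtls
# when constructing the transport. A minimal sketch, assuming application
# default credentials are available; the PEM file paths are hypothetical.
from google.cloud.spanner_admin_database_v1.services.database_admin.transports import (
    DatabaseAdminGrpcTransport,
)


def make_mtls_transport(cert_path: str, key_path: str) -> DatabaseAdminGrpcTransport:
    # Read the client certificate and private key bytes (PEM format).
    with open(cert_path, "rb") as cert_file:
        cert = cert_file.read()
    with open(key_path, "rb") as key_file:
        key = key_file.read()
    # The transport invokes this callback and feeds the (cert, key) pair to
    # grpc.ssl_channel_credentials(), mirroring the code path below.
    return DatabaseAdminGrpcTransport(
        client_cert_source_for_mtls=lambda: (cert, key),
    )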
+                if client_cert_source:
+                    cert, key = client_cert_source()
+                    self._ssl_channel_credentials = grpc.ssl_channel_credentials(
+                        certificate_chain=cert, private_key=key
+                    )
+                else:
+                    self._ssl_channel_credentials = SslCredentials().ssl_credentials
+
+            else:
+                if client_cert_source_for_mtls and not ssl_channel_credentials:
+                    cert, key = client_cert_source_for_mtls()
+                    self._ssl_channel_credentials = grpc.ssl_channel_credentials(
+                        certificate_chain=cert, private_key=key
+                    )
+
+        # The base transport sets the host, credentials and scopes
+        super().__init__(
+            host=host,
+            credentials=credentials,
+            credentials_file=credentials_file,
+            scopes=scopes,
+            quota_project_id=quota_project_id,
+            client_info=client_info,
+            always_use_jwt_access=always_use_jwt_access,
+            api_audience=api_audience,
+        )
+
+        if not self._grpc_channel:
+            # initialize with the provided callable or the default channel
+            channel_init = channel or type(self).create_channel
+            self._grpc_channel = channel_init(
+                self._host,
+                # use the credentials which are saved
+                credentials=self._credentials,
+                # Set ``credentials_file`` to ``None`` here as
+                # the credentials that we saved earlier should be used.
+                credentials_file=None,
+                scopes=self._scopes,
+                ssl_credentials=self._ssl_channel_credentials,
+                quota_project_id=quota_project_id,
+                options=[
+                    ("grpc.max_send_message_length", -1),
+                    ("grpc.max_receive_message_length", -1),
+                ],
+            )
+
+        self._interceptor = _LoggingClientInterceptor()
+        self._logged_channel = grpc.intercept_channel(
+            self._grpc_channel, self._interceptor
+        )
+
+        # Wrap messages. This must be done after self._logged_channel exists
+        self._prep_wrapped_messages(client_info)
+
+    @classmethod
+    def create_channel(
+        cls,
+        host: str = "spanner.googleapis.com",
+        credentials: Optional[ga_credentials.Credentials] = None,
+        credentials_file: Optional[str] = None,
+        scopes: Optional[Sequence[str]] = None,
+        quota_project_id: Optional[str] = None,
+        **kwargs,
+    ) -> grpc.Channel:
+        """Create and return a gRPC channel object.
+        Args:
+            host (Optional[str]): The host for the channel to use.
+            credentials (Optional[~.Credentials]): The
+                authorization credentials to attach to requests. These
+                credentials identify this application to the service. If
+                none are specified, the client will attempt to ascertain
+                the credentials from the environment.
+            credentials_file (Optional[str]): A file with credentials that can
+                be loaded with :func:`google.auth.load_credentials_from_file`.
+                This argument is mutually exclusive with credentials.
+            scopes (Optional[Sequence[str]]): An optional list of scopes needed for this
+                service. These are only used when credentials are not specified and
+                are passed to :func:`google.auth.default`.
+            quota_project_id (Optional[str]): An optional project to use for billing
+                and quota.
+            kwargs (Optional[dict]): Keyword arguments, which are passed to the
+                channel creation.
+        Returns:
+            grpc.Channel: A gRPC channel object.
+
+        Raises:
+            google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials``
+                and ``credentials_file`` are passed.
+ """ + + return grpc_helpers.create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + quota_project_id=quota_project_id, + default_scopes=cls.AUTH_SCOPES, + scopes=scopes, + default_host=cls.DEFAULT_HOST, + **kwargs, + ) + + @property + def grpc_channel(self) -> grpc.Channel: + """Return the channel designed to connect to this service.""" + return self._grpc_channel + + @property + def operations_client(self) -> operations_v1.OperationsClient: + """Create the client designed to process long-running operations. + + This property caches on the instance; repeated calls return the same + client. + """ + # Quick check: Only create a new client if we do not already have one. + if self._operations_client is None: + self._operations_client = operations_v1.OperationsClient( + self._logged_channel + ) + + # Return the client from cache. + return self._operations_client + + @property + def list_databases( + self, + ) -> Callable[ + [spanner_database_admin.ListDatabasesRequest], + spanner_database_admin.ListDatabasesResponse, + ]: + r"""Return a callable for the list databases method over gRPC. + + Lists Cloud Spanner databases. + + Returns: + Callable[[~.ListDatabasesRequest], + ~.ListDatabasesResponse]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "list_databases" not in self._stubs: + self._stubs["list_databases"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/ListDatabases", + request_serializer=spanner_database_admin.ListDatabasesRequest.serialize, + response_deserializer=spanner_database_admin.ListDatabasesResponse.deserialize, + ) + return self._stubs["list_databases"] + + @property + def create_database( + self, + ) -> Callable[ + [spanner_database_admin.CreateDatabaseRequest], operations_pb2.Operation + ]: + r"""Return a callable for the create database method over gRPC. + + Creates a new Cloud Spanner database and starts to prepare it + for serving. The returned [long-running + operation][google.longrunning.Operation] will have a name of the + format ``/operations/`` and can be + used to track preparation of the database. The + [metadata][google.longrunning.Operation.metadata] field type is + [CreateDatabaseMetadata][google.spanner.admin.database.v1.CreateDatabaseMetadata]. + The [response][google.longrunning.Operation.response] field type + is [Database][google.spanner.admin.database.v1.Database], if + successful. + + Returns: + Callable[[~.CreateDatabaseRequest], + ~.Operation]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "create_database" not in self._stubs: + self._stubs["create_database"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/CreateDatabase", + request_serializer=spanner_database_admin.CreateDatabaseRequest.serialize, + response_deserializer=operations_pb2.Operation.FromString, + ) + return self._stubs["create_database"] + + @property + def get_database( + self, + ) -> Callable[ + [spanner_database_admin.GetDatabaseRequest], spanner_database_admin.Database + ]: + r"""Return a callable for the get database method over gRPC. + + Gets the state of a Cloud Spanner database. + + Returns: + Callable[[~.GetDatabaseRequest], + ~.Database]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "get_database" not in self._stubs: + self._stubs["get_database"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/GetDatabase", + request_serializer=spanner_database_admin.GetDatabaseRequest.serialize, + response_deserializer=spanner_database_admin.Database.deserialize, + ) + return self._stubs["get_database"] + + @property + def update_database( + self, + ) -> Callable[ + [spanner_database_admin.UpdateDatabaseRequest], operations_pb2.Operation + ]: + r"""Return a callable for the update database method over gRPC. + + Updates a Cloud Spanner database. The returned [long-running + operation][google.longrunning.Operation] can be used to track + the progress of updating the database. If the named database + does not exist, returns ``NOT_FOUND``. + + While the operation is pending: + + - The database's + [reconciling][google.spanner.admin.database.v1.Database.reconciling] + field is set to true. + - Cancelling the operation is best-effort. If the cancellation + succeeds, the operation metadata's + [cancel_time][google.spanner.admin.database.v1.UpdateDatabaseMetadata.cancel_time] + is set, the updates are reverted, and the operation terminates + with a ``CANCELLED`` status. + - New UpdateDatabase requests will return a + ``FAILED_PRECONDITION`` error until the pending operation is + done (returns successfully or with error). + - Reading the database via the API continues to give the + pre-request values. + + Upon completion of the returned operation: + + - The new values are in effect and readable via the API. + - The database's + [reconciling][google.spanner.admin.database.v1.Database.reconciling] + field becomes false. + + The returned [long-running + operation][google.longrunning.Operation] will have a name of the + format + ``projects//instances//databases//operations/`` + and can be used to track the database modification. The + [metadata][google.longrunning.Operation.metadata] field type is + [UpdateDatabaseMetadata][google.spanner.admin.database.v1.UpdateDatabaseMetadata]. + The [response][google.longrunning.Operation.response] field type + is [Database][google.spanner.admin.database.v1.Database], if + successful. + + Returns: + Callable[[~.UpdateDatabaseRequest], + ~.Operation]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "update_database" not in self._stubs: + self._stubs["update_database"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/UpdateDatabase", + request_serializer=spanner_database_admin.UpdateDatabaseRequest.serialize, + response_deserializer=operations_pb2.Operation.FromString, + ) + return self._stubs["update_database"] + + @property + def update_database_ddl( + self, + ) -> Callable[ + [spanner_database_admin.UpdateDatabaseDdlRequest], operations_pb2.Operation + ]: + r"""Return a callable for the update database ddl method over gRPC. + + Updates the schema of a Cloud Spanner database by + creating/altering/dropping tables, columns, indexes, etc. The + returned [long-running operation][google.longrunning.Operation] + will have a name of the format + ``/operations/`` and can be used to + track execution of the schema change(s). The + [metadata][google.longrunning.Operation.metadata] field type is + [UpdateDatabaseDdlMetadata][google.spanner.admin.database.v1.UpdateDatabaseDdlMetadata]. + The operation has no response. + + Returns: + Callable[[~.UpdateDatabaseDdlRequest], + ~.Operation]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "update_database_ddl" not in self._stubs: + self._stubs["update_database_ddl"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/UpdateDatabaseDdl", + request_serializer=spanner_database_admin.UpdateDatabaseDdlRequest.serialize, + response_deserializer=operations_pb2.Operation.FromString, + ) + return self._stubs["update_database_ddl"] + + @property + def drop_database( + self, + ) -> Callable[[spanner_database_admin.DropDatabaseRequest], empty_pb2.Empty]: + r"""Return a callable for the drop database method over gRPC. + + Drops (aka deletes) a Cloud Spanner database. Completed backups + for the database will be retained according to their + ``expire_time``. Note: Cloud Spanner might continue to accept + requests for a few seconds after the database has been deleted. + + Returns: + Callable[[~.DropDatabaseRequest], + ~.Empty]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "drop_database" not in self._stubs: + self._stubs["drop_database"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/DropDatabase", + request_serializer=spanner_database_admin.DropDatabaseRequest.serialize, + response_deserializer=empty_pb2.Empty.FromString, + ) + return self._stubs["drop_database"] + + @property + def get_database_ddl( + self, + ) -> Callable[ + [spanner_database_admin.GetDatabaseDdlRequest], + spanner_database_admin.GetDatabaseDdlResponse, + ]: + r"""Return a callable for the get database ddl method over gRPC. + + Returns the schema of a Cloud Spanner database as a list of + formatted DDL statements. This method does not show pending + schema updates, those may be queried using the + [Operations][google.longrunning.Operations] API. + + Returns: + Callable[[~.GetDatabaseDdlRequest], + ~.GetDatabaseDdlResponse]: + A function that, when called, will call the underlying RPC + on the server. 
+ """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "get_database_ddl" not in self._stubs: + self._stubs["get_database_ddl"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/GetDatabaseDdl", + request_serializer=spanner_database_admin.GetDatabaseDdlRequest.serialize, + response_deserializer=spanner_database_admin.GetDatabaseDdlResponse.deserialize, + ) + return self._stubs["get_database_ddl"] + + @property + def set_iam_policy( + self, + ) -> Callable[[iam_policy_pb2.SetIamPolicyRequest], policy_pb2.Policy]: + r"""Return a callable for the set iam policy method over gRPC. + + Sets the access control policy on a database or backup resource. + Replaces any existing policy. + + Authorization requires ``spanner.databases.setIamPolicy`` + permission on + [resource][google.iam.v1.SetIamPolicyRequest.resource]. For + backups, authorization requires ``spanner.backups.setIamPolicy`` + permission on + [resource][google.iam.v1.SetIamPolicyRequest.resource]. + + Returns: + Callable[[~.SetIamPolicyRequest], + ~.Policy]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "set_iam_policy" not in self._stubs: + self._stubs["set_iam_policy"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/SetIamPolicy", + request_serializer=iam_policy_pb2.SetIamPolicyRequest.SerializeToString, + response_deserializer=policy_pb2.Policy.FromString, + ) + return self._stubs["set_iam_policy"] + + @property + def get_iam_policy( + self, + ) -> Callable[[iam_policy_pb2.GetIamPolicyRequest], policy_pb2.Policy]: + r"""Return a callable for the get iam policy method over gRPC. + + Gets the access control policy for a database or backup + resource. Returns an empty policy if a database or backup exists + but does not have a policy set. + + Authorization requires ``spanner.databases.getIamPolicy`` + permission on + [resource][google.iam.v1.GetIamPolicyRequest.resource]. For + backups, authorization requires ``spanner.backups.getIamPolicy`` + permission on + [resource][google.iam.v1.GetIamPolicyRequest.resource]. + + Returns: + Callable[[~.GetIamPolicyRequest], + ~.Policy]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "get_iam_policy" not in self._stubs: + self._stubs["get_iam_policy"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/GetIamPolicy", + request_serializer=iam_policy_pb2.GetIamPolicyRequest.SerializeToString, + response_deserializer=policy_pb2.Policy.FromString, + ) + return self._stubs["get_iam_policy"] + + @property + def test_iam_permissions( + self, + ) -> Callable[ + [iam_policy_pb2.TestIamPermissionsRequest], + iam_policy_pb2.TestIamPermissionsResponse, + ]: + r"""Return a callable for the test iam permissions method over gRPC. + + Returns permissions that the caller has on the specified + database or backup resource. 
+ + Attempting this RPC on a non-existent Cloud Spanner database + will result in a NOT_FOUND error if the user has + ``spanner.databases.list`` permission on the containing Cloud + Spanner instance. Otherwise returns an empty set of permissions. + Calling this method on a backup that does not exist will result + in a NOT_FOUND error if the user has ``spanner.backups.list`` + permission on the containing instance. + + Returns: + Callable[[~.TestIamPermissionsRequest], + ~.TestIamPermissionsResponse]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "test_iam_permissions" not in self._stubs: + self._stubs["test_iam_permissions"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/TestIamPermissions", + request_serializer=iam_policy_pb2.TestIamPermissionsRequest.SerializeToString, + response_deserializer=iam_policy_pb2.TestIamPermissionsResponse.FromString, + ) + return self._stubs["test_iam_permissions"] + + @property + def create_backup( + self, + ) -> Callable[[gsad_backup.CreateBackupRequest], operations_pb2.Operation]: + r"""Return a callable for the create backup method over gRPC. + + Starts creating a new Cloud Spanner Backup. The returned backup + [long-running operation][google.longrunning.Operation] will have + a name of the format + ``projects//instances//backups//operations/`` + and can be used to track creation of the backup. The + [metadata][google.longrunning.Operation.metadata] field type is + [CreateBackupMetadata][google.spanner.admin.database.v1.CreateBackupMetadata]. + The [response][google.longrunning.Operation.response] field type + is [Backup][google.spanner.admin.database.v1.Backup], if + successful. Cancelling the returned operation will stop the + creation and delete the backup. There can be only one pending + backup creation per database. Backup creation of different + databases can run concurrently. + + Returns: + Callable[[~.CreateBackupRequest], + ~.Operation]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "create_backup" not in self._stubs: + self._stubs["create_backup"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/CreateBackup", + request_serializer=gsad_backup.CreateBackupRequest.serialize, + response_deserializer=operations_pb2.Operation.FromString, + ) + return self._stubs["create_backup"] + + @property + def copy_backup( + self, + ) -> Callable[[backup.CopyBackupRequest], operations_pb2.Operation]: + r"""Return a callable for the copy backup method over gRPC. + + Starts copying a Cloud Spanner Backup. The returned backup + [long-running operation][google.longrunning.Operation] will have + a name of the format + ``projects//instances//backups//operations/`` + and can be used to track copying of the backup. The operation is + associated with the destination backup. The + [metadata][google.longrunning.Operation.metadata] field type is + [CopyBackupMetadata][google.spanner.admin.database.v1.CopyBackupMetadata]. 
+ The [response][google.longrunning.Operation.response] field type + is [Backup][google.spanner.admin.database.v1.Backup], if + successful. Cancelling the returned operation will stop the + copying and delete the destination backup. Concurrent CopyBackup + requests can run on the same source backup. + + Returns: + Callable[[~.CopyBackupRequest], + ~.Operation]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "copy_backup" not in self._stubs: + self._stubs["copy_backup"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/CopyBackup", + request_serializer=backup.CopyBackupRequest.serialize, + response_deserializer=operations_pb2.Operation.FromString, + ) + return self._stubs["copy_backup"] + + @property + def get_backup(self) -> Callable[[backup.GetBackupRequest], backup.Backup]: + r"""Return a callable for the get backup method over gRPC. + + Gets metadata on a pending or completed + [Backup][google.spanner.admin.database.v1.Backup]. + + Returns: + Callable[[~.GetBackupRequest], + ~.Backup]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "get_backup" not in self._stubs: + self._stubs["get_backup"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/GetBackup", + request_serializer=backup.GetBackupRequest.serialize, + response_deserializer=backup.Backup.deserialize, + ) + return self._stubs["get_backup"] + + @property + def update_backup( + self, + ) -> Callable[[gsad_backup.UpdateBackupRequest], gsad_backup.Backup]: + r"""Return a callable for the update backup method over gRPC. + + Updates a pending or completed + [Backup][google.spanner.admin.database.v1.Backup]. + + Returns: + Callable[[~.UpdateBackupRequest], + ~.Backup]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "update_backup" not in self._stubs: + self._stubs["update_backup"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/UpdateBackup", + request_serializer=gsad_backup.UpdateBackupRequest.serialize, + response_deserializer=gsad_backup.Backup.deserialize, + ) + return self._stubs["update_backup"] + + @property + def delete_backup(self) -> Callable[[backup.DeleteBackupRequest], empty_pb2.Empty]: + r"""Return a callable for the delete backup method over gRPC. + + Deletes a pending or completed + [Backup][google.spanner.admin.database.v1.Backup]. + + Returns: + Callable[[~.DeleteBackupRequest], + ~.Empty]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
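# --- Editor's sketch (not part of this diff): extending a backup's
# expire_time through UpdateBackup. Only fields named in update_mask are
# modified. A minimal sketch, assuming default credentials; the resource
# name is a hypothetical placeholder.
import datetime

from google.protobuf import field_mask_pb2
from google.cloud.spanner_admin_database_v1 import DatabaseAdminClient
from google.cloud.spanner_admin_database_v1.types import Backup

client = DatabaseAdminClient()
new_expiry = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=30)
updated = client.update_backup(
    backup=Backup(
        name="projects/my-project/instances/my-instance/backups/my-backup",
        expire_time=new_expiry,  # proto-plus converts datetime to Timestamp
    ),
    update_mask=field_mask_pb2.FieldMask(paths=["expire_time"]),
)
print(updated.expire_time)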
+ if "delete_backup" not in self._stubs: + self._stubs["delete_backup"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/DeleteBackup", + request_serializer=backup.DeleteBackupRequest.serialize, + response_deserializer=empty_pb2.Empty.FromString, + ) + return self._stubs["delete_backup"] + + @property + def list_backups( + self, + ) -> Callable[[backup.ListBackupsRequest], backup.ListBackupsResponse]: + r"""Return a callable for the list backups method over gRPC. + + Lists completed and pending backups. Backups returned are + ordered by ``create_time`` in descending order, starting from + the most recent ``create_time``. + + Returns: + Callable[[~.ListBackupsRequest], + ~.ListBackupsResponse]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "list_backups" not in self._stubs: + self._stubs["list_backups"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/ListBackups", + request_serializer=backup.ListBackupsRequest.serialize, + response_deserializer=backup.ListBackupsResponse.deserialize, + ) + return self._stubs["list_backups"] + + @property + def restore_database( + self, + ) -> Callable[ + [spanner_database_admin.RestoreDatabaseRequest], operations_pb2.Operation + ]: + r"""Return a callable for the restore database method over gRPC. + + Create a new database by restoring from a completed backup. The + new database must be in the same project and in an instance with + the same instance configuration as the instance containing the + backup. The returned database [long-running + operation][google.longrunning.Operation] has a name of the + format + ``projects//instances//databases//operations/``, + and can be used to track the progress of the operation, and to + cancel it. The [metadata][google.longrunning.Operation.metadata] + field type is + [RestoreDatabaseMetadata][google.spanner.admin.database.v1.RestoreDatabaseMetadata]. + The [response][google.longrunning.Operation.response] type is + [Database][google.spanner.admin.database.v1.Database], if + successful. Cancelling the returned operation will stop the + restore and delete the database. There can be only one database + being restored into an instance at a time. Once the restore + operation completes, a new restore operation can be initiated, + without waiting for the optimize operation associated with the + first restore to complete. + + Returns: + Callable[[~.RestoreDatabaseRequest], + ~.Operation]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "restore_database" not in self._stubs: + self._stubs["restore_database"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/RestoreDatabase", + request_serializer=spanner_database_admin.RestoreDatabaseRequest.serialize, + response_deserializer=operations_pb2.Operation.FromString, + ) + return self._stubs["restore_database"] + + @property + def list_database_operations( + self, + ) -> Callable[ + [spanner_database_admin.ListDatabaseOperationsRequest], + spanner_database_admin.ListDatabaseOperationsResponse, + ]: + r"""Return a callable for the list database operations method over gRPC. + + Lists database + [longrunning-operations][google.longrunning.Operation]. A + database operation has a name of the form + ``projects//instances//databases//operations/``. + The long-running operation + [metadata][google.longrunning.Operation.metadata] field type + ``metadata.type_url`` describes the type of the metadata. + Operations returned include those that have + completed/failed/canceled within the last 7 days, and pending + operations. + + Returns: + Callable[[~.ListDatabaseOperationsRequest], + ~.ListDatabaseOperationsResponse]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "list_database_operations" not in self._stubs: + self._stubs["list_database_operations"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/ListDatabaseOperations", + request_serializer=spanner_database_admin.ListDatabaseOperationsRequest.serialize, + response_deserializer=spanner_database_admin.ListDatabaseOperationsResponse.deserialize, + ) + return self._stubs["list_database_operations"] + + @property + def list_backup_operations( + self, + ) -> Callable[ + [backup.ListBackupOperationsRequest], backup.ListBackupOperationsResponse + ]: + r"""Return a callable for the list backup operations method over gRPC. + + Lists the backup [long-running + operations][google.longrunning.Operation] in the given instance. + A backup operation has a name of the form + ``projects//instances//backups//operations/``. + The long-running operation + [metadata][google.longrunning.Operation.metadata] field type + ``metadata.type_url`` describes the type of the metadata. + Operations returned include those that have + completed/failed/canceled within the last 7 days, and pending + operations. Operations returned are ordered by + ``operation.metadata.value.progress.start_time`` in descending + order starting from the most recently started operation. + + Returns: + Callable[[~.ListBackupOperationsRequest], + ~.ListBackupOperationsResponse]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "list_backup_operations" not in self._stubs: + self._stubs["list_backup_operations"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/ListBackupOperations", + request_serializer=backup.ListBackupOperationsRequest.serialize, + response_deserializer=backup.ListBackupOperationsResponse.deserialize, + ) + return self._stubs["list_backup_operations"] + + @property + def list_database_roles( + self, + ) -> Callable[ + [spanner_database_admin.ListDatabaseRolesRequest], + spanner_database_admin.ListDatabaseRolesResponse, + ]: + r"""Return a callable for the list database roles method over gRPC. + + Lists Cloud Spanner database roles. + + Returns: + Callable[[~.ListDatabaseRolesRequest], + ~.ListDatabaseRolesResponse]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "list_database_roles" not in self._stubs: + self._stubs["list_database_roles"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/ListDatabaseRoles", + request_serializer=spanner_database_admin.ListDatabaseRolesRequest.serialize, + response_deserializer=spanner_database_admin.ListDatabaseRolesResponse.deserialize, + ) + return self._stubs["list_database_roles"] + + @property + def add_split_points( + self, + ) -> Callable[ + [spanner_database_admin.AddSplitPointsRequest], + spanner_database_admin.AddSplitPointsResponse, + ]: + r"""Return a callable for the add split points method over gRPC. + + Adds split points to specified tables, indexes of a + database. + + Returns: + Callable[[~.AddSplitPointsRequest], + ~.AddSplitPointsResponse]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "add_split_points" not in self._stubs: + self._stubs["add_split_points"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/AddSplitPoints", + request_serializer=spanner_database_admin.AddSplitPointsRequest.serialize, + response_deserializer=spanner_database_admin.AddSplitPointsResponse.deserialize, + ) + return self._stubs["add_split_points"] + + @property + def create_backup_schedule( + self, + ) -> Callable[ + [gsad_backup_schedule.CreateBackupScheduleRequest], + gsad_backup_schedule.BackupSchedule, + ]: + r"""Return a callable for the create backup schedule method over gRPC. + + Creates a new backup schedule. + + Returns: + Callable[[~.CreateBackupScheduleRequest], + ~.BackupSchedule]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "create_backup_schedule" not in self._stubs: + self._stubs["create_backup_schedule"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/CreateBackupSchedule", + request_serializer=gsad_backup_schedule.CreateBackupScheduleRequest.serialize, + response_deserializer=gsad_backup_schedule.BackupSchedule.deserialize, + ) + return self._stubs["create_backup_schedule"] + + @property + def get_backup_schedule( + self, + ) -> Callable[ + [backup_schedule.GetBackupScheduleRequest], backup_schedule.BackupSchedule + ]: + r"""Return a callable for the get backup schedule method over gRPC. + + Gets backup schedule for the input schedule name. + + Returns: + Callable[[~.GetBackupScheduleRequest], + ~.BackupSchedule]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "get_backup_schedule" not in self._stubs: + self._stubs["get_backup_schedule"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/GetBackupSchedule", + request_serializer=backup_schedule.GetBackupScheduleRequest.serialize, + response_deserializer=backup_schedule.BackupSchedule.deserialize, + ) + return self._stubs["get_backup_schedule"] + + @property + def update_backup_schedule( + self, + ) -> Callable[ + [gsad_backup_schedule.UpdateBackupScheduleRequest], + gsad_backup_schedule.BackupSchedule, + ]: + r"""Return a callable for the update backup schedule method over gRPC. + + Updates a backup schedule. + + Returns: + Callable[[~.UpdateBackupScheduleRequest], + ~.BackupSchedule]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "update_backup_schedule" not in self._stubs: + self._stubs["update_backup_schedule"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/UpdateBackupSchedule", + request_serializer=gsad_backup_schedule.UpdateBackupScheduleRequest.serialize, + response_deserializer=gsad_backup_schedule.BackupSchedule.deserialize, + ) + return self._stubs["update_backup_schedule"] + + @property + def delete_backup_schedule( + self, + ) -> Callable[[backup_schedule.DeleteBackupScheduleRequest], empty_pb2.Empty]: + r"""Return a callable for the delete backup schedule method over gRPC. + + Deletes a backup schedule. + + Returns: + Callable[[~.DeleteBackupScheduleRequest], + ~.Empty]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "delete_backup_schedule" not in self._stubs: + self._stubs["delete_backup_schedule"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/DeleteBackupSchedule", + request_serializer=backup_schedule.DeleteBackupScheduleRequest.serialize, + response_deserializer=empty_pb2.Empty.FromString, + ) + return self._stubs["delete_backup_schedule"] + + @property + def list_backup_schedules( + self, + ) -> Callable[ + [backup_schedule.ListBackupSchedulesRequest], + backup_schedule.ListBackupSchedulesResponse, + ]: + r"""Return a callable for the list backup schedules method over gRPC. + + Lists all the backup schedules for the database. + + Returns: + Callable[[~.ListBackupSchedulesRequest], + ~.ListBackupSchedulesResponse]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "list_backup_schedules" not in self._stubs: + self._stubs["list_backup_schedules"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/ListBackupSchedules", + request_serializer=backup_schedule.ListBackupSchedulesRequest.serialize, + response_deserializer=backup_schedule.ListBackupSchedulesResponse.deserialize, + ) + return self._stubs["list_backup_schedules"] + + @property + def internal_update_graph_operation( + self, + ) -> Callable[ + [spanner_database_admin.InternalUpdateGraphOperationRequest], + spanner_database_admin.InternalUpdateGraphOperationResponse, + ]: + r"""Return a callable for the internal update graph + operation method over gRPC. + + This is an internal API called by Spanner Graph jobs. + You should never need to call this API directly. + + Returns: + Callable[[~.InternalUpdateGraphOperationRequest], + ~.InternalUpdateGraphOperationResponse]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "internal_update_graph_operation" not in self._stubs: + self._stubs[ + "internal_update_graph_operation" + ] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/InternalUpdateGraphOperation", + request_serializer=spanner_database_admin.InternalUpdateGraphOperationRequest.serialize, + response_deserializer=spanner_database_admin.InternalUpdateGraphOperationResponse.deserialize, + ) + return self._stubs["internal_update_graph_operation"] + + def close(self): + self._logged_channel.close() + + @property + def delete_operation( + self, + ) -> Callable[[operations_pb2.DeleteOperationRequest], None]: + r"""Return a callable for the delete_operation method over gRPC.""" + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "delete_operation" not in self._stubs: + self._stubs["delete_operation"] = self._logged_channel.unary_unary( + "/google.longrunning.Operations/DeleteOperation", + request_serializer=operations_pb2.DeleteOperationRequest.SerializeToString, + response_deserializer=None, + ) + return self._stubs["delete_operation"] + + @property + def cancel_operation( + self, + ) -> Callable[[operations_pb2.CancelOperationRequest], None]: + r"""Return a callable for the cancel_operation method over gRPC.""" + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "cancel_operation" not in self._stubs: + self._stubs["cancel_operation"] = self._logged_channel.unary_unary( + "/google.longrunning.Operations/CancelOperation", + request_serializer=operations_pb2.CancelOperationRequest.SerializeToString, + response_deserializer=None, + ) + return self._stubs["cancel_operation"] + + @property + def get_operation( + self, + ) -> Callable[[operations_pb2.GetOperationRequest], operations_pb2.Operation]: + r"""Return a callable for the get_operation method over gRPC.""" + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "get_operation" not in self._stubs: + self._stubs["get_operation"] = self._logged_channel.unary_unary( + "/google.longrunning.Operations/GetOperation", + request_serializer=operations_pb2.GetOperationRequest.SerializeToString, + response_deserializer=operations_pb2.Operation.FromString, + ) + return self._stubs["get_operation"] + + @property + def list_operations( + self, + ) -> Callable[ + [operations_pb2.ListOperationsRequest], operations_pb2.ListOperationsResponse + ]: + r"""Return a callable for the list_operations method over gRPC.""" + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "list_operations" not in self._stubs: + self._stubs["list_operations"] = self._logged_channel.unary_unary( + "/google.longrunning.Operations/ListOperations", + request_serializer=operations_pb2.ListOperationsRequest.SerializeToString, + response_deserializer=operations_pb2.ListOperationsResponse.FromString, + ) + return self._stubs["list_operations"] + + @property + def kind(self) -> str: + return "grpc" + + +__all__ = ("DatabaseAdminGrpcTransport",) diff --git a/google/cloud/spanner_admin_database_v1/services/database_admin/transports/grpc_asyncio.py b/google/cloud/spanner_admin_database_v1/services/database_admin/transports/grpc_asyncio.py new file mode 100644 index 0000000000..5171d84d40 --- /dev/null +++ b/google/cloud/spanner_admin_database_v1/services/database_admin/transports/grpc_asyncio.py @@ -0,0 +1,1726 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+import inspect
+import json
+import pickle
+import logging as std_logging
+import warnings
+from typing import Awaitable, Callable, Dict, Optional, Sequence, Tuple, Union
+
+from google.api_core import gapic_v1
+from google.api_core import grpc_helpers_async
+from google.api_core import exceptions as core_exceptions
+from google.api_core import retry_async as retries
+from google.api_core import operations_v1
+from google.auth import credentials as ga_credentials # type: ignore
+from google.auth.transport.grpc import SslCredentials # type: ignore
+from google.protobuf.json_format import MessageToJson
+import google.protobuf.message
+
+import grpc # type: ignore
+import proto # type: ignore
+from grpc.experimental import aio # type: ignore
+
+from google.cloud.spanner_admin_database_v1.types import backup
+from google.cloud.spanner_admin_database_v1.types import backup as gsad_backup
+from google.cloud.spanner_admin_database_v1.types import backup_schedule
+from google.cloud.spanner_admin_database_v1.types import (
+ backup_schedule as gsad_backup_schedule,
+)
+from google.cloud.spanner_admin_database_v1.types import spanner_database_admin
+from google.iam.v1 import iam_policy_pb2 # type: ignore
+from google.iam.v1 import policy_pb2 # type: ignore
+from google.longrunning import operations_pb2 # type: ignore
+from google.protobuf import empty_pb2 # type: ignore
+from .base import DatabaseAdminTransport, DEFAULT_CLIENT_INFO
+from .grpc import DatabaseAdminGrpcTransport
+
+try:
+ from google.api_core import client_logging # type: ignore
+
+ CLIENT_LOGGING_SUPPORTED = True # pragma: NO COVER
+except ImportError: # pragma: NO COVER
+ CLIENT_LOGGING_SUPPORTED = False
+
+_LOGGER = std_logging.getLogger(__name__)
+
+
+class _LoggingClientAIOInterceptor(
+ grpc.aio.UnaryUnaryClientInterceptor
+): # pragma: NO COVER
+ async def intercept_unary_unary(self, continuation, client_call_details, request):
+ logging_enabled = CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor(
+ std_logging.DEBUG
+ )
+ if logging_enabled: # pragma: NO COVER
+ request_metadata = client_call_details.metadata
+ if isinstance(request, proto.Message):
+ request_payload = type(request).to_json(request)
+ elif isinstance(request, google.protobuf.message.Message):
+ request_payload = MessageToJson(request)
+ else:
+ request_payload = f"{type(request).__name__}: {pickle.dumps(request)}"
+
+ request_metadata = {
+ key: value.decode("utf-8") if isinstance(value, bytes) else value
+ for key, value in request_metadata
+ }
+ grpc_request = {
+ "payload": request_payload,
+ "requestMethod": "grpc",
+ "metadata": dict(request_metadata),
+ }
+ _LOGGER.debug(
+ f"Sending request for {client_call_details.method}",
+ extra={
+ "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin",
+ "rpcName": str(client_call_details.method),
+ "request": grpc_request,
+ "metadata": grpc_request["metadata"],
+ },
+ )
+ response = await continuation(client_call_details, request)
+ if logging_enabled: # pragma: NO COVER
+ response_metadata = await response.trailing_metadata()
+ # Convert the gRPC response metadata (``Tuple[Tuple[str, str]]``) to a dict
+ metadata = (
+ dict([(k, str(v)) for k, v in response_metadata])
+ if response_metadata
+ else None
+ )
+ result = await response
+ if isinstance(result, proto.Message):
+ response_payload = type(result).to_json(result)
+ elif isinstance(result, google.protobuf.message.Message):
+ response_payload = MessageToJson(result)
+
else: + response_payload = f"{type(result).__name__}: {pickle.dumps(result)}" + grpc_response = { + "payload": response_payload, + "metadata": metadata, + "status": "OK", + } + _LOGGER.debug( + f"Received response to rpc {client_call_details.method}.", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": str(client_call_details.method), + "response": grpc_response, + "metadata": grpc_response["metadata"], + }, + ) + return response + + +class DatabaseAdminGrpcAsyncIOTransport(DatabaseAdminTransport): + """gRPC AsyncIO backend transport for DatabaseAdmin. + + Cloud Spanner Database Admin API + + The Cloud Spanner Database Admin API can be used to: + + - create, drop, and list databases + - update the schema of pre-existing databases + - create, delete, copy and list backups for a database + - restore a database from an existing backup + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. + + It sends protocol buffers over the wire using gRPC (which is built on + top of HTTP/2); the ``grpcio`` package must be installed. + """ + + _grpc_channel: aio.Channel + _stubs: Dict[str, Callable] = {} + + @classmethod + def create_channel( + cls, + host: str = "spanner.googleapis.com", + credentials: Optional[ga_credentials.Credentials] = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + quota_project_id: Optional[str] = None, + **kwargs, + ) -> aio.Channel: + """Create and return a gRPC AsyncIO channel object. + Args: + host (Optional[str]): The host for the channel to use. + credentials (Optional[~.Credentials]): The + authorization credentials to attach to requests. These + credentials identify this application to the service. If + none are specified, the client will attempt to ascertain + the credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + scopes (Optional[Sequence[str]]): A optional list of scopes needed for this + service. These are only used when credentials are not specified and + are passed to :func:`google.auth.default`. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + kwargs (Optional[dict]): Keyword arguments, which are passed to the + channel creation. + Returns: + aio.Channel: A gRPC AsyncIO channel object. + """ + + return grpc_helpers_async.create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + quota_project_id=quota_project_id, + default_scopes=cls.AUTH_SCOPES, + scopes=scopes, + default_host=cls.DEFAULT_HOST, + **kwargs, + ) + + def __init__( + self, + *, + host: str = "spanner.googleapis.com", + credentials: Optional[ga_credentials.Credentials] = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + channel: Optional[Union[aio.Channel, Callable[..., aio.Channel]]] = None, + api_mtls_endpoint: Optional[str] = None, + client_cert_source: Optional[Callable[[], Tuple[bytes, bytes]]] = None, + ssl_channel_credentials: Optional[grpc.ChannelCredentials] = None, + client_cert_source_for_mtls: Optional[Callable[[], Tuple[bytes, bytes]]] = None, + quota_project_id: Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + always_use_jwt_access: Optional[bool] = False, + api_audience: Optional[str] = None, + ) -> None: + """Instantiate the transport. 
+ + Args: + host (Optional[str]): + The hostname to connect to (default: 'spanner.googleapis.com'). + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + This argument is ignored if a ``channel`` instance is provided. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is ignored if a ``channel`` instance is provided. + scopes (Optional[Sequence[str]]): A optional list of scopes needed for this + service. These are only used when credentials are not specified and + are passed to :func:`google.auth.default`. + channel (Optional[Union[aio.Channel, Callable[..., aio.Channel]]]): + A ``Channel`` instance through which to make calls, or a Callable + that constructs and returns one. If set to None, ``self.create_channel`` + is used to create the channel. If a Callable is given, it will be called + with the same arguments as used in ``self.create_channel``. + api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint. + If provided, it overrides the ``host`` argument and tries to create + a mutual TLS channel with client SSL credentials from + ``client_cert_source`` or application default SSL credentials. + client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]): + Deprecated. A callback to provide client SSL certificate bytes and + private key bytes, both in PEM format. It is ignored if + ``api_mtls_endpoint`` is None. + ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials + for the grpc channel. It is ignored if a ``channel`` instance is provided. + client_cert_source_for_mtls (Optional[Callable[[], Tuple[bytes, bytes]]]): + A callback to provide client certificate bytes and private key bytes, + both in PEM format. It is used to configure a mutual TLS channel. It is + ignored if a ``channel`` instance or ``ssl_channel_credentials`` is provided. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + always_use_jwt_access (Optional[bool]): Whether self signed JWT should + be used for service account credentials. + + Raises: + google.auth.exceptions.MutualTlsChannelError: If mutual TLS transport + creation failed for any reason. + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + self._grpc_channel = None + self._ssl_channel_credentials = ssl_channel_credentials + self._stubs: Dict[str, Callable] = {} + self._operations_client: Optional[operations_v1.OperationsAsyncClient] = None + + if api_mtls_endpoint: + warnings.warn("api_mtls_endpoint is deprecated", DeprecationWarning) + if client_cert_source: + warnings.warn("client_cert_source is deprecated", DeprecationWarning) + + if isinstance(channel, aio.Channel): + # Ignore credentials if a channel was passed. + credentials = None + self._ignore_credentials = True + # If a channel was explicitly provided, set it. 
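+ # Editorial sketch (names hypothetical): a caller who needs custom channel
+ # options can build the channel first and inject it here, in which case
+ # any credential arguments are deliberately dropped:
+ #
+ #   channel = DatabaseAdminGrpcAsyncIOTransport.create_channel(
+ #       "spanner.googleapis.com", credentials=creds
+ #   )
+ #   transport = DatabaseAdminGrpcAsyncIOTransport(channel=channel)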
+ self._grpc_channel = channel + self._ssl_channel_credentials = None + else: + if api_mtls_endpoint: + host = api_mtls_endpoint + + # Create SSL credentials with client_cert_source or application + # default SSL credentials. + if client_cert_source: + cert, key = client_cert_source() + self._ssl_channel_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + else: + self._ssl_channel_credentials = SslCredentials().ssl_credentials + + else: + if client_cert_source_for_mtls and not ssl_channel_credentials: + cert, key = client_cert_source_for_mtls() + self._ssl_channel_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + + # The base transport sets the host, credentials and scopes + super().__init__( + host=host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes, + quota_project_id=quota_project_id, + client_info=client_info, + always_use_jwt_access=always_use_jwt_access, + api_audience=api_audience, + ) + + if not self._grpc_channel: + # initialize with the provided callable or the default channel + channel_init = channel or type(self).create_channel + self._grpc_channel = channel_init( + self._host, + # use the credentials which are saved + credentials=self._credentials, + # Set ``credentials_file`` to ``None`` here as + # the credentials that we saved earlier should be used. + credentials_file=None, + scopes=self._scopes, + ssl_credentials=self._ssl_channel_credentials, + quota_project_id=quota_project_id, + options=[ + ("grpc.max_send_message_length", -1), + ("grpc.max_receive_message_length", -1), + ], + ) + + self._interceptor = _LoggingClientAIOInterceptor() + self._grpc_channel._unary_unary_interceptors.append(self._interceptor) + self._logged_channel = self._grpc_channel + self._wrap_with_kind = ( + "kind" in inspect.signature(gapic_v1.method_async.wrap_method).parameters + ) + # Wrap messages. This must be done after self._logged_channel exists + self._prep_wrapped_messages(client_info) + + @property + def grpc_channel(self) -> aio.Channel: + """Create the channel designed to connect to this service. + + This property caches on the instance; repeated calls return + the same channel. + """ + # Return the channel from cache. + return self._grpc_channel + + @property + def operations_client(self) -> operations_v1.OperationsAsyncClient: + """Create the client designed to process long-running operations. + + This property caches on the instance; repeated calls return the same + client. + """ + # Quick check: Only create a new client if we do not already have one. + if self._operations_client is None: + self._operations_client = operations_v1.OperationsAsyncClient( + self._logged_channel + ) + + # Return the client from cache. + return self._operations_client + + @property + def list_databases( + self, + ) -> Callable[ + [spanner_database_admin.ListDatabasesRequest], + Awaitable[spanner_database_admin.ListDatabasesResponse], + ]: + r"""Return a callable for the list databases method over gRPC. + + Lists Cloud Spanner databases. + + Returns: + Callable[[~.ListDatabasesRequest], + Awaitable[~.ListDatabasesResponse]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
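+ # Usage sketch (editorial, not part of the generated surface): calling the
+ # returned multicallable yields an awaitable gRPC call; the parent resource
+ # name below is hypothetical.
+ #
+ #   request = spanner_database_admin.ListDatabasesRequest(
+ #       parent="projects/my-project/instances/my-instance",
+ #   )
+ #   response = await transport.list_databases(request)
+ #   names = [db.name for db in response.databases]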
+ if "list_databases" not in self._stubs: + self._stubs["list_databases"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/ListDatabases", + request_serializer=spanner_database_admin.ListDatabasesRequest.serialize, + response_deserializer=spanner_database_admin.ListDatabasesResponse.deserialize, + ) + return self._stubs["list_databases"] + + @property + def create_database( + self, + ) -> Callable[ + [spanner_database_admin.CreateDatabaseRequest], + Awaitable[operations_pb2.Operation], + ]: + r"""Return a callable for the create database method over gRPC. + + Creates a new Cloud Spanner database and starts to prepare it + for serving. The returned [long-running + operation][google.longrunning.Operation] will have a name of the + format ``/operations/`` and can be + used to track preparation of the database. The + [metadata][google.longrunning.Operation.metadata] field type is + [CreateDatabaseMetadata][google.spanner.admin.database.v1.CreateDatabaseMetadata]. + The [response][google.longrunning.Operation.response] field type + is [Database][google.spanner.admin.database.v1.Database], if + successful. + + Returns: + Callable[[~.CreateDatabaseRequest], + Awaitable[~.Operation]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "create_database" not in self._stubs: + self._stubs["create_database"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/CreateDatabase", + request_serializer=spanner_database_admin.CreateDatabaseRequest.serialize, + response_deserializer=operations_pb2.Operation.FromString, + ) + return self._stubs["create_database"] + + @property + def get_database( + self, + ) -> Callable[ + [spanner_database_admin.GetDatabaseRequest], + Awaitable[spanner_database_admin.Database], + ]: + r"""Return a callable for the get database method over gRPC. + + Gets the state of a Cloud Spanner database. + + Returns: + Callable[[~.GetDatabaseRequest], + Awaitable[~.Database]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "get_database" not in self._stubs: + self._stubs["get_database"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/GetDatabase", + request_serializer=spanner_database_admin.GetDatabaseRequest.serialize, + response_deserializer=spanner_database_admin.Database.deserialize, + ) + return self._stubs["get_database"] + + @property + def update_database( + self, + ) -> Callable[ + [spanner_database_admin.UpdateDatabaseRequest], + Awaitable[operations_pb2.Operation], + ]: + r"""Return a callable for the update database method over gRPC. + + Updates a Cloud Spanner database. The returned [long-running + operation][google.longrunning.Operation] can be used to track + the progress of updating the database. If the named database + does not exist, returns ``NOT_FOUND``. + + While the operation is pending: + + - The database's + [reconciling][google.spanner.admin.database.v1.Database.reconciling] + field is set to true. + - Cancelling the operation is best-effort. 
If the cancellation
+ succeeds, the operation metadata's
+ [cancel_time][google.spanner.admin.database.v1.UpdateDatabaseMetadata.cancel_time]
+ is set, the updates are reverted, and the operation terminates
+ with a ``CANCELLED`` status.
+ - New UpdateDatabase requests will return a
+ ``FAILED_PRECONDITION`` error until the pending operation is
+ done (returns successfully or with error).
+ - Reading the database via the API continues to give the
+ pre-request values.
+
+ Upon completion of the returned operation:
+
+ - The new values are in effect and readable via the API.
+ - The database's
+ [reconciling][google.spanner.admin.database.v1.Database.reconciling]
+ field becomes false.
+
+ The returned [long-running
+ operation][google.longrunning.Operation] will have a name of the
+ format
+ ``projects/<project>/instances/<instance>/databases/<database>/operations/<operation_id>``
+ and can be used to track the database modification. The
+ [metadata][google.longrunning.Operation.metadata] field type is
+ [UpdateDatabaseMetadata][google.spanner.admin.database.v1.UpdateDatabaseMetadata].
+ The [response][google.longrunning.Operation.response] field type
+ is [Database][google.spanner.admin.database.v1.Database], if
+ successful.
+
+ Returns:
+ Callable[[~.UpdateDatabaseRequest],
+ Awaitable[~.Operation]]:
+ A function that, when called, will call the underlying RPC
+ on the server.
+ """
+ # Generate a "stub function" on-the-fly which will actually make
+ # the request.
+ # gRPC handles serialization and deserialization, so we just need
+ # to pass in the functions for each.
+ if "update_database" not in self._stubs:
+ self._stubs["update_database"] = self._logged_channel.unary_unary(
+ "/google.spanner.admin.database.v1.DatabaseAdmin/UpdateDatabase",
+ request_serializer=spanner_database_admin.UpdateDatabaseRequest.serialize,
+ response_deserializer=operations_pb2.Operation.FromString,
+ )
+ return self._stubs["update_database"]
+
+ @property
+ def update_database_ddl(
+ self,
+ ) -> Callable[
+ [spanner_database_admin.UpdateDatabaseDdlRequest],
+ Awaitable[operations_pb2.Operation],
+ ]:
+ r"""Return a callable for the update database ddl method over gRPC.
+
+ Updates the schema of a Cloud Spanner database by
+ creating/altering/dropping tables, columns, indexes, etc. The
+ returned [long-running operation][google.longrunning.Operation]
+ will have a name of the format
+ ``<database_name>/operations/<operation_id>`` and can be used to
+ track execution of the schema change(s). The
+ [metadata][google.longrunning.Operation.metadata] field type is
+ [UpdateDatabaseDdlMetadata][google.spanner.admin.database.v1.UpdateDatabaseDdlMetadata].
+ The operation has no response.
+
+ Returns:
+ Callable[[~.UpdateDatabaseDdlRequest],
+ Awaitable[~.Operation]]:
+ A function that, when called, will call the underlying RPC
+ on the server.
+ """
+ # Generate a "stub function" on-the-fly which will actually make
+ # the request.
+ # gRPC handles serialization and deserialization, so we just need
+ # to pass in the functions for each.
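+ # Editorial sketch (assumes raw transport usage; the wrapped client does
+ # this for you): at this layer the RPC resolves to a bare
+ # ``operations_pb2.Operation`` which can be polled via
+ # ``self.operations_client``. Resource names and DDL are hypothetical.
+ #
+ #   request = spanner_database_admin.UpdateDatabaseDdlRequest(
+ #       database="projects/p/instances/i/databases/d",
+ #       statements=["CREATE INDEX SingersByName ON Singers(Name)"],
+ #   )
+ #   operation = await transport.update_database_ddl(request)
+ #   latest = await transport.operations_client.get_operation(operation.name)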
+ if "update_database_ddl" not in self._stubs: + self._stubs["update_database_ddl"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/UpdateDatabaseDdl", + request_serializer=spanner_database_admin.UpdateDatabaseDdlRequest.serialize, + response_deserializer=operations_pb2.Operation.FromString, + ) + return self._stubs["update_database_ddl"] + + @property + def drop_database( + self, + ) -> Callable[ + [spanner_database_admin.DropDatabaseRequest], Awaitable[empty_pb2.Empty] + ]: + r"""Return a callable for the drop database method over gRPC. + + Drops (aka deletes) a Cloud Spanner database. Completed backups + for the database will be retained according to their + ``expire_time``. Note: Cloud Spanner might continue to accept + requests for a few seconds after the database has been deleted. + + Returns: + Callable[[~.DropDatabaseRequest], + Awaitable[~.Empty]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "drop_database" not in self._stubs: + self._stubs["drop_database"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/DropDatabase", + request_serializer=spanner_database_admin.DropDatabaseRequest.serialize, + response_deserializer=empty_pb2.Empty.FromString, + ) + return self._stubs["drop_database"] + + @property + def get_database_ddl( + self, + ) -> Callable[ + [spanner_database_admin.GetDatabaseDdlRequest], + Awaitable[spanner_database_admin.GetDatabaseDdlResponse], + ]: + r"""Return a callable for the get database ddl method over gRPC. + + Returns the schema of a Cloud Spanner database as a list of + formatted DDL statements. This method does not show pending + schema updates, those may be queried using the + [Operations][google.longrunning.Operations] API. + + Returns: + Callable[[~.GetDatabaseDdlRequest], + Awaitable[~.GetDatabaseDdlResponse]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "get_database_ddl" not in self._stubs: + self._stubs["get_database_ddl"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/GetDatabaseDdl", + request_serializer=spanner_database_admin.GetDatabaseDdlRequest.serialize, + response_deserializer=spanner_database_admin.GetDatabaseDdlResponse.deserialize, + ) + return self._stubs["get_database_ddl"] + + @property + def set_iam_policy( + self, + ) -> Callable[[iam_policy_pb2.SetIamPolicyRequest], Awaitable[policy_pb2.Policy]]: + r"""Return a callable for the set iam policy method over gRPC. + + Sets the access control policy on a database or backup resource. + Replaces any existing policy. + + Authorization requires ``spanner.databases.setIamPolicy`` + permission on + [resource][google.iam.v1.SetIamPolicyRequest.resource]. For + backups, authorization requires ``spanner.backups.setIamPolicy`` + permission on + [resource][google.iam.v1.SetIamPolicyRequest.resource]. + + Returns: + Callable[[~.SetIamPolicyRequest], + Awaitable[~.Policy]]: + A function that, when called, will call the underlying RPC + on the server. 
+ """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "set_iam_policy" not in self._stubs: + self._stubs["set_iam_policy"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/SetIamPolicy", + request_serializer=iam_policy_pb2.SetIamPolicyRequest.SerializeToString, + response_deserializer=policy_pb2.Policy.FromString, + ) + return self._stubs["set_iam_policy"] + + @property + def get_iam_policy( + self, + ) -> Callable[[iam_policy_pb2.GetIamPolicyRequest], Awaitable[policy_pb2.Policy]]: + r"""Return a callable for the get iam policy method over gRPC. + + Gets the access control policy for a database or backup + resource. Returns an empty policy if a database or backup exists + but does not have a policy set. + + Authorization requires ``spanner.databases.getIamPolicy`` + permission on + [resource][google.iam.v1.GetIamPolicyRequest.resource]. For + backups, authorization requires ``spanner.backups.getIamPolicy`` + permission on + [resource][google.iam.v1.GetIamPolicyRequest.resource]. + + Returns: + Callable[[~.GetIamPolicyRequest], + Awaitable[~.Policy]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "get_iam_policy" not in self._stubs: + self._stubs["get_iam_policy"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/GetIamPolicy", + request_serializer=iam_policy_pb2.GetIamPolicyRequest.SerializeToString, + response_deserializer=policy_pb2.Policy.FromString, + ) + return self._stubs["get_iam_policy"] + + @property + def test_iam_permissions( + self, + ) -> Callable[ + [iam_policy_pb2.TestIamPermissionsRequest], + Awaitable[iam_policy_pb2.TestIamPermissionsResponse], + ]: + r"""Return a callable for the test iam permissions method over gRPC. + + Returns permissions that the caller has on the specified + database or backup resource. + + Attempting this RPC on a non-existent Cloud Spanner database + will result in a NOT_FOUND error if the user has + ``spanner.databases.list`` permission on the containing Cloud + Spanner instance. Otherwise returns an empty set of permissions. + Calling this method on a backup that does not exist will result + in a NOT_FOUND error if the user has ``spanner.backups.list`` + permission on the containing instance. + + Returns: + Callable[[~.TestIamPermissionsRequest], + Awaitable[~.TestIamPermissionsResponse]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "test_iam_permissions" not in self._stubs: + self._stubs["test_iam_permissions"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/TestIamPermissions", + request_serializer=iam_policy_pb2.TestIamPermissionsRequest.SerializeToString, + response_deserializer=iam_policy_pb2.TestIamPermissionsResponse.FromString, + ) + return self._stubs["test_iam_permissions"] + + @property + def create_backup( + self, + ) -> Callable[ + [gsad_backup.CreateBackupRequest], Awaitable[operations_pb2.Operation] + ]: + r"""Return a callable for the create backup method over gRPC. + + Starts creating a new Cloud Spanner Backup. The returned backup + [long-running operation][google.longrunning.Operation] will have + a name of the format + ``projects//instances//backups//operations/`` + and can be used to track creation of the backup. The + [metadata][google.longrunning.Operation.metadata] field type is + [CreateBackupMetadata][google.spanner.admin.database.v1.CreateBackupMetadata]. + The [response][google.longrunning.Operation.response] field type + is [Backup][google.spanner.admin.database.v1.Backup], if + successful. Cancelling the returned operation will stop the + creation and delete the backup. There can be only one pending + backup creation per database. Backup creation of different + databases can run concurrently. + + Returns: + Callable[[~.CreateBackupRequest], + Awaitable[~.Operation]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "create_backup" not in self._stubs: + self._stubs["create_backup"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/CreateBackup", + request_serializer=gsad_backup.CreateBackupRequest.serialize, + response_deserializer=operations_pb2.Operation.FromString, + ) + return self._stubs["create_backup"] + + @property + def copy_backup( + self, + ) -> Callable[[backup.CopyBackupRequest], Awaitable[operations_pb2.Operation]]: + r"""Return a callable for the copy backup method over gRPC. + + Starts copying a Cloud Spanner Backup. The returned backup + [long-running operation][google.longrunning.Operation] will have + a name of the format + ``projects//instances//backups//operations/`` + and can be used to track copying of the backup. The operation is + associated with the destination backup. The + [metadata][google.longrunning.Operation.metadata] field type is + [CopyBackupMetadata][google.spanner.admin.database.v1.CopyBackupMetadata]. + The [response][google.longrunning.Operation.response] field type + is [Backup][google.spanner.admin.database.v1.Backup], if + successful. Cancelling the returned operation will stop the + copying and delete the destination backup. Concurrent CopyBackup + requests can run on the same source backup. + + Returns: + Callable[[~.CopyBackupRequest], + Awaitable[~.Operation]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "copy_backup" not in self._stubs: + self._stubs["copy_backup"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/CopyBackup", + request_serializer=backup.CopyBackupRequest.serialize, + response_deserializer=operations_pb2.Operation.FromString, + ) + return self._stubs["copy_backup"] + + @property + def get_backup( + self, + ) -> Callable[[backup.GetBackupRequest], Awaitable[backup.Backup]]: + r"""Return a callable for the get backup method over gRPC. + + Gets metadata on a pending or completed + [Backup][google.spanner.admin.database.v1.Backup]. + + Returns: + Callable[[~.GetBackupRequest], + Awaitable[~.Backup]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "get_backup" not in self._stubs: + self._stubs["get_backup"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/GetBackup", + request_serializer=backup.GetBackupRequest.serialize, + response_deserializer=backup.Backup.deserialize, + ) + return self._stubs["get_backup"] + + @property + def update_backup( + self, + ) -> Callable[[gsad_backup.UpdateBackupRequest], Awaitable[gsad_backup.Backup]]: + r"""Return a callable for the update backup method over gRPC. + + Updates a pending or completed + [Backup][google.spanner.admin.database.v1.Backup]. + + Returns: + Callable[[~.UpdateBackupRequest], + Awaitable[~.Backup]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "update_backup" not in self._stubs: + self._stubs["update_backup"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/UpdateBackup", + request_serializer=gsad_backup.UpdateBackupRequest.serialize, + response_deserializer=gsad_backup.Backup.deserialize, + ) + return self._stubs["update_backup"] + + @property + def delete_backup( + self, + ) -> Callable[[backup.DeleteBackupRequest], Awaitable[empty_pb2.Empty]]: + r"""Return a callable for the delete backup method over gRPC. + + Deletes a pending or completed + [Backup][google.spanner.admin.database.v1.Backup]. + + Returns: + Callable[[~.DeleteBackupRequest], + Awaitable[~.Empty]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "delete_backup" not in self._stubs: + self._stubs["delete_backup"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/DeleteBackup", + request_serializer=backup.DeleteBackupRequest.serialize, + response_deserializer=empty_pb2.Empty.FromString, + ) + return self._stubs["delete_backup"] + + @property + def list_backups( + self, + ) -> Callable[[backup.ListBackupsRequest], Awaitable[backup.ListBackupsResponse]]: + r"""Return a callable for the list backups method over gRPC. + + Lists completed and pending backups. Backups returned are + ordered by ``create_time`` in descending order, starting from + the most recent ``create_time``. 
+
+ Returns:
+ Callable[[~.ListBackupsRequest],
+ Awaitable[~.ListBackupsResponse]]:
+ A function that, when called, will call the underlying RPC
+ on the server.
+ """
+ # Generate a "stub function" on-the-fly which will actually make
+ # the request.
+ # gRPC handles serialization and deserialization, so we just need
+ # to pass in the functions for each.
+ if "list_backups" not in self._stubs:
+ self._stubs["list_backups"] = self._logged_channel.unary_unary(
+ "/google.spanner.admin.database.v1.DatabaseAdmin/ListBackups",
+ request_serializer=backup.ListBackupsRequest.serialize,
+ response_deserializer=backup.ListBackupsResponse.deserialize,
+ )
+ return self._stubs["list_backups"]
+
+ @property
+ def restore_database(
+ self,
+ ) -> Callable[
+ [spanner_database_admin.RestoreDatabaseRequest],
+ Awaitable[operations_pb2.Operation],
+ ]:
+ r"""Return a callable for the restore database method over gRPC.
+
+ Create a new database by restoring from a completed backup. The
+ new database must be in the same project and in an instance with
+ the same instance configuration as the instance containing the
+ backup. The returned database [long-running
+ operation][google.longrunning.Operation] has a name of the
+ format
+ ``projects/<project>/instances/<instance>/databases/<database>/operations/<operation_id>``,
+ and can be used to track the progress of the operation, and to
+ cancel it. The [metadata][google.longrunning.Operation.metadata]
+ field type is
+ [RestoreDatabaseMetadata][google.spanner.admin.database.v1.RestoreDatabaseMetadata].
+ The [response][google.longrunning.Operation.response] type is
+ [Database][google.spanner.admin.database.v1.Database], if
+ successful. Cancelling the returned operation will stop the
+ restore and delete the database. There can be only one database
+ being restored into an instance at a time. Once the restore
+ operation completes, a new restore operation can be initiated,
+ without waiting for the optimize operation associated with the
+ first restore to complete.
+
+ Returns:
+ Callable[[~.RestoreDatabaseRequest],
+ Awaitable[~.Operation]]:
+ A function that, when called, will call the underlying RPC
+ on the server.
+ """
+ # Generate a "stub function" on-the-fly which will actually make
+ # the request.
+ # gRPC handles serialization and deserialization, so we just need
+ # to pass in the functions for each.
+ if "restore_database" not in self._stubs:
+ self._stubs["restore_database"] = self._logged_channel.unary_unary(
+ "/google.spanner.admin.database.v1.DatabaseAdmin/RestoreDatabase",
+ request_serializer=spanner_database_admin.RestoreDatabaseRequest.serialize,
+ response_deserializer=operations_pb2.Operation.FromString,
+ )
+ return self._stubs["restore_database"]
+
+ @property
+ def list_database_operations(
+ self,
+ ) -> Callable[
+ [spanner_database_admin.ListDatabaseOperationsRequest],
+ Awaitable[spanner_database_admin.ListDatabaseOperationsResponse],
+ ]:
+ r"""Return a callable for the list database operations method over gRPC.
+
+ Lists database
+ [longrunning-operations][google.longrunning.Operation]. A
+ database operation has a name of the form
+ ``projects/<project>/instances/<instance>/databases/<database>/operations/<operation>``.
+ The long-running operation
+ [metadata][google.longrunning.Operation.metadata] field type
+ ``metadata.type_url`` describes the type of the metadata.
+ Operations returned include those that have
+ completed/failed/canceled within the last 7 days, and pending
+ operations.
+
+ Returns:
+ Callable[[~.ListDatabaseOperationsRequest],
+ Awaitable[~.ListDatabaseOperationsResponse]]:
+ A function that, when called, will call the underlying RPC
+ on the server.
+ """
+ # Generate a "stub function" on-the-fly which will actually make
+ # the request.
+ # gRPC handles serialization and deserialization, so we just need
+ # to pass in the functions for each.
+ if "list_database_operations" not in self._stubs:
+ self._stubs["list_database_operations"] = self._logged_channel.unary_unary(
+ "/google.spanner.admin.database.v1.DatabaseAdmin/ListDatabaseOperations",
+ request_serializer=spanner_database_admin.ListDatabaseOperationsRequest.serialize,
+ response_deserializer=spanner_database_admin.ListDatabaseOperationsResponse.deserialize,
+ )
+ return self._stubs["list_database_operations"]
+
+ @property
+ def list_backup_operations(
+ self,
+ ) -> Callable[
+ [backup.ListBackupOperationsRequest],
+ Awaitable[backup.ListBackupOperationsResponse],
+ ]:
+ r"""Return a callable for the list backup operations method over gRPC.
+
+ Lists the backup [long-running
+ operations][google.longrunning.Operation] in the given instance.
+ A backup operation has a name of the form
+ ``projects/<project>/instances/<instance>/backups/<backup>/operations/<operation>``.
+ The long-running operation
+ [metadata][google.longrunning.Operation.metadata] field type
+ ``metadata.type_url`` describes the type of the metadata.
+ Operations returned include those that have
+ completed/failed/canceled within the last 7 days, and pending
+ operations. Operations returned are ordered by
+ ``operation.metadata.value.progress.start_time`` in descending
+ order starting from the most recently started operation.
+
+ Returns:
+ Callable[[~.ListBackupOperationsRequest],
+ Awaitable[~.ListBackupOperationsResponse]]:
+ A function that, when called, will call the underlying RPC
+ on the server.
+ """
+ # Generate a "stub function" on-the-fly which will actually make
+ # the request.
+ # gRPC handles serialization and deserialization, so we just need
+ # to pass in the functions for each.
+ if "list_backup_operations" not in self._stubs:
+ self._stubs["list_backup_operations"] = self._logged_channel.unary_unary(
+ "/google.spanner.admin.database.v1.DatabaseAdmin/ListBackupOperations",
+ request_serializer=backup.ListBackupOperationsRequest.serialize,
+ response_deserializer=backup.ListBackupOperationsResponse.deserialize,
+ )
+ return self._stubs["list_backup_operations"]
+
+ @property
+ def list_database_roles(
+ self,
+ ) -> Callable[
+ [spanner_database_admin.ListDatabaseRolesRequest],
+ Awaitable[spanner_database_admin.ListDatabaseRolesResponse],
+ ]:
+ r"""Return a callable for the list database roles method over gRPC.
+
+ Lists Cloud Spanner database roles.
+
+ Returns:
+ Callable[[~.ListDatabaseRolesRequest],
+ Awaitable[~.ListDatabaseRolesResponse]]:
+ A function that, when called, will call the underlying RPC
+ on the server.
+ """
+ # Generate a "stub function" on-the-fly which will actually make
+ # the request.
+ # gRPC handles serialization and deserialization, so we just need
+ # to pass in the functions for each.
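+ # Pagination note with a hedged sketch: at the transport level each call
+ # returns a single page; the caller loops on ``next_page_token`` (the
+ # wrapped client exposes an async pager instead). Names are hypothetical.
+ #
+ #   request = spanner_database_admin.ListDatabaseRolesRequest(
+ #       parent="projects/p/instances/i/databases/d",
+ #   )
+ #   while True:
+ #       page = await transport.list_database_roles(request)
+ #       roles = list(page.database_roles)
+ #       if not page.next_page_token:
+ #           break
+ #       request.page_token = page.next_page_token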
+ if "list_database_roles" not in self._stubs: + self._stubs["list_database_roles"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/ListDatabaseRoles", + request_serializer=spanner_database_admin.ListDatabaseRolesRequest.serialize, + response_deserializer=spanner_database_admin.ListDatabaseRolesResponse.deserialize, + ) + return self._stubs["list_database_roles"] + + @property + def add_split_points( + self, + ) -> Callable[ + [spanner_database_admin.AddSplitPointsRequest], + Awaitable[spanner_database_admin.AddSplitPointsResponse], + ]: + r"""Return a callable for the add split points method over gRPC. + + Adds split points to specified tables, indexes of a + database. + + Returns: + Callable[[~.AddSplitPointsRequest], + Awaitable[~.AddSplitPointsResponse]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "add_split_points" not in self._stubs: + self._stubs["add_split_points"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/AddSplitPoints", + request_serializer=spanner_database_admin.AddSplitPointsRequest.serialize, + response_deserializer=spanner_database_admin.AddSplitPointsResponse.deserialize, + ) + return self._stubs["add_split_points"] + + @property + def create_backup_schedule( + self, + ) -> Callable[ + [gsad_backup_schedule.CreateBackupScheduleRequest], + Awaitable[gsad_backup_schedule.BackupSchedule], + ]: + r"""Return a callable for the create backup schedule method over gRPC. + + Creates a new backup schedule. + + Returns: + Callable[[~.CreateBackupScheduleRequest], + Awaitable[~.BackupSchedule]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "create_backup_schedule" not in self._stubs: + self._stubs["create_backup_schedule"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/CreateBackupSchedule", + request_serializer=gsad_backup_schedule.CreateBackupScheduleRequest.serialize, + response_deserializer=gsad_backup_schedule.BackupSchedule.deserialize, + ) + return self._stubs["create_backup_schedule"] + + @property + def get_backup_schedule( + self, + ) -> Callable[ + [backup_schedule.GetBackupScheduleRequest], + Awaitable[backup_schedule.BackupSchedule], + ]: + r"""Return a callable for the get backup schedule method over gRPC. + + Gets backup schedule for the input schedule name. + + Returns: + Callable[[~.GetBackupScheduleRequest], + Awaitable[~.BackupSchedule]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "get_backup_schedule" not in self._stubs: + self._stubs["get_backup_schedule"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/GetBackupSchedule", + request_serializer=backup_schedule.GetBackupScheduleRequest.serialize, + response_deserializer=backup_schedule.BackupSchedule.deserialize, + ) + return self._stubs["get_backup_schedule"] + + @property + def update_backup_schedule( + self, + ) -> Callable[ + [gsad_backup_schedule.UpdateBackupScheduleRequest], + Awaitable[gsad_backup_schedule.BackupSchedule], + ]: + r"""Return a callable for the update backup schedule method over gRPC. + + Updates a backup schedule. + + Returns: + Callable[[~.UpdateBackupScheduleRequest], + Awaitable[~.BackupSchedule]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "update_backup_schedule" not in self._stubs: + self._stubs["update_backup_schedule"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/UpdateBackupSchedule", + request_serializer=gsad_backup_schedule.UpdateBackupScheduleRequest.serialize, + response_deserializer=gsad_backup_schedule.BackupSchedule.deserialize, + ) + return self._stubs["update_backup_schedule"] + + @property + def delete_backup_schedule( + self, + ) -> Callable[ + [backup_schedule.DeleteBackupScheduleRequest], Awaitable[empty_pb2.Empty] + ]: + r"""Return a callable for the delete backup schedule method over gRPC. + + Deletes a backup schedule. + + Returns: + Callable[[~.DeleteBackupScheduleRequest], + Awaitable[~.Empty]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "delete_backup_schedule" not in self._stubs: + self._stubs["delete_backup_schedule"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/DeleteBackupSchedule", + request_serializer=backup_schedule.DeleteBackupScheduleRequest.serialize, + response_deserializer=empty_pb2.Empty.FromString, + ) + return self._stubs["delete_backup_schedule"] + + @property + def list_backup_schedules( + self, + ) -> Callable[ + [backup_schedule.ListBackupSchedulesRequest], + Awaitable[backup_schedule.ListBackupSchedulesResponse], + ]: + r"""Return a callable for the list backup schedules method over gRPC. + + Lists all the backup schedules for the database. + + Returns: + Callable[[~.ListBackupSchedulesRequest], + Awaitable[~.ListBackupSchedulesResponse]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "list_backup_schedules" not in self._stubs: + self._stubs["list_backup_schedules"] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/ListBackupSchedules", + request_serializer=backup_schedule.ListBackupSchedulesRequest.serialize, + response_deserializer=backup_schedule.ListBackupSchedulesResponse.deserialize, + ) + return self._stubs["list_backup_schedules"] + + @property + def internal_update_graph_operation( + self, + ) -> Callable[ + [spanner_database_admin.InternalUpdateGraphOperationRequest], + Awaitable[spanner_database_admin.InternalUpdateGraphOperationResponse], + ]: + r"""Return a callable for the internal update graph + operation method over gRPC. + + This is an internal API called by Spanner Graph jobs. + You should never need to call this API directly. + + Returns: + Callable[[~.InternalUpdateGraphOperationRequest], + Awaitable[~.InternalUpdateGraphOperationResponse]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "internal_update_graph_operation" not in self._stubs: + self._stubs[ + "internal_update_graph_operation" + ] = self._logged_channel.unary_unary( + "/google.spanner.admin.database.v1.DatabaseAdmin/InternalUpdateGraphOperation", + request_serializer=spanner_database_admin.InternalUpdateGraphOperationRequest.serialize, + response_deserializer=spanner_database_admin.InternalUpdateGraphOperationResponse.deserialize, + ) + return self._stubs["internal_update_graph_operation"] + + def _prep_wrapped_messages(self, client_info): + """Precompute the wrapped methods, overriding the base class method to use async wrappers.""" + self._wrapped_methods = { + self.list_databases: self._wrap_method( + self.list_databases, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.create_database: self._wrap_method( + self.create_database, + default_timeout=3600.0, + client_info=client_info, + ), + self.get_database: self._wrap_method( + self.get_database, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.update_database: self._wrap_method( + self.update_database, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.update_database_ddl: self._wrap_method( + self.update_database_ddl, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.drop_database: self._wrap_method( + self.drop_database, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + 
predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.get_database_ddl: self._wrap_method( + self.get_database_ddl, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.set_iam_policy: self._wrap_method( + self.set_iam_policy, + default_timeout=30.0, + client_info=client_info, + ), + self.get_iam_policy: self._wrap_method( + self.get_iam_policy, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.test_iam_permissions: self._wrap_method( + self.test_iam_permissions, + default_timeout=30.0, + client_info=client_info, + ), + self.create_backup: self._wrap_method( + self.create_backup, + default_timeout=3600.0, + client_info=client_info, + ), + self.copy_backup: self._wrap_method( + self.copy_backup, + default_timeout=3600.0, + client_info=client_info, + ), + self.get_backup: self._wrap_method( + self.get_backup, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.update_backup: self._wrap_method( + self.update_backup, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.delete_backup: self._wrap_method( + self.delete_backup, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.list_backups: self._wrap_method( + self.list_backups, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.restore_database: self._wrap_method( + self.restore_database, + default_timeout=3600.0, + client_info=client_info, + ), + self.list_database_operations: self._wrap_method( + self.list_database_operations, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.list_backup_operations: self._wrap_method( + self.list_backup_operations, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), 
+ deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.list_database_roles: self._wrap_method( + self.list_database_roles, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.add_split_points: self._wrap_method( + self.add_split_points, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.create_backup_schedule: self._wrap_method( + self.create_backup_schedule, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.get_backup_schedule: self._wrap_method( + self.get_backup_schedule, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.update_backup_schedule: self._wrap_method( + self.update_backup_schedule, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.delete_backup_schedule: self._wrap_method( + self.delete_backup_schedule, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.list_backup_schedules: self._wrap_method( + self.list_backup_schedules, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.internal_update_graph_operation: self._wrap_method( + self.internal_update_graph_operation, + default_timeout=None, + client_info=client_info, + ), + self.cancel_operation: self._wrap_method( + self.cancel_operation, + default_timeout=None, + client_info=client_info, + ), + self.delete_operation: self._wrap_method( + self.delete_operation, + default_timeout=None, + client_info=client_info, + ), + self.get_operation: self._wrap_method( + self.get_operation, + default_timeout=None, + client_info=client_info, + ), + self.list_operations: self._wrap_method( + self.list_operations, + default_timeout=None, + client_info=client_info, + ), + } + + def _wrap_method(self, func, *args, **kwargs): + if self._wrap_with_kind: # pragma: NO COVER + kwargs["kind"] = self.kind + return gapic_v1.method_async.wrap_method(func, *args, **kwargs) + + def close(self): + return self._logged_channel.close() + + @property + def kind(self) -> 
str: + return "grpc_asyncio" + + @property + def delete_operation( + self, + ) -> Callable[[operations_pb2.DeleteOperationRequest], None]: + r"""Return a callable for the delete_operation method over gRPC.""" + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "delete_operation" not in self._stubs: + self._stubs["delete_operation"] = self._logged_channel.unary_unary( + "/google.longrunning.Operations/DeleteOperation", + request_serializer=operations_pb2.DeleteOperationRequest.SerializeToString, + response_deserializer=None, + ) + return self._stubs["delete_operation"] + + @property + def cancel_operation( + self, + ) -> Callable[[operations_pb2.CancelOperationRequest], None]: + r"""Return a callable for the cancel_operation method over gRPC.""" + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "cancel_operation" not in self._stubs: + self._stubs["cancel_operation"] = self._logged_channel.unary_unary( + "/google.longrunning.Operations/CancelOperation", + request_serializer=operations_pb2.CancelOperationRequest.SerializeToString, + response_deserializer=None, + ) + return self._stubs["cancel_operation"] + + @property + def get_operation( + self, + ) -> Callable[[operations_pb2.GetOperationRequest], operations_pb2.Operation]: + r"""Return a callable for the get_operation method over gRPC.""" + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "get_operation" not in self._stubs: + self._stubs["get_operation"] = self._logged_channel.unary_unary( + "/google.longrunning.Operations/GetOperation", + request_serializer=operations_pb2.GetOperationRequest.SerializeToString, + response_deserializer=operations_pb2.Operation.FromString, + ) + return self._stubs["get_operation"] + + @property + def list_operations( + self, + ) -> Callable[ + [operations_pb2.ListOperationsRequest], operations_pb2.ListOperationsResponse + ]: + r"""Return a callable for the list_operations method over gRPC.""" + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "list_operations" not in self._stubs: + self._stubs["list_operations"] = self._logged_channel.unary_unary( + "/google.longrunning.Operations/ListOperations", + request_serializer=operations_pb2.ListOperationsRequest.SerializeToString, + response_deserializer=operations_pb2.ListOperationsResponse.FromString, + ) + return self._stubs["list_operations"] + + +__all__ = ("DatabaseAdminGrpcAsyncIOTransport",) diff --git a/google/cloud/spanner_admin_database_v1/services/database_admin/transports/rest.py b/google/cloud/spanner_admin_database_v1/services/database_admin/transports/rest.py new file mode 100644 index 0000000000..df70fc5636 --- /dev/null +++ b/google/cloud/spanner_admin_database_v1/services/database_admin/transports/rest.py @@ -0,0 +1,6550 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +import logging +import json # type: ignore + +from google.auth.transport.requests import AuthorizedSession # type: ignore +from google.auth import credentials as ga_credentials # type: ignore +from google.api_core import exceptions as core_exceptions +from google.api_core import retry as retries +from google.api_core import rest_helpers +from google.api_core import rest_streaming +from google.api_core import gapic_v1 +import google.protobuf + +from google.protobuf import json_format +from google.api_core import operations_v1 + +from requests import __version__ as requests_version +import dataclasses +from typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Union +import warnings + + +from google.cloud.spanner_admin_database_v1.types import backup +from google.cloud.spanner_admin_database_v1.types import backup as gsad_backup +from google.cloud.spanner_admin_database_v1.types import backup_schedule +from google.cloud.spanner_admin_database_v1.types import ( + backup_schedule as gsad_backup_schedule, +) +from google.cloud.spanner_admin_database_v1.types import spanner_database_admin +from google.iam.v1 import iam_policy_pb2 # type: ignore +from google.iam.v1 import policy_pb2 # type: ignore +from google.protobuf import empty_pb2 # type: ignore +from google.longrunning import operations_pb2 # type: ignore + + +from .rest_base import _BaseDatabaseAdminRestTransport +from .base import DEFAULT_CLIENT_INFO as BASE_DEFAULT_CLIENT_INFO + +try: + OptionalRetry = Union[retries.Retry, gapic_v1.method._MethodDefault, None] +except AttributeError: # pragma: NO COVER + OptionalRetry = Union[retries.Retry, object, None] # type: ignore + +try: + from google.api_core import client_logging # type: ignore + + CLIENT_LOGGING_SUPPORTED = True # pragma: NO COVER +except ImportError: # pragma: NO COVER + CLIENT_LOGGING_SUPPORTED = False + +_LOGGER = logging.getLogger(__name__) + +DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=BASE_DEFAULT_CLIENT_INFO.gapic_version, + grpc_version=None, + rest_version=f"requests@{requests_version}", +) + +if hasattr(DEFAULT_CLIENT_INFO, "protobuf_runtime_version"): # pragma: NO COVER + DEFAULT_CLIENT_INFO.protobuf_runtime_version = google.protobuf.__version__ + + +class DatabaseAdminRestInterceptor: + """Interceptor for DatabaseAdmin. + + Interceptors are used to manipulate requests, request metadata, and responses + in arbitrary ways. + Example use cases include: + * Logging + * Verifying requests according to service or custom semantics + * Stripping extraneous information from responses + + These use cases and more can be enabled by injecting an + instance of a custom subclass when constructing the DatabaseAdminRestTransport. + + .. 
code-block:: python + class MyCustomDatabaseAdminInterceptor(DatabaseAdminRestInterceptor): + def pre_add_split_points(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_add_split_points(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_copy_backup(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_copy_backup(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_create_backup(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_create_backup(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_create_backup_schedule(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_create_backup_schedule(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_create_database(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_create_database(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_delete_backup(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def pre_delete_backup_schedule(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def pre_drop_database(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def pre_get_backup(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_get_backup(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_get_backup_schedule(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_get_backup_schedule(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_get_database(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_get_database(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_get_database_ddl(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_get_database_ddl(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_get_iam_policy(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_get_iam_policy(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_internal_update_graph_operation(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_internal_update_graph_operation(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_list_backup_operations(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_list_backup_operations(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_list_backups(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_list_backups(self, response): + 
logging.log(f"Received response: {response}") + return response + + def pre_list_backup_schedules(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_list_backup_schedules(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_list_database_operations(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_list_database_operations(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_list_database_roles(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_list_database_roles(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_list_databases(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_list_databases(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_restore_database(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_restore_database(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_set_iam_policy(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_set_iam_policy(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_test_iam_permissions(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_test_iam_permissions(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_update_backup(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_update_backup(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_update_backup_schedule(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_update_backup_schedule(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_update_database(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_update_database(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_update_database_ddl(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_update_database_ddl(self, response): + logging.log(f"Received response: {response}") + return response + + transport = DatabaseAdminRestTransport(interceptor=MyCustomDatabaseAdminInterceptor()) + client = DatabaseAdminClient(transport=transport) + + + """ + + def pre_add_split_points( + self, + request: spanner_database_admin.AddSplitPointsRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_database_admin.AddSplitPointsRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for add_split_points + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. 
+ """ + return request, metadata + + def post_add_split_points( + self, response: spanner_database_admin.AddSplitPointsResponse + ) -> spanner_database_admin.AddSplitPointsResponse: + """Post-rpc interceptor for add_split_points + + DEPRECATED. Please use the `post_add_split_points_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_add_split_points` interceptor runs + before the `post_add_split_points_with_metadata` interceptor. + """ + return response + + def post_add_split_points_with_metadata( + self, + response: spanner_database_admin.AddSplitPointsResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_database_admin.AddSplitPointsResponse, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Post-rpc interceptor for add_split_points + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_add_split_points_with_metadata` + interceptor in new development instead of the `post_add_split_points` interceptor. + When both interceptors are used, this `post_add_split_points_with_metadata` interceptor runs after the + `post_add_split_points` interceptor. The (possibly modified) response returned by + `post_add_split_points` will be passed to + `post_add_split_points_with_metadata`. + """ + return response, metadata + + def pre_copy_backup( + self, + request: backup.CopyBackupRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[backup.CopyBackupRequest, Sequence[Tuple[str, Union[str, bytes]]]]: + """Pre-rpc interceptor for copy_backup + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_copy_backup( + self, response: operations_pb2.Operation + ) -> operations_pb2.Operation: + """Post-rpc interceptor for copy_backup + + DEPRECATED. Please use the `post_copy_backup_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_copy_backup` interceptor runs + before the `post_copy_backup_with_metadata` interceptor. + """ + return response + + def post_copy_backup_with_metadata( + self, + response: operations_pb2.Operation, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[operations_pb2.Operation, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for copy_backup + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_copy_backup_with_metadata` + interceptor in new development instead of the `post_copy_backup` interceptor. + When both interceptors are used, this `post_copy_backup_with_metadata` interceptor runs after the + `post_copy_backup` interceptor. The (possibly modified) response returned by + `post_copy_backup` will be passed to + `post_copy_backup_with_metadata`. 
+ """ + return response, metadata + + def pre_create_backup( + self, + request: gsad_backup.CreateBackupRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + gsad_backup.CreateBackupRequest, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Pre-rpc interceptor for create_backup + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_create_backup( + self, response: operations_pb2.Operation + ) -> operations_pb2.Operation: + """Post-rpc interceptor for create_backup + + DEPRECATED. Please use the `post_create_backup_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_create_backup` interceptor runs + before the `post_create_backup_with_metadata` interceptor. + """ + return response + + def post_create_backup_with_metadata( + self, + response: operations_pb2.Operation, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[operations_pb2.Operation, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for create_backup + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_create_backup_with_metadata` + interceptor in new development instead of the `post_create_backup` interceptor. + When both interceptors are used, this `post_create_backup_with_metadata` interceptor runs after the + `post_create_backup` interceptor. The (possibly modified) response returned by + `post_create_backup` will be passed to + `post_create_backup_with_metadata`. + """ + return response, metadata + + def pre_create_backup_schedule( + self, + request: gsad_backup_schedule.CreateBackupScheduleRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + gsad_backup_schedule.CreateBackupScheduleRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for create_backup_schedule + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_create_backup_schedule( + self, response: gsad_backup_schedule.BackupSchedule + ) -> gsad_backup_schedule.BackupSchedule: + """Post-rpc interceptor for create_backup_schedule + + DEPRECATED. Please use the `post_create_backup_schedule_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_create_backup_schedule` interceptor runs + before the `post_create_backup_schedule_with_metadata` interceptor. + """ + return response + + def post_create_backup_schedule_with_metadata( + self, + response: gsad_backup_schedule.BackupSchedule, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + gsad_backup_schedule.BackupSchedule, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Post-rpc interceptor for create_backup_schedule + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. 
+ + We recommend only using this `post_create_backup_schedule_with_metadata` + interceptor in new development instead of the `post_create_backup_schedule` interceptor. + When both interceptors are used, this `post_create_backup_schedule_with_metadata` interceptor runs after the + `post_create_backup_schedule` interceptor. The (possibly modified) response returned by + `post_create_backup_schedule` will be passed to + `post_create_backup_schedule_with_metadata`. + """ + return response, metadata + + def pre_create_database( + self, + request: spanner_database_admin.CreateDatabaseRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_database_admin.CreateDatabaseRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for create_database + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_create_database( + self, response: operations_pb2.Operation + ) -> operations_pb2.Operation: + """Post-rpc interceptor for create_database + + DEPRECATED. Please use the `post_create_database_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_create_database` interceptor runs + before the `post_create_database_with_metadata` interceptor. + """ + return response + + def post_create_database_with_metadata( + self, + response: operations_pb2.Operation, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[operations_pb2.Operation, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for create_database + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_create_database_with_metadata` + interceptor in new development instead of the `post_create_database` interceptor. + When both interceptors are used, this `post_create_database_with_metadata` interceptor runs after the + `post_create_database` interceptor. The (possibly modified) response returned by + `post_create_database` will be passed to + `post_create_database_with_metadata`. + """ + return response, metadata + + def pre_delete_backup( + self, + request: backup.DeleteBackupRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[backup.DeleteBackupRequest, Sequence[Tuple[str, Union[str, bytes]]]]: + """Pre-rpc interceptor for delete_backup + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def pre_delete_backup_schedule( + self, + request: backup_schedule.DeleteBackupScheduleRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + backup_schedule.DeleteBackupScheduleRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for delete_backup_schedule + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. 
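+
+        A minimal sketch of a guard (the instance path is purely illustrative):
+
+        .. code-block:: python
+
+            def pre_delete_backup_schedule(self, request, metadata):
+                # Refuse deletes outside an expected instance.
+                if "/instances/test-instance/" not in request.name:
+                    raise ValueError(f"unexpected schedule: {request.name}")
+                return request, metadata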
+ """ + return request, metadata + + def pre_drop_database( + self, + request: spanner_database_admin.DropDatabaseRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_database_admin.DropDatabaseRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for drop_database + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def pre_get_backup( + self, + request: backup.GetBackupRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[backup.GetBackupRequest, Sequence[Tuple[str, Union[str, bytes]]]]: + """Pre-rpc interceptor for get_backup + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_get_backup(self, response: backup.Backup) -> backup.Backup: + """Post-rpc interceptor for get_backup + + DEPRECATED. Please use the `post_get_backup_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_get_backup` interceptor runs + before the `post_get_backup_with_metadata` interceptor. + """ + return response + + def post_get_backup_with_metadata( + self, response: backup.Backup, metadata: Sequence[Tuple[str, Union[str, bytes]]] + ) -> Tuple[backup.Backup, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for get_backup + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_get_backup_with_metadata` + interceptor in new development instead of the `post_get_backup` interceptor. + When both interceptors are used, this `post_get_backup_with_metadata` interceptor runs after the + `post_get_backup` interceptor. The (possibly modified) response returned by + `post_get_backup` will be passed to + `post_get_backup_with_metadata`. + """ + return response, metadata + + def pre_get_backup_schedule( + self, + request: backup_schedule.GetBackupScheduleRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + backup_schedule.GetBackupScheduleRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for get_backup_schedule + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_get_backup_schedule( + self, response: backup_schedule.BackupSchedule + ) -> backup_schedule.BackupSchedule: + """Post-rpc interceptor for get_backup_schedule + + DEPRECATED. Please use the `post_get_backup_schedule_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_get_backup_schedule` interceptor runs + before the `post_get_backup_schedule_with_metadata` interceptor. 
+ """ + return response + + def post_get_backup_schedule_with_metadata( + self, + response: backup_schedule.BackupSchedule, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[backup_schedule.BackupSchedule, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for get_backup_schedule + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_get_backup_schedule_with_metadata` + interceptor in new development instead of the `post_get_backup_schedule` interceptor. + When both interceptors are used, this `post_get_backup_schedule_with_metadata` interceptor runs after the + `post_get_backup_schedule` interceptor. The (possibly modified) response returned by + `post_get_backup_schedule` will be passed to + `post_get_backup_schedule_with_metadata`. + """ + return response, metadata + + def pre_get_database( + self, + request: spanner_database_admin.GetDatabaseRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_database_admin.GetDatabaseRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for get_database + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_get_database( + self, response: spanner_database_admin.Database + ) -> spanner_database_admin.Database: + """Post-rpc interceptor for get_database + + DEPRECATED. Please use the `post_get_database_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_get_database` interceptor runs + before the `post_get_database_with_metadata` interceptor. + """ + return response + + def post_get_database_with_metadata( + self, + response: spanner_database_admin.Database, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_database_admin.Database, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Post-rpc interceptor for get_database + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_get_database_with_metadata` + interceptor in new development instead of the `post_get_database` interceptor. + When both interceptors are used, this `post_get_database_with_metadata` interceptor runs after the + `post_get_database` interceptor. The (possibly modified) response returned by + `post_get_database` will be passed to + `post_get_database_with_metadata`. + """ + return response, metadata + + def pre_get_database_ddl( + self, + request: spanner_database_admin.GetDatabaseDdlRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_database_admin.GetDatabaseDdlRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for get_database_ddl + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_get_database_ddl( + self, response: spanner_database_admin.GetDatabaseDdlResponse + ) -> spanner_database_admin.GetDatabaseDdlResponse: + """Post-rpc interceptor for get_database_ddl + + DEPRECATED. 
Please use the `post_get_database_ddl_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_get_database_ddl` interceptor runs + before the `post_get_database_ddl_with_metadata` interceptor. + """ + return response + + def post_get_database_ddl_with_metadata( + self, + response: spanner_database_admin.GetDatabaseDdlResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_database_admin.GetDatabaseDdlResponse, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Post-rpc interceptor for get_database_ddl + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_get_database_ddl_with_metadata` + interceptor in new development instead of the `post_get_database_ddl` interceptor. + When both interceptors are used, this `post_get_database_ddl_with_metadata` interceptor runs after the + `post_get_database_ddl` interceptor. The (possibly modified) response returned by + `post_get_database_ddl` will be passed to + `post_get_database_ddl_with_metadata`. + """ + return response, metadata + + def pre_get_iam_policy( + self, + request: iam_policy_pb2.GetIamPolicyRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + iam_policy_pb2.GetIamPolicyRequest, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Pre-rpc interceptor for get_iam_policy + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_get_iam_policy(self, response: policy_pb2.Policy) -> policy_pb2.Policy: + """Post-rpc interceptor for get_iam_policy + + DEPRECATED. Please use the `post_get_iam_policy_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_get_iam_policy` interceptor runs + before the `post_get_iam_policy_with_metadata` interceptor. + """ + return response + + def post_get_iam_policy_with_metadata( + self, + response: policy_pb2.Policy, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[policy_pb2.Policy, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for get_iam_policy + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_get_iam_policy_with_metadata` + interceptor in new development instead of the `post_get_iam_policy` interceptor. + When both interceptors are used, this `post_get_iam_policy_with_metadata` interceptor runs after the + `post_get_iam_policy` interceptor. The (possibly modified) response returned by + `post_get_iam_policy` will be passed to + `post_get_iam_policy_with_metadata`. + """ + return response, metadata + + def pre_list_backup_operations( + self, + request: backup.ListBackupOperationsRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + backup.ListBackupOperationsRequest, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Pre-rpc interceptor for list_backup_operations + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. 
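+
+        A minimal sketch (the filter expression is an example value only):
+
+        .. code-block:: python
+
+            def pre_list_backup_operations(self, request, metadata):
+                # Default to listing only in-flight operations.
+                if not request.filter:
+                    request.filter = "done:false"
+                return request, metadata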
+ """ + return request, metadata + + def post_list_backup_operations( + self, response: backup.ListBackupOperationsResponse + ) -> backup.ListBackupOperationsResponse: + """Post-rpc interceptor for list_backup_operations + + DEPRECATED. Please use the `post_list_backup_operations_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_list_backup_operations` interceptor runs + before the `post_list_backup_operations_with_metadata` interceptor. + """ + return response + + def post_list_backup_operations_with_metadata( + self, + response: backup.ListBackupOperationsResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + backup.ListBackupOperationsResponse, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Post-rpc interceptor for list_backup_operations + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_list_backup_operations_with_metadata` + interceptor in new development instead of the `post_list_backup_operations` interceptor. + When both interceptors are used, this `post_list_backup_operations_with_metadata` interceptor runs after the + `post_list_backup_operations` interceptor. The (possibly modified) response returned by + `post_list_backup_operations` will be passed to + `post_list_backup_operations_with_metadata`. + """ + return response, metadata + + def pre_list_backups( + self, + request: backup.ListBackupsRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[backup.ListBackupsRequest, Sequence[Tuple[str, Union[str, bytes]]]]: + """Pre-rpc interceptor for list_backups + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_list_backups( + self, response: backup.ListBackupsResponse + ) -> backup.ListBackupsResponse: + """Post-rpc interceptor for list_backups + + DEPRECATED. Please use the `post_list_backups_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_list_backups` interceptor runs + before the `post_list_backups_with_metadata` interceptor. + """ + return response + + def post_list_backups_with_metadata( + self, + response: backup.ListBackupsResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[backup.ListBackupsResponse, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for list_backups + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_list_backups_with_metadata` + interceptor in new development instead of the `post_list_backups` interceptor. + When both interceptors are used, this `post_list_backups_with_metadata` interceptor runs after the + `post_list_backups` interceptor. The (possibly modified) response returned by + `post_list_backups` will be passed to + `post_list_backups_with_metadata`. 
+ """ + return response, metadata + + def pre_list_backup_schedules( + self, + request: backup_schedule.ListBackupSchedulesRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + backup_schedule.ListBackupSchedulesRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for list_backup_schedules + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_list_backup_schedules( + self, response: backup_schedule.ListBackupSchedulesResponse + ) -> backup_schedule.ListBackupSchedulesResponse: + """Post-rpc interceptor for list_backup_schedules + + DEPRECATED. Please use the `post_list_backup_schedules_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_list_backup_schedules` interceptor runs + before the `post_list_backup_schedules_with_metadata` interceptor. + """ + return response + + def post_list_backup_schedules_with_metadata( + self, + response: backup_schedule.ListBackupSchedulesResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + backup_schedule.ListBackupSchedulesResponse, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Post-rpc interceptor for list_backup_schedules + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_list_backup_schedules_with_metadata` + interceptor in new development instead of the `post_list_backup_schedules` interceptor. + When both interceptors are used, this `post_list_backup_schedules_with_metadata` interceptor runs after the + `post_list_backup_schedules` interceptor. The (possibly modified) response returned by + `post_list_backup_schedules` will be passed to + `post_list_backup_schedules_with_metadata`. + """ + return response, metadata + + def pre_list_database_operations( + self, + request: spanner_database_admin.ListDatabaseOperationsRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_database_admin.ListDatabaseOperationsRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for list_database_operations + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_list_database_operations( + self, response: spanner_database_admin.ListDatabaseOperationsResponse + ) -> spanner_database_admin.ListDatabaseOperationsResponse: + """Post-rpc interceptor for list_database_operations + + DEPRECATED. Please use the `post_list_database_operations_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_list_database_operations` interceptor runs + before the `post_list_database_operations_with_metadata` interceptor. 
+ """ + return response + + def post_list_database_operations_with_metadata( + self, + response: spanner_database_admin.ListDatabaseOperationsResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_database_admin.ListDatabaseOperationsResponse, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Post-rpc interceptor for list_database_operations + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_list_database_operations_with_metadata` + interceptor in new development instead of the `post_list_database_operations` interceptor. + When both interceptors are used, this `post_list_database_operations_with_metadata` interceptor runs after the + `post_list_database_operations` interceptor. The (possibly modified) response returned by + `post_list_database_operations` will be passed to + `post_list_database_operations_with_metadata`. + """ + return response, metadata + + def pre_list_database_roles( + self, + request: spanner_database_admin.ListDatabaseRolesRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_database_admin.ListDatabaseRolesRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for list_database_roles + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_list_database_roles( + self, response: spanner_database_admin.ListDatabaseRolesResponse + ) -> spanner_database_admin.ListDatabaseRolesResponse: + """Post-rpc interceptor for list_database_roles + + DEPRECATED. Please use the `post_list_database_roles_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_list_database_roles` interceptor runs + before the `post_list_database_roles_with_metadata` interceptor. + """ + return response + + def post_list_database_roles_with_metadata( + self, + response: spanner_database_admin.ListDatabaseRolesResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_database_admin.ListDatabaseRolesResponse, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Post-rpc interceptor for list_database_roles + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_list_database_roles_with_metadata` + interceptor in new development instead of the `post_list_database_roles` interceptor. + When both interceptors are used, this `post_list_database_roles_with_metadata` interceptor runs after the + `post_list_database_roles` interceptor. The (possibly modified) response returned by + `post_list_database_roles` will be passed to + `post_list_database_roles_with_metadata`. + """ + return response, metadata + + def pre_list_databases( + self, + request: spanner_database_admin.ListDatabasesRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_database_admin.ListDatabasesRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for list_databases + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. 
+ """ + return request, metadata + + def post_list_databases( + self, response: spanner_database_admin.ListDatabasesResponse + ) -> spanner_database_admin.ListDatabasesResponse: + """Post-rpc interceptor for list_databases + + DEPRECATED. Please use the `post_list_databases_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_list_databases` interceptor runs + before the `post_list_databases_with_metadata` interceptor. + """ + return response + + def post_list_databases_with_metadata( + self, + response: spanner_database_admin.ListDatabasesResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_database_admin.ListDatabasesResponse, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Post-rpc interceptor for list_databases + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_list_databases_with_metadata` + interceptor in new development instead of the `post_list_databases` interceptor. + When both interceptors are used, this `post_list_databases_with_metadata` interceptor runs after the + `post_list_databases` interceptor. The (possibly modified) response returned by + `post_list_databases` will be passed to + `post_list_databases_with_metadata`. + """ + return response, metadata + + def pre_restore_database( + self, + request: spanner_database_admin.RestoreDatabaseRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_database_admin.RestoreDatabaseRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for restore_database + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_restore_database( + self, response: operations_pb2.Operation + ) -> operations_pb2.Operation: + """Post-rpc interceptor for restore_database + + DEPRECATED. Please use the `post_restore_database_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_restore_database` interceptor runs + before the `post_restore_database_with_metadata` interceptor. + """ + return response + + def post_restore_database_with_metadata( + self, + response: operations_pb2.Operation, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[operations_pb2.Operation, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for restore_database + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_restore_database_with_metadata` + interceptor in new development instead of the `post_restore_database` interceptor. + When both interceptors are used, this `post_restore_database_with_metadata` interceptor runs after the + `post_restore_database` interceptor. The (possibly modified) response returned by + `post_restore_database` will be passed to + `post_restore_database_with_metadata`. 
+ """ + return response, metadata + + def pre_set_iam_policy( + self, + request: iam_policy_pb2.SetIamPolicyRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + iam_policy_pb2.SetIamPolicyRequest, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Pre-rpc interceptor for set_iam_policy + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_set_iam_policy(self, response: policy_pb2.Policy) -> policy_pb2.Policy: + """Post-rpc interceptor for set_iam_policy + + DEPRECATED. Please use the `post_set_iam_policy_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_set_iam_policy` interceptor runs + before the `post_set_iam_policy_with_metadata` interceptor. + """ + return response + + def post_set_iam_policy_with_metadata( + self, + response: policy_pb2.Policy, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[policy_pb2.Policy, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for set_iam_policy + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_set_iam_policy_with_metadata` + interceptor in new development instead of the `post_set_iam_policy` interceptor. + When both interceptors are used, this `post_set_iam_policy_with_metadata` interceptor runs after the + `post_set_iam_policy` interceptor. The (possibly modified) response returned by + `post_set_iam_policy` will be passed to + `post_set_iam_policy_with_metadata`. + """ + return response, metadata + + def pre_test_iam_permissions( + self, + request: iam_policy_pb2.TestIamPermissionsRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + iam_policy_pb2.TestIamPermissionsRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for test_iam_permissions + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_test_iam_permissions( + self, response: iam_policy_pb2.TestIamPermissionsResponse + ) -> iam_policy_pb2.TestIamPermissionsResponse: + """Post-rpc interceptor for test_iam_permissions + + DEPRECATED. Please use the `post_test_iam_permissions_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_test_iam_permissions` interceptor runs + before the `post_test_iam_permissions_with_metadata` interceptor. + """ + return response + + def post_test_iam_permissions_with_metadata( + self, + response: iam_policy_pb2.TestIamPermissionsResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + iam_policy_pb2.TestIamPermissionsResponse, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Post-rpc interceptor for test_iam_permissions + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_test_iam_permissions_with_metadata` + interceptor in new development instead of the `post_test_iam_permissions` interceptor. 
+ When both interceptors are used, this `post_test_iam_permissions_with_metadata` interceptor runs after the + `post_test_iam_permissions` interceptor. The (possibly modified) response returned by + `post_test_iam_permissions` will be passed to + `post_test_iam_permissions_with_metadata`. + """ + return response, metadata + + def pre_update_backup( + self, + request: gsad_backup.UpdateBackupRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + gsad_backup.UpdateBackupRequest, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Pre-rpc interceptor for update_backup + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_update_backup(self, response: gsad_backup.Backup) -> gsad_backup.Backup: + """Post-rpc interceptor for update_backup + + DEPRECATED. Please use the `post_update_backup_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_update_backup` interceptor runs + before the `post_update_backup_with_metadata` interceptor. + """ + return response + + def post_update_backup_with_metadata( + self, + response: gsad_backup.Backup, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[gsad_backup.Backup, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for update_backup + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_update_backup_with_metadata` + interceptor in new development instead of the `post_update_backup` interceptor. + When both interceptors are used, this `post_update_backup_with_metadata` interceptor runs after the + `post_update_backup` interceptor. The (possibly modified) response returned by + `post_update_backup` will be passed to + `post_update_backup_with_metadata`. + """ + return response, metadata + + def pre_update_backup_schedule( + self, + request: gsad_backup_schedule.UpdateBackupScheduleRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + gsad_backup_schedule.UpdateBackupScheduleRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for update_backup_schedule + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_update_backup_schedule( + self, response: gsad_backup_schedule.BackupSchedule + ) -> gsad_backup_schedule.BackupSchedule: + """Post-rpc interceptor for update_backup_schedule + + DEPRECATED. Please use the `post_update_backup_schedule_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_update_backup_schedule` interceptor runs + before the `post_update_backup_schedule_with_metadata` interceptor. 
+ """ + return response + + def post_update_backup_schedule_with_metadata( + self, + response: gsad_backup_schedule.BackupSchedule, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + gsad_backup_schedule.BackupSchedule, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Post-rpc interceptor for update_backup_schedule + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_update_backup_schedule_with_metadata` + interceptor in new development instead of the `post_update_backup_schedule` interceptor. + When both interceptors are used, this `post_update_backup_schedule_with_metadata` interceptor runs after the + `post_update_backup_schedule` interceptor. The (possibly modified) response returned by + `post_update_backup_schedule` will be passed to + `post_update_backup_schedule_with_metadata`. + """ + return response, metadata + + def pre_update_database( + self, + request: spanner_database_admin.UpdateDatabaseRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_database_admin.UpdateDatabaseRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for update_database + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_update_database( + self, response: operations_pb2.Operation + ) -> operations_pb2.Operation: + """Post-rpc interceptor for update_database + + DEPRECATED. Please use the `post_update_database_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_update_database` interceptor runs + before the `post_update_database_with_metadata` interceptor. + """ + return response + + def post_update_database_with_metadata( + self, + response: operations_pb2.Operation, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[operations_pb2.Operation, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for update_database + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_update_database_with_metadata` + interceptor in new development instead of the `post_update_database` interceptor. + When both interceptors are used, this `post_update_database_with_metadata` interceptor runs after the + `post_update_database` interceptor. The (possibly modified) response returned by + `post_update_database` will be passed to + `post_update_database_with_metadata`. + """ + return response, metadata + + def pre_update_database_ddl( + self, + request: spanner_database_admin.UpdateDatabaseDdlRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_database_admin.UpdateDatabaseDdlRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for update_database_ddl + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_update_database_ddl( + self, response: operations_pb2.Operation + ) -> operations_pb2.Operation: + """Post-rpc interceptor for update_database_ddl + + DEPRECATED. 
Please use the `post_update_database_ddl_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. This `post_update_database_ddl` interceptor runs + before the `post_update_database_ddl_with_metadata` interceptor. + """ + return response + + def post_update_database_ddl_with_metadata( + self, + response: operations_pb2.Operation, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[operations_pb2.Operation, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for update_database_ddl + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the DatabaseAdmin server but before it is returned to user code. + + We recommend only using this `post_update_database_ddl_with_metadata` + interceptor in new development instead of the `post_update_database_ddl` interceptor. + When both interceptors are used, this `post_update_database_ddl_with_metadata` interceptor runs after the + `post_update_database_ddl` interceptor. The (possibly modified) response returned by + `post_update_database_ddl` will be passed to + `post_update_database_ddl_with_metadata`. + """ + return response, metadata + + def pre_cancel_operation( + self, + request: operations_pb2.CancelOperationRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + operations_pb2.CancelOperationRequest, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Pre-rpc interceptor for cancel_operation + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_cancel_operation(self, response: None) -> None: + """Post-rpc interceptor for cancel_operation + + Override in a subclass to manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. + """ + return response + + def pre_delete_operation( + self, + request: operations_pb2.DeleteOperationRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + operations_pb2.DeleteOperationRequest, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Pre-rpc interceptor for delete_operation + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_delete_operation(self, response: None) -> None: + """Post-rpc interceptor for delete_operation + + Override in a subclass to manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. + """ + return response + + def pre_get_operation( + self, + request: operations_pb2.GetOperationRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + operations_pb2.GetOperationRequest, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Pre-rpc interceptor for get_operation + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_get_operation( + self, response: operations_pb2.Operation + ) -> operations_pb2.Operation: + """Post-rpc interceptor for get_operation + + Override in a subclass to manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. 
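+
+        As an illustration only (not generated behavior), an override
+        could flag failed long-running operations as soon as they are
+        fetched::
+
+            def post_get_operation(self, response):
+                if response.HasField("error"):
+                    logging.warning(
+                        "operation %s failed: %s",
+                        response.name,
+                        response.error.message,
+                    )
+                return response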
+ """ + return response + + def pre_list_operations( + self, + request: operations_pb2.ListOperationsRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + operations_pb2.ListOperationsRequest, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Pre-rpc interceptor for list_operations + + Override in a subclass to manipulate the request or metadata + before they are sent to the DatabaseAdmin server. + """ + return request, metadata + + def post_list_operations( + self, response: operations_pb2.ListOperationsResponse + ) -> operations_pb2.ListOperationsResponse: + """Post-rpc interceptor for list_operations + + Override in a subclass to manipulate the response + after it is returned by the DatabaseAdmin server but before + it is returned to user code. + """ + return response + + +@dataclasses.dataclass +class DatabaseAdminRestStub: + _session: AuthorizedSession + _host: str + _interceptor: DatabaseAdminRestInterceptor + + +class DatabaseAdminRestTransport(_BaseDatabaseAdminRestTransport): + """REST backend synchronous transport for DatabaseAdmin. + + Cloud Spanner Database Admin API + + The Cloud Spanner Database Admin API can be used to: + + - create, drop, and list databases + - update the schema of pre-existing databases + - create, delete, copy and list backups for a database + - restore a database from an existing backup + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. + + It sends JSON representations of protocol buffers over HTTP/1.1 + """ + + def __init__( + self, + *, + host: str = "spanner.googleapis.com", + credentials: Optional[ga_credentials.Credentials] = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + client_cert_source_for_mtls: Optional[Callable[[], Tuple[bytes, bytes]]] = None, + quota_project_id: Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + always_use_jwt_access: Optional[bool] = False, + url_scheme: str = "https", + interceptor: Optional[DatabaseAdminRestInterceptor] = None, + api_audience: Optional[str] = None, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): + The hostname to connect to (default: 'spanner.googleapis.com'). + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is ignored if ``channel`` is provided. + scopes (Optional(Sequence[str])): A list of scopes. This argument is + ignored if ``channel`` is provided. + client_cert_source_for_mtls (Callable[[], Tuple[bytes, bytes]]): Client + certificate to configure mutual TLS HTTP channel. It is ignored + if ``channel`` is provided. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you are developing + your own client library. 
+ always_use_jwt_access (Optional[bool]): Whether self signed JWT should + be used for service account credentials. + url_scheme: the protocol scheme for the API endpoint. Normally + "https", but for testing or local servers, + "http" can be specified. + """ + # Run the base constructor + # TODO(yon-mg): resolve other ctor params i.e. scopes, quota, etc. + # TODO: When custom host (api_endpoint) is set, `scopes` must *also* be set on the + # credentials object + super().__init__( + host=host, + credentials=credentials, + client_info=client_info, + always_use_jwt_access=always_use_jwt_access, + url_scheme=url_scheme, + api_audience=api_audience, + ) + self._session = AuthorizedSession( + self._credentials, default_host=self.DEFAULT_HOST + ) + self._operations_client: Optional[operations_v1.AbstractOperationsClient] = None + if client_cert_source_for_mtls: + self._session.configure_mtls_channel(client_cert_source_for_mtls) + self._interceptor = interceptor or DatabaseAdminRestInterceptor() + self._prep_wrapped_messages(client_info) + + @property + def operations_client(self) -> operations_v1.AbstractOperationsClient: + """Create the client designed to process long-running operations. + + This property caches on the instance; repeated calls return the same + client. + """ + # Only create a new client if we do not already have one. + if self._operations_client is None: + http_options: Dict[str, List[Dict[str, str]]] = { + "google.longrunning.Operations.CancelOperation": [ + { + "method": "post", + "uri": "/v1/{name=projects/*/instances/*/databases/*/operations/*}:cancel", + }, + { + "method": "post", + "uri": "/v1/{name=projects/*/instances/*/operations/*}:cancel", + }, + { + "method": "post", + "uri": "/v1/{name=projects/*/instances/*/backups/*/operations/*}:cancel", + }, + { + "method": "post", + "uri": "/v1/{name=projects/*/instanceConfigs/*/operations/*}:cancel", + }, + ], + "google.longrunning.Operations.DeleteOperation": [ + { + "method": "delete", + "uri": "/v1/{name=projects/*/instances/*/databases/*/operations/*}", + }, + { + "method": "delete", + "uri": "/v1/{name=projects/*/instances/*/operations/*}", + }, + { + "method": "delete", + "uri": "/v1/{name=projects/*/instances/*/backups/*/operations/*}", + }, + { + "method": "delete", + "uri": "/v1/{name=projects/*/instanceConfigs/*/operations/*}", + }, + ], + "google.longrunning.Operations.GetOperation": [ + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/databases/*/operations/*}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/operations/*}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/backups/*/operations/*}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instanceConfigs/*/operations/*}", + }, + ], + "google.longrunning.Operations.ListOperations": [ + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/databases/*/operations}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/operations}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/backups/*/operations}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instanceConfigs/*/operations}", + }, + ], + } + + rest_transport = operations_v1.OperationsRestTransport( + host=self._host, + # use the credentials which are saved + credentials=self._credentials, + scopes=self._scopes, + http_options=http_options, + path_prefix="v1", + ) + + self._operations_client = operations_v1.AbstractOperationsClient( + transport=rest_transport + ) + + # Return 
the client from cache. + return self._operations_client + + class _AddSplitPoints( + _BaseDatabaseAdminRestTransport._BaseAddSplitPoints, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.AddSplitPoints") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: spanner_database_admin.AddSplitPointsRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_database_admin.AddSplitPointsResponse: + r"""Call the add split points method over HTTP. + + Args: + request (~.spanner_database_admin.AddSplitPointsRequest): + The request object. The request for + [AddSplitPoints][google.spanner.admin.database.v1.DatabaseAdmin.AddSplitPoints]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner_database_admin.AddSplitPointsResponse: + The response for + [AddSplitPoints][google.spanner.admin.database.v1.DatabaseAdmin.AddSplitPoints]. + + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseAddSplitPoints._get_http_options() + ) + + request, metadata = self._interceptor.pre_add_split_points( + request, metadata + ) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseAddSplitPoints._get_transcoded_request( + http_options, request + ) + + body = _BaseDatabaseAdminRestTransport._BaseAddSplitPoints._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseAddSplitPoints._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.AddSplitPoints", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "AddSplitPoints", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._AddSplitPoints._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
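+            # (For example, an HTTP 404 is re-raised as
+            # google.api_core.exceptions.NotFound and a 409 as
+            # exceptions.Conflict, so callers can catch typed errors
+            # instead of inspecting status codes.)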
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner_database_admin.AddSplitPointsResponse() + pb_resp = spanner_database_admin.AddSplitPointsResponse.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_add_split_points(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_add_split_points_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = ( + spanner_database_admin.AddSplitPointsResponse.to_json(response) + ) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.add_split_points", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "AddSplitPoints", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _CopyBackup( + _BaseDatabaseAdminRestTransport._BaseCopyBackup, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.CopyBackup") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: backup.CopyBackupRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.Operation: + r"""Call the copy backup method over HTTP. + + Args: + request (~.backup.CopyBackupRequest): + The request object. The request for + [CopyBackup][google.spanner.admin.database.v1.DatabaseAdmin.CopyBackup]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.operations_pb2.Operation: + This resource represents a + long-running operation that is the + result of a network API call. 
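+                (At the client layer this raw Operation is typically
+                wrapped in a ``google.api_core.operation.Operation``
+                future; an illustrative usage pattern::
+
+                    lro = client.copy_backup(request=request)
+                    copied_backup = lro.result()  # blocks until done
+
+                where ``result()`` yields the finished Backup.)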
+ + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseCopyBackup._get_http_options() + ) + + request, metadata = self._interceptor.pre_copy_backup(request, metadata) + transcoded_request = ( + _BaseDatabaseAdminRestTransport._BaseCopyBackup._get_transcoded_request( + http_options, request + ) + ) + + body = ( + _BaseDatabaseAdminRestTransport._BaseCopyBackup._get_request_body_json( + transcoded_request + ) + ) + + # Jsonify the query params + query_params = ( + _BaseDatabaseAdminRestTransport._BaseCopyBackup._get_query_params_json( + transcoded_request + ) + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.CopyBackup", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "CopyBackup", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._CopyBackup._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = operations_pb2.Operation() + json_format.Parse(response.content, resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_copy_backup(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_copy_backup_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.copy_backup", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "CopyBackup", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _CreateBackup( + _BaseDatabaseAdminRestTransport._BaseCreateBackup, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.CreateBackup") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: gsad_backup.CreateBackupRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, 
+ metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.Operation: + r"""Call the create backup method over HTTP. + + Args: + request (~.gsad_backup.CreateBackupRequest): + The request object. The request for + [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.operations_pb2.Operation: + This resource represents a + long-running operation that is the + result of a network API call. + + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseCreateBackup._get_http_options() + ) + + request, metadata = self._interceptor.pre_create_backup(request, metadata) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseCreateBackup._get_transcoded_request( + http_options, request + ) + + body = _BaseDatabaseAdminRestTransport._BaseCreateBackup._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseCreateBackup._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.CreateBackup", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "CreateBackup", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._CreateBackup._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
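+            # (Note: operations_pb2.Operation is a plain protobuf message,
+            # not a proto-plus type, so the success path below parses the
+            # JSON body into it directly with json_format.Parse rather
+            # than through a .pb() view.)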
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = operations_pb2.Operation() + json_format.Parse(response.content, resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_create_backup(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_create_backup_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.create_backup", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "CreateBackup", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _CreateBackupSchedule( + _BaseDatabaseAdminRestTransport._BaseCreateBackupSchedule, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.CreateBackupSchedule") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: gsad_backup_schedule.CreateBackupScheduleRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> gsad_backup_schedule.BackupSchedule: + r"""Call the create backup schedule method over HTTP. + + Args: + request (~.gsad_backup_schedule.CreateBackupScheduleRequest): + The request object. The request for + [CreateBackupSchedule][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackupSchedule]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.gsad_backup_schedule.BackupSchedule: + BackupSchedule expresses the + automated backup creation specification + for a Spanner database. 
Next ID: 10 + + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseCreateBackupSchedule._get_http_options() + ) + + request, metadata = self._interceptor.pre_create_backup_schedule( + request, metadata + ) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseCreateBackupSchedule._get_transcoded_request( + http_options, request + ) + + body = _BaseDatabaseAdminRestTransport._BaseCreateBackupSchedule._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseCreateBackupSchedule._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.CreateBackupSchedule", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "CreateBackupSchedule", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._CreateBackupSchedule._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = gsad_backup_schedule.BackupSchedule() + pb_resp = gsad_backup_schedule.BackupSchedule.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_create_backup_schedule(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_create_backup_schedule_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = gsad_backup_schedule.BackupSchedule.to_json( + response + ) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.create_backup_schedule", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "CreateBackupSchedule", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _CreateDatabase( + _BaseDatabaseAdminRestTransport._BaseCreateDatabase, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.CreateDatabase") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + 
params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: spanner_database_admin.CreateDatabaseRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.Operation: + r"""Call the create database method over HTTP. + + Args: + request (~.spanner_database_admin.CreateDatabaseRequest): + The request object. The request for + [CreateDatabase][google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.operations_pb2.Operation: + This resource represents a + long-running operation that is the + result of a network API call. + + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseCreateDatabase._get_http_options() + ) + + request, metadata = self._interceptor.pre_create_database(request, metadata) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseCreateDatabase._get_transcoded_request( + http_options, request + ) + + body = _BaseDatabaseAdminRestTransport._BaseCreateDatabase._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseCreateDatabase._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.CreateDatabase", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "CreateDatabase", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._CreateDatabase._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
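+            # (On success, the Operation returned below carries a
+            # CreateDatabaseMetadata message in its metadata field and,
+            # once done, a Database in its response field; clients
+            # normally poll it via the operations_client property defined
+            # above.)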
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = operations_pb2.Operation() + json_format.Parse(response.content, resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_create_database(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_create_database_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.create_database", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "CreateDatabase", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _DeleteBackup( + _BaseDatabaseAdminRestTransport._BaseDeleteBackup, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.DeleteBackup") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: backup.DeleteBackupRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ): + r"""Call the delete backup method over HTTP. + + Args: + request (~.backup.DeleteBackupRequest): + The request object. The request for + [DeleteBackup][google.spanner.admin.database.v1.DatabaseAdmin.DeleteBackup]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
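+
+            (DeleteBackup maps to ``google.protobuf.Empty`` on the wire,
+            so this method returns nothing; a failed delete is reported
+            only through the raised exception.)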
+ """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseDeleteBackup._get_http_options() + ) + + request, metadata = self._interceptor.pre_delete_backup(request, metadata) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseDeleteBackup._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseDeleteBackup._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.DeleteBackup", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "DeleteBackup", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._DeleteBackup._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + class _DeleteBackupSchedule( + _BaseDatabaseAdminRestTransport._BaseDeleteBackupSchedule, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.DeleteBackupSchedule") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: backup_schedule.DeleteBackupScheduleRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ): + r"""Call the delete backup schedule method over HTTP. + + Args: + request (~.backup_schedule.DeleteBackupScheduleRequest): + The request object. The request for + [DeleteBackupSchedule][google.spanner.admin.database.v1.DatabaseAdmin.DeleteBackupSchedule]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseDeleteBackupSchedule._get_http_options() + ) + + request, metadata = self._interceptor.pre_delete_backup_schedule( + request, metadata + ) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseDeleteBackupSchedule._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseDeleteBackupSchedule._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.DeleteBackupSchedule", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "DeleteBackupSchedule", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._DeleteBackupSchedule._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + class _DropDatabase( + _BaseDatabaseAdminRestTransport._BaseDropDatabase, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.DropDatabase") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: spanner_database_admin.DropDatabaseRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ): + r"""Call the drop database method over HTTP. + + Args: + request (~.spanner_database_admin.DropDatabaseRequest): + The request object. The request for + [DropDatabase][google.spanner.admin.database.v1.DatabaseAdmin.DropDatabase]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseDropDatabase._get_http_options() + ) + + request, metadata = self._interceptor.pre_drop_database(request, metadata) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseDropDatabase._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseDropDatabase._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.DropDatabase", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "DropDatabase", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._DropDatabase._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + class _GetBackup( + _BaseDatabaseAdminRestTransport._BaseGetBackup, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.GetBackup") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: backup.GetBackupRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> backup.Backup: + r"""Call the get backup method over HTTP. + + Args: + request (~.backup.GetBackupRequest): + The request object. The request for + [GetBackup][google.spanner.admin.database.v1.DatabaseAdmin.GetBackup]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.backup.Backup: + A backup of a Cloud Spanner database. 
+ """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseGetBackup._get_http_options() + ) + + request, metadata = self._interceptor.pre_get_backup(request, metadata) + transcoded_request = ( + _BaseDatabaseAdminRestTransport._BaseGetBackup._get_transcoded_request( + http_options, request + ) + ) + + # Jsonify the query params + query_params = ( + _BaseDatabaseAdminRestTransport._BaseGetBackup._get_query_params_json( + transcoded_request + ) + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.GetBackup", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "GetBackup", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._GetBackup._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = backup.Backup() + pb_resp = backup.Backup.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_get_backup(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_get_backup_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = backup.Backup.to_json(response) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.get_backup", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "GetBackup", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _GetBackupSchedule( + _BaseDatabaseAdminRestTransport._BaseGetBackupSchedule, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.GetBackupSchedule") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: backup_schedule.GetBackupScheduleRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> backup_schedule.BackupSchedule: + 
r"""Call the get backup schedule method over HTTP. + + Args: + request (~.backup_schedule.GetBackupScheduleRequest): + The request object. The request for + [GetBackupSchedule][google.spanner.admin.database.v1.DatabaseAdmin.GetBackupSchedule]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.backup_schedule.BackupSchedule: + BackupSchedule expresses the + automated backup creation specification + for a Spanner database. Next ID: 10 + + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseGetBackupSchedule._get_http_options() + ) + + request, metadata = self._interceptor.pre_get_backup_schedule( + request, metadata + ) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseGetBackupSchedule._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseGetBackupSchedule._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.GetBackupSchedule", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "GetBackupSchedule", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._GetBackupSchedule._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = backup_schedule.BackupSchedule() + pb_resp = backup_schedule.BackupSchedule.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_get_backup_schedule(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_get_backup_schedule_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = backup_schedule.BackupSchedule.to_json(response) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.get_backup_schedule", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "GetBackupSchedule", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _GetDatabase( + _BaseDatabaseAdminRestTransport._BaseGetDatabase, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.GetDatabase") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: spanner_database_admin.GetDatabaseRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_database_admin.Database: + r"""Call the get database method over HTTP. + + Args: + request (~.spanner_database_admin.GetDatabaseRequest): + The request object. The request for + [GetDatabase][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabase]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner_database_admin.Database: + A Cloud Spanner database. 
+ """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseGetDatabase._get_http_options() + ) + + request, metadata = self._interceptor.pre_get_database(request, metadata) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseGetDatabase._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = ( + _BaseDatabaseAdminRestTransport._BaseGetDatabase._get_query_params_json( + transcoded_request + ) + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.GetDatabase", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "GetDatabase", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._GetDatabase._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner_database_admin.Database() + pb_resp = spanner_database_admin.Database.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_get_database(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_get_database_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = spanner_database_admin.Database.to_json(response) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.get_database", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "GetDatabase", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _GetDatabaseDdl( + _BaseDatabaseAdminRestTransport._BaseGetDatabaseDdl, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.GetDatabaseDdl") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: spanner_database_admin.GetDatabaseDdlRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, 
Union[str, bytes]]] = (), + ) -> spanner_database_admin.GetDatabaseDdlResponse: + r"""Call the get database ddl method over HTTP. + + Args: + request (~.spanner_database_admin.GetDatabaseDdlRequest): + The request object. The request for + [GetDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner_database_admin.GetDatabaseDdlResponse: + The response for + [GetDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl]. + + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseGetDatabaseDdl._get_http_options() + ) + + request, metadata = self._interceptor.pre_get_database_ddl( + request, metadata + ) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseGetDatabaseDdl._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseGetDatabaseDdl._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.GetDatabaseDdl", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "GetDatabaseDdl", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._GetDatabaseDdl._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
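+            # from_http_response() inspects the status code and error payload
+            # and returns the matching typed exception, e.g. 404 -> NotFound,
+            # 403 -> PermissionDenied, 5xx -> a ServerError subclass, so
+            # callers can catch specific errors instead of checking codes.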
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner_database_admin.GetDatabaseDdlResponse() + pb_resp = spanner_database_admin.GetDatabaseDdlResponse.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_get_database_ddl(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_get_database_ddl_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = ( + spanner_database_admin.GetDatabaseDdlResponse.to_json(response) + ) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.get_database_ddl", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "GetDatabaseDdl", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _GetIamPolicy( + _BaseDatabaseAdminRestTransport._BaseGetIamPolicy, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.GetIamPolicy") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: iam_policy_pb2.GetIamPolicyRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> policy_pb2.Policy: + r"""Call the get iam policy method over HTTP. + + Args: + request (~.iam_policy_pb2.GetIamPolicyRequest): + The request object. Request message for ``GetIamPolicy`` method. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.policy_pb2.Policy: + An Identity and Access Management (IAM) policy, which + specifies access controls for Google Cloud resources. + + A ``Policy`` is a collection of ``bindings``. A + ``binding`` binds one or more ``members``, or + principals, to a single ``role``. Principals can be user + accounts, service accounts, Google groups, and domains + (such as G Suite). A ``role`` is a named list of + permissions; each ``role`` can be an IAM predefined role + or a user-created custom role. + + For some types of Google Cloud resources, a ``binding`` + can also specify a ``condition``, which is a logical + expression that allows access to a resource only if the + expression evaluates to ``true``. 
A condition can add + constraints based on attributes of the request, the + resource, or both. To learn which resources support + conditions in their IAM policies, see the `IAM + documentation `__. + + **JSON example:** + + :: + + { + "bindings": [ + { + "role": "roles/resourcemanager.organizationAdmin", + "members": [ + "user:mike@example.com", + "group:admins@example.com", + "domain:google.com", + "serviceAccount:my-project-id@appspot.gserviceaccount.com" + ] + }, + { + "role": "roles/resourcemanager.organizationViewer", + "members": [ + "user:eve@example.com" + ], + "condition": { + "title": "expirable access", + "description": "Does not grant access after Sep 2020", + "expression": "request.time < + timestamp('2020-10-01T00:00:00.000Z')", + } + } + ], + "etag": "BwWWja0YfJA=", + "version": 3 + } + + **YAML example:** + + :: + + bindings: + - members: + - user:mike@example.com + - group:admins@example.com + - domain:google.com + - serviceAccount:my-project-id@appspot.gserviceaccount.com + role: roles/resourcemanager.organizationAdmin + - members: + - user:eve@example.com + role: roles/resourcemanager.organizationViewer + condition: + title: expirable access + description: Does not grant access after Sep 2020 + expression: request.time < timestamp('2020-10-01T00:00:00.000Z') + etag: BwWWja0YfJA= + version: 3 + + For a description of IAM and its features, see the `IAM + documentation `__. + + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseGetIamPolicy._get_http_options() + ) + + request, metadata = self._interceptor.pre_get_iam_policy(request, metadata) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseGetIamPolicy._get_transcoded_request( + http_options, request + ) + + body = _BaseDatabaseAdminRestTransport._BaseGetIamPolicy._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseGetIamPolicy._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.GetIamPolicy", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "GetIamPolicy", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._GetIamPolicy._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
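+            # Unlike the plain GET calls above, GetIamPolicy is transcoded to
+            # a request with a JSON body (note the `body` argument passed to
+            # _get_response), and its Policy response is a raw protobuf
+            # message that is parsed in place below rather than through a
+            # proto-plus .pb() view.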
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = policy_pb2.Policy() + pb_resp = resp + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_get_iam_policy(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_get_iam_policy_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.get_iam_policy", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "GetIamPolicy", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _InternalUpdateGraphOperation( + _BaseDatabaseAdminRestTransport._BaseInternalUpdateGraphOperation, + DatabaseAdminRestStub, + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.InternalUpdateGraphOperation") + + def __call__( + self, + request: spanner_database_admin.InternalUpdateGraphOperationRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_database_admin.InternalUpdateGraphOperationResponse: + raise NotImplementedError( + "Method InternalUpdateGraphOperation is not available over REST transport" + ) + + class _ListBackupOperations( + _BaseDatabaseAdminRestTransport._BaseListBackupOperations, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.ListBackupOperations") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: backup.ListBackupOperationsRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> backup.ListBackupOperationsResponse: + r"""Call the list backup operations method over HTTP. + + Args: + request (~.backup.ListBackupOperationsRequest): + The request object. The request for + [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ + Returns: + ~.backup.ListBackupOperationsResponse: + The response for + [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations]. + + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseListBackupOperations._get_http_options() + ) + + request, metadata = self._interceptor.pre_list_backup_operations( + request, metadata + ) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseListBackupOperations._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseListBackupOperations._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.ListBackupOperations", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "ListBackupOperations", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._ListBackupOperations._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = backup.ListBackupOperationsResponse() + pb_resp = backup.ListBackupOperationsResponse.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_list_backup_operations(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_list_backup_operations_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = backup.ListBackupOperationsResponse.to_json( + response + ) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.list_backup_operations", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "ListBackupOperations", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _ListBackups( + _BaseDatabaseAdminRestTransport._BaseListBackups, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.ListBackups") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + 
params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: backup.ListBackupsRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> backup.ListBackupsResponse: + r"""Call the list backups method over HTTP. + + Args: + request (~.backup.ListBackupsRequest): + The request object. The request for + [ListBackups][google.spanner.admin.database.v1.DatabaseAdmin.ListBackups]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.backup.ListBackupsResponse: + The response for + [ListBackups][google.spanner.admin.database.v1.DatabaseAdmin.ListBackups]. + + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseListBackups._get_http_options() + ) + + request, metadata = self._interceptor.pre_list_backups(request, metadata) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseListBackups._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = ( + _BaseDatabaseAdminRestTransport._BaseListBackups._get_query_params_json( + transcoded_request + ) + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.ListBackups", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "ListBackups", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._ListBackups._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
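+            # The parsed ListBackupsResponse below carries a next_page_token;
+            # this transport returns a single page, and the GAPIC client layer
+            # above it is what wraps the call in a pager that re-invokes this
+            # method until the token is empty.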
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = backup.ListBackupsResponse() + pb_resp = backup.ListBackupsResponse.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_list_backups(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_list_backups_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = backup.ListBackupsResponse.to_json(response) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.list_backups", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "ListBackups", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _ListBackupSchedules( + _BaseDatabaseAdminRestTransport._BaseListBackupSchedules, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.ListBackupSchedules") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: backup_schedule.ListBackupSchedulesRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> backup_schedule.ListBackupSchedulesResponse: + r"""Call the list backup schedules method over HTTP. + + Args: + request (~.backup_schedule.ListBackupSchedulesRequest): + The request object. The request for + [ListBackupSchedules][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupSchedules]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.backup_schedule.ListBackupSchedulesResponse: + The response for + [ListBackupSchedules][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupSchedules]. 
+ + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseListBackupSchedules._get_http_options() + ) + + request, metadata = self._interceptor.pre_list_backup_schedules( + request, metadata + ) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseListBackupSchedules._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseListBackupSchedules._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.ListBackupSchedules", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "ListBackupSchedules", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._ListBackupSchedules._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = backup_schedule.ListBackupSchedulesResponse() + pb_resp = backup_schedule.ListBackupSchedulesResponse.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_list_backup_schedules(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_list_backup_schedules_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = ( + backup_schedule.ListBackupSchedulesResponse.to_json(response) + ) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.list_backup_schedules", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "ListBackupSchedules", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _ListDatabaseOperations( + _BaseDatabaseAdminRestTransport._BaseListDatabaseOperations, + DatabaseAdminRestStub, + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.ListDatabaseOperations") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: 
spanner_database_admin.ListDatabaseOperationsRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_database_admin.ListDatabaseOperationsResponse: + r"""Call the list database operations method over HTTP. + + Args: + request (~.spanner_database_admin.ListDatabaseOperationsRequest): + The request object. The request for + [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner_database_admin.ListDatabaseOperationsResponse: + The response for + [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations]. + + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseListDatabaseOperations._get_http_options() + ) + + request, metadata = self._interceptor.pre_list_database_operations( + request, metadata + ) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseListDatabaseOperations._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseListDatabaseOperations._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.ListDatabaseOperations", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "ListDatabaseOperations", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._ListDatabaseOperations._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
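+            # Parsing below uses ignore_unknown_fields=True, so fields added
+            # by a newer server revision are silently dropped rather than
+            # raising, keeping this generated client forward compatible.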
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner_database_admin.ListDatabaseOperationsResponse() + pb_resp = spanner_database_admin.ListDatabaseOperationsResponse.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_list_database_operations(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_list_database_operations_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = ( + spanner_database_admin.ListDatabaseOperationsResponse.to_json( + response + ) + ) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.list_database_operations", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "ListDatabaseOperations", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _ListDatabaseRoles( + _BaseDatabaseAdminRestTransport._BaseListDatabaseRoles, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.ListDatabaseRoles") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: spanner_database_admin.ListDatabaseRolesRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_database_admin.ListDatabaseRolesResponse: + r"""Call the list database roles method over HTTP. + + Args: + request (~.spanner_database_admin.ListDatabaseRolesRequest): + The request object. The request for + [ListDatabaseRoles][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseRoles]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner_database_admin.ListDatabaseRolesResponse: + The response for + [ListDatabaseRoles][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseRoles]. 
+ + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseListDatabaseRoles._get_http_options() + ) + + request, metadata = self._interceptor.pre_list_database_roles( + request, metadata + ) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseListDatabaseRoles._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseListDatabaseRoles._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.ListDatabaseRoles", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "ListDatabaseRoles", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._ListDatabaseRoles._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner_database_admin.ListDatabaseRolesResponse() + pb_resp = spanner_database_admin.ListDatabaseRolesResponse.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_list_database_roles(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_list_database_roles_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = ( + spanner_database_admin.ListDatabaseRolesResponse.to_json( + response + ) + ) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.list_database_roles", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "ListDatabaseRoles", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _ListDatabases( + _BaseDatabaseAdminRestTransport._BaseListDatabases, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.ListDatabases") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: 
spanner_database_admin.ListDatabasesRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_database_admin.ListDatabasesResponse: + r"""Call the list databases method over HTTP. + + Args: + request (~.spanner_database_admin.ListDatabasesRequest): + The request object. The request for + [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner_database_admin.ListDatabasesResponse: + The response for + [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases]. + + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseListDatabases._get_http_options() + ) + + request, metadata = self._interceptor.pre_list_databases(request, metadata) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseListDatabases._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseListDatabases._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.ListDatabases", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "ListDatabases", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._ListDatabases._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
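+            # After parsing, the interceptor hooks below let a custom
+            # interceptor registered on this transport inspect or replace the
+            # response and observe the raw HTTP response headers before the
+            # result is returned to the caller.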
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner_database_admin.ListDatabasesResponse() + pb_resp = spanner_database_admin.ListDatabasesResponse.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_list_databases(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_list_databases_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = ( + spanner_database_admin.ListDatabasesResponse.to_json(response) + ) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.list_databases", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "ListDatabases", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _RestoreDatabase( + _BaseDatabaseAdminRestTransport._BaseRestoreDatabase, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.RestoreDatabase") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: spanner_database_admin.RestoreDatabaseRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.Operation: + r"""Call the restore database method over HTTP. + + Args: + request (~.spanner_database_admin.RestoreDatabaseRequest): + The request object. The request for + [RestoreDatabase][google.spanner.admin.database.v1.DatabaseAdmin.RestoreDatabase]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.operations_pb2.Operation: + This resource represents a + long-running operation that is the + result of a network API call. 
+ + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseRestoreDatabase._get_http_options() + ) + + request, metadata = self._interceptor.pre_restore_database( + request, metadata + ) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseRestoreDatabase._get_transcoded_request( + http_options, request + ) + + body = _BaseDatabaseAdminRestTransport._BaseRestoreDatabase._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseRestoreDatabase._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.RestoreDatabase", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "RestoreDatabase", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._RestoreDatabase._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = operations_pb2.Operation() + json_format.Parse(response.content, resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_restore_database(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_restore_database_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.restore_database", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "RestoreDatabase", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _SetIamPolicy( + _BaseDatabaseAdminRestTransport._BaseSetIamPolicy, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.SetIamPolicy") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: iam_policy_pb2.SetIamPolicyRequest, + *, + retry: OptionalRetry = 
gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> policy_pb2.Policy: + r"""Call the set iam policy method over HTTP. + + Args: + request (~.iam_policy_pb2.SetIamPolicyRequest): + The request object. Request message for ``SetIamPolicy`` method. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.policy_pb2.Policy: + An Identity and Access Management (IAM) policy, which + specifies access controls for Google Cloud resources. + + A ``Policy`` is a collection of ``bindings``. A + ``binding`` binds one or more ``members``, or + principals, to a single ``role``. Principals can be user + accounts, service accounts, Google groups, and domains + (such as G Suite). A ``role`` is a named list of + permissions; each ``role`` can be an IAM predefined role + or a user-created custom role. + + For some types of Google Cloud resources, a ``binding`` + can also specify a ``condition``, which is a logical + expression that allows access to a resource only if the + expression evaluates to ``true``. A condition can add + constraints based on attributes of the request, the + resource, or both. To learn which resources support + conditions in their IAM policies, see the `IAM + documentation `__. + + **JSON example:** + + :: + + { + "bindings": [ + { + "role": "roles/resourcemanager.organizationAdmin", + "members": [ + "user:mike@example.com", + "group:admins@example.com", + "domain:google.com", + "serviceAccount:my-project-id@appspot.gserviceaccount.com" + ] + }, + { + "role": "roles/resourcemanager.organizationViewer", + "members": [ + "user:eve@example.com" + ], + "condition": { + "title": "expirable access", + "description": "Does not grant access after Sep 2020", + "expression": "request.time < + timestamp('2020-10-01T00:00:00.000Z')", + } + } + ], + "etag": "BwWWja0YfJA=", + "version": 3 + } + + **YAML example:** + + :: + + bindings: + - members: + - user:mike@example.com + - group:admins@example.com + - domain:google.com + - serviceAccount:my-project-id@appspot.gserviceaccount.com + role: roles/resourcemanager.organizationAdmin + - members: + - user:eve@example.com + role: roles/resourcemanager.organizationViewer + condition: + title: expirable access + description: Does not grant access after Sep 2020 + expression: request.time < timestamp('2020-10-01T00:00:00.000Z') + etag: BwWWja0YfJA= + version: 3 + + For a description of IAM and its features, see the `IAM + documentation `__. 
+ + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseSetIamPolicy._get_http_options() + ) + + request, metadata = self._interceptor.pre_set_iam_policy(request, metadata) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseSetIamPolicy._get_transcoded_request( + http_options, request + ) + + body = _BaseDatabaseAdminRestTransport._BaseSetIamPolicy._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseSetIamPolicy._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.SetIamPolicy", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "SetIamPolicy", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._SetIamPolicy._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = policy_pb2.Policy() + pb_resp = resp + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_set_iam_policy(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_set_iam_policy_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.set_iam_policy", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "SetIamPolicy", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _TestIamPermissions( + _BaseDatabaseAdminRestTransport._BaseTestIamPermissions, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.TestIamPermissions") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: iam_policy_pb2.TestIamPermissionsRequest, + *, + retry: OptionalRetry = 
gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> iam_policy_pb2.TestIamPermissionsResponse: + r"""Call the test iam permissions method over HTTP. + + Args: + request (~.iam_policy_pb2.TestIamPermissionsRequest): + The request object. Request message for ``TestIamPermissions`` method. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.iam_policy_pb2.TestIamPermissionsResponse: + Response message for ``TestIamPermissions`` method. + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseTestIamPermissions._get_http_options() + ) + + request, metadata = self._interceptor.pre_test_iam_permissions( + request, metadata + ) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseTestIamPermissions._get_transcoded_request( + http_options, request + ) + + body = _BaseDatabaseAdminRestTransport._BaseTestIamPermissions._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseTestIamPermissions._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.TestIamPermissions", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "TestIamPermissions", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._TestIamPermissions._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
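+            # The DEBUG logging below is double-gated: the module-level
+            # CLIENT_LOGGING_SUPPORTED flag and the logger's effective level
+            # are both checked first, so payloads are only serialized with
+            # json_format.MessageToJson when debug logging is actually enabled.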
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = iam_policy_pb2.TestIamPermissionsResponse() + pb_resp = resp + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_test_iam_permissions(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_test_iam_permissions_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.test_iam_permissions", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "TestIamPermissions", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _UpdateBackup( + _BaseDatabaseAdminRestTransport._BaseUpdateBackup, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.UpdateBackup") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: gsad_backup.UpdateBackupRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> gsad_backup.Backup: + r"""Call the update backup method over HTTP. + + Args: + request (~.gsad_backup.UpdateBackupRequest): + The request object. The request for + [UpdateBackup][google.spanner.admin.database.v1.DatabaseAdmin.UpdateBackup]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.gsad_backup.Backup: + A backup of a Cloud Spanner database. 
+ """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseUpdateBackup._get_http_options() + ) + + request, metadata = self._interceptor.pre_update_backup(request, metadata) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseUpdateBackup._get_transcoded_request( + http_options, request + ) + + body = _BaseDatabaseAdminRestTransport._BaseUpdateBackup._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseUpdateBackup._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.UpdateBackup", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "UpdateBackup", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._UpdateBackup._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = gsad_backup.Backup() + pb_resp = gsad_backup.Backup.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_update_backup(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_update_backup_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = gsad_backup.Backup.to_json(response) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.update_backup", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "UpdateBackup", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _UpdateBackupSchedule( + _BaseDatabaseAdminRestTransport._BaseUpdateBackupSchedule, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.UpdateBackupSchedule") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: gsad_backup_schedule.UpdateBackupScheduleRequest, + *, + 
retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> gsad_backup_schedule.BackupSchedule: + r"""Call the update backup schedule method over HTTP. + + Args: + request (~.gsad_backup_schedule.UpdateBackupScheduleRequest): + The request object. The request for + [UpdateBackupScheduleRequest][google.spanner.admin.database.v1.DatabaseAdmin.UpdateBackupSchedule]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.gsad_backup_schedule.BackupSchedule: + BackupSchedule expresses the + automated backup creation specification + for a Spanner database. Next ID: 10 + + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseUpdateBackupSchedule._get_http_options() + ) + + request, metadata = self._interceptor.pre_update_backup_schedule( + request, metadata + ) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseUpdateBackupSchedule._get_transcoded_request( + http_options, request + ) + + body = _BaseDatabaseAdminRestTransport._BaseUpdateBackupSchedule._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseUpdateBackupSchedule._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.UpdateBackupSchedule", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "UpdateBackupSchedule", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._UpdateBackupSchedule._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
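+            # (core_exceptions.from_http_response maps the status code and any
+            # JSON error payload to a typed error, e.g. 404 -> NotFound,
+            # 409 -> Conflict, 503 -> ServiceUnavailable.)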
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = gsad_backup_schedule.BackupSchedule() + pb_resp = gsad_backup_schedule.BackupSchedule.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_update_backup_schedule(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_update_backup_schedule_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = gsad_backup_schedule.BackupSchedule.to_json( + response + ) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.update_backup_schedule", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "UpdateBackupSchedule", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _UpdateDatabase( + _BaseDatabaseAdminRestTransport._BaseUpdateDatabase, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.UpdateDatabase") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: spanner_database_admin.UpdateDatabaseRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.Operation: + r"""Call the update database method over HTTP. + + Args: + request (~.spanner_database_admin.UpdateDatabaseRequest): + The request object. The request for + [UpdateDatabase][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabase]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.operations_pb2.Operation: + This resource represents a + long-running operation that is the + result of a network API call. 
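+
+            Example (an illustrative sketch only; ``transport`` and the
+            database name are assumed placeholders):
+
+            .. code-block:: python
+
+                from google.cloud.spanner_admin_database_v1.types import spanner_database_admin
+                from google.protobuf import field_mask_pb2
+
+                request = spanner_database_admin.UpdateDatabaseRequest(
+                    database=spanner_database_admin.Database(
+                        name="projects/p/instances/i/databases/d",  # hypothetical
+                        enable_drop_protection=True,
+                    ),
+                    update_mask=field_mask_pb2.FieldMask(paths=["enable_drop_protection"]),
+                )
+                # The returned operations_pb2.Operation can be polled via
+                # transport.get_operation until done is True.
+                operation = transport.update_database(request)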
+ + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseUpdateDatabase._get_http_options() + ) + + request, metadata = self._interceptor.pre_update_database(request, metadata) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseUpdateDatabase._get_transcoded_request( + http_options, request + ) + + body = _BaseDatabaseAdminRestTransport._BaseUpdateDatabase._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseUpdateDatabase._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.UpdateDatabase", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "UpdateDatabase", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._UpdateDatabase._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = operations_pb2.Operation() + json_format.Parse(response.content, resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_update_database(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_update_database_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.update_database", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "UpdateDatabase", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _UpdateDatabaseDdl( + _BaseDatabaseAdminRestTransport._BaseUpdateDatabaseDdl, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.UpdateDatabaseDdl") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: spanner_database_admin.UpdateDatabaseDdlRequest, + *, + retry: OptionalRetry = 
gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.Operation: + r"""Call the update database ddl method over HTTP. + + Args: + request (~.spanner_database_admin.UpdateDatabaseDdlRequest): + The request object. Enqueues the given DDL statements to be applied, in + order but not necessarily all at once, to the database + schema at some point (or points) in the future. The + server checks that the statements are executable + (syntactically valid, name tables that exist, etc.) + before enqueueing them, but they may still fail upon + later execution (e.g., if a statement from another batch + of statements is applied first and it conflicts in some + way, or if there is some data-related problem like a + ``NULL`` value in a column to which ``NOT NULL`` would + be added). If a statement fails, all subsequent + statements in the batch are automatically cancelled. + + Each batch of statements is assigned a name which can be + used with the + [Operations][google.longrunning.Operations] API to + monitor progress. See the + [operation_id][google.spanner.admin.database.v1.UpdateDatabaseDdlRequest.operation_id] + field for more details. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.operations_pb2.Operation: + This resource represents a + long-running operation that is the + result of a network API call. + + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseUpdateDatabaseDdl._get_http_options() + ) + + request, metadata = self._interceptor.pre_update_database_ddl( + request, metadata + ) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseUpdateDatabaseDdl._get_transcoded_request( + http_options, request + ) + + body = _BaseDatabaseAdminRestTransport._BaseUpdateDatabaseDdl._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseUpdateDatabaseDdl._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.UpdateDatabaseDdl", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "UpdateDatabaseDdl", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._UpdateDatabaseDdl._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = operations_pb2.Operation() + json_format.Parse(response.content, resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_update_database_ddl(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_update_database_ddl_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminClient.update_database_ddl", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "UpdateDatabaseDdl", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + @property + def add_split_points( + self, + ) -> Callable[ + [spanner_database_admin.AddSplitPointsRequest], + spanner_database_admin.AddSplitPointsResponse, + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._AddSplitPoints(self._session, self._host, self._interceptor) # type: ignore + + @property + def copy_backup( + self, + ) -> Callable[[backup.CopyBackupRequest], operations_pb2.Operation]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._CopyBackup(self._session, self._host, self._interceptor) # type: ignore + + @property + def create_backup( + self, + ) -> Callable[[gsad_backup.CreateBackupRequest], operations_pb2.Operation]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._CreateBackup(self._session, self._host, self._interceptor) # type: ignore + + @property + def create_backup_schedule( + self, + ) -> Callable[ + [gsad_backup_schedule.CreateBackupScheduleRequest], + gsad_backup_schedule.BackupSchedule, + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._CreateBackupSchedule(self._session, self._host, self._interceptor) # type: ignore + + @property + def create_database( + self, + ) -> Callable[ + [spanner_database_admin.CreateDatabaseRequest], operations_pb2.Operation + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._CreateDatabase(self._session, self._host, self._interceptor) # type: ignore + + @property + def delete_backup(self) -> Callable[[backup.DeleteBackupRequest], empty_pb2.Empty]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._DeleteBackup(self._session, self._host, self._interceptor) # type: ignore + + @property + def delete_backup_schedule( + self, + ) -> Callable[[backup_schedule.DeleteBackupScheduleRequest], empty_pb2.Empty]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. 
+ # In C++ this would require a dynamic_cast + return self._DeleteBackupSchedule(self._session, self._host, self._interceptor) # type: ignore + + @property + def drop_database( + self, + ) -> Callable[[spanner_database_admin.DropDatabaseRequest], empty_pb2.Empty]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._DropDatabase(self._session, self._host, self._interceptor) # type: ignore + + @property + def get_backup(self) -> Callable[[backup.GetBackupRequest], backup.Backup]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._GetBackup(self._session, self._host, self._interceptor) # type: ignore + + @property + def get_backup_schedule( + self, + ) -> Callable[ + [backup_schedule.GetBackupScheduleRequest], backup_schedule.BackupSchedule + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._GetBackupSchedule(self._session, self._host, self._interceptor) # type: ignore + + @property + def get_database( + self, + ) -> Callable[ + [spanner_database_admin.GetDatabaseRequest], spanner_database_admin.Database + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._GetDatabase(self._session, self._host, self._interceptor) # type: ignore + + @property + def get_database_ddl( + self, + ) -> Callable[ + [spanner_database_admin.GetDatabaseDdlRequest], + spanner_database_admin.GetDatabaseDdlResponse, + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._GetDatabaseDdl(self._session, self._host, self._interceptor) # type: ignore + + @property + def get_iam_policy( + self, + ) -> Callable[[iam_policy_pb2.GetIamPolicyRequest], policy_pb2.Policy]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._GetIamPolicy(self._session, self._host, self._interceptor) # type: ignore + + @property + def internal_update_graph_operation( + self, + ) -> Callable[ + [spanner_database_admin.InternalUpdateGraphOperationRequest], + spanner_database_admin.InternalUpdateGraphOperationResponse, + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._InternalUpdateGraphOperation(self._session, self._host, self._interceptor) # type: ignore + + @property + def list_backup_operations( + self, + ) -> Callable[ + [backup.ListBackupOperationsRequest], backup.ListBackupOperationsResponse + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._ListBackupOperations(self._session, self._host, self._interceptor) # type: ignore + + @property + def list_backups( + self, + ) -> Callable[[backup.ListBackupsRequest], backup.ListBackupsResponse]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. 
+ # In C++ this would require a dynamic_cast + return self._ListBackups(self._session, self._host, self._interceptor) # type: ignore + + @property + def list_backup_schedules( + self, + ) -> Callable[ + [backup_schedule.ListBackupSchedulesRequest], + backup_schedule.ListBackupSchedulesResponse, + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._ListBackupSchedules(self._session, self._host, self._interceptor) # type: ignore + + @property + def list_database_operations( + self, + ) -> Callable[ + [spanner_database_admin.ListDatabaseOperationsRequest], + spanner_database_admin.ListDatabaseOperationsResponse, + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._ListDatabaseOperations(self._session, self._host, self._interceptor) # type: ignore + + @property + def list_database_roles( + self, + ) -> Callable[ + [spanner_database_admin.ListDatabaseRolesRequest], + spanner_database_admin.ListDatabaseRolesResponse, + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._ListDatabaseRoles(self._session, self._host, self._interceptor) # type: ignore + + @property + def list_databases( + self, + ) -> Callable[ + [spanner_database_admin.ListDatabasesRequest], + spanner_database_admin.ListDatabasesResponse, + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._ListDatabases(self._session, self._host, self._interceptor) # type: ignore + + @property + def restore_database( + self, + ) -> Callable[ + [spanner_database_admin.RestoreDatabaseRequest], operations_pb2.Operation + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._RestoreDatabase(self._session, self._host, self._interceptor) # type: ignore + + @property + def set_iam_policy( + self, + ) -> Callable[[iam_policy_pb2.SetIamPolicyRequest], policy_pb2.Policy]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._SetIamPolicy(self._session, self._host, self._interceptor) # type: ignore + + @property + def test_iam_permissions( + self, + ) -> Callable[ + [iam_policy_pb2.TestIamPermissionsRequest], + iam_policy_pb2.TestIamPermissionsResponse, + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._TestIamPermissions(self._session, self._host, self._interceptor) # type: ignore + + @property + def update_backup( + self, + ) -> Callable[[gsad_backup.UpdateBackupRequest], gsad_backup.Backup]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._UpdateBackup(self._session, self._host, self._interceptor) # type: ignore + + @property + def update_backup_schedule( + self, + ) -> Callable[ + [gsad_backup_schedule.UpdateBackupScheduleRequest], + gsad_backup_schedule.BackupSchedule, + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. 
+ # In C++ this would require a dynamic_cast + return self._UpdateBackupSchedule(self._session, self._host, self._interceptor) # type: ignore + + @property + def update_database( + self, + ) -> Callable[ + [spanner_database_admin.UpdateDatabaseRequest], operations_pb2.Operation + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._UpdateDatabase(self._session, self._host, self._interceptor) # type: ignore + + @property + def update_database_ddl( + self, + ) -> Callable[ + [spanner_database_admin.UpdateDatabaseDdlRequest], operations_pb2.Operation + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._UpdateDatabaseDdl(self._session, self._host, self._interceptor) # type: ignore + + @property + def cancel_operation(self): + return self._CancelOperation(self._session, self._host, self._interceptor) # type: ignore + + class _CancelOperation( + _BaseDatabaseAdminRestTransport._BaseCancelOperation, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.CancelOperation") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: operations_pb2.CancelOperationRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Call the cancel operation method over HTTP. + + Args: + request (operations_pb2.CancelOperationRequest): + The request object for CancelOperation method. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
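+
+            Example (an illustrative sketch only; the operation name is a
+            hypothetical placeholder):
+
+            .. code-block:: python
+
+                from google.longrunning import operations_pb2
+
+                request = operations_pb2.CancelOperationRequest(
+                    name="projects/p/instances/i/databases/d/operations/op123",
+                )
+                # Returns None on success; HTTP errors surface as
+                # core_exceptions.GoogleAPICallError subclasses.
+                transport.cancel_operation(request)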
+ """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseCancelOperation._get_http_options() + ) + + request, metadata = self._interceptor.pre_cancel_operation( + request, metadata + ) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseCancelOperation._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseCancelOperation._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.CancelOperation", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "CancelOperation", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._CancelOperation._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + return self._interceptor.post_cancel_operation(None) + + @property + def delete_operation(self): + return self._DeleteOperation(self._session, self._host, self._interceptor) # type: ignore + + class _DeleteOperation( + _BaseDatabaseAdminRestTransport._BaseDeleteOperation, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.DeleteOperation") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: operations_pb2.DeleteOperationRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Call the delete operation method over HTTP. + + Args: + request (operations_pb2.DeleteOperationRequest): + The request object for DeleteOperation method. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseDeleteOperation._get_http_options() + ) + + request, metadata = self._interceptor.pre_delete_operation( + request, metadata + ) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseDeleteOperation._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseDeleteOperation._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.DeleteOperation", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "DeleteOperation", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._DeleteOperation._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + return self._interceptor.post_delete_operation(None) + + @property + def get_operation(self): + return self._GetOperation(self._session, self._host, self._interceptor) # type: ignore + + class _GetOperation( + _BaseDatabaseAdminRestTransport._BaseGetOperation, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.GetOperation") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: operations_pb2.GetOperationRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.Operation: + r"""Call the get operation method over HTTP. + + Args: + request (operations_pb2.GetOperationRequest): + The request object for GetOperation method. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + operations_pb2.Operation: Response from GetOperation method. 
+ """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseGetOperation._get_http_options() + ) + + request, metadata = self._interceptor.pre_get_operation(request, metadata) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseGetOperation._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseGetOperation._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.GetOperation", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "GetOperation", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._GetOperation._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + content = response.content.decode("utf-8") + resp = operations_pb2.Operation() + resp = json_format.Parse(content, resp) + resp = self._interceptor.post_get_operation(resp) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminAsyncClient.GetOperation", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "GetOperation", + "httpResponse": http_response, + "metadata": http_response["headers"], + }, + ) + return resp + + @property + def list_operations(self): + return self._ListOperations(self._session, self._host, self._interceptor) # type: ignore + + class _ListOperations( + _BaseDatabaseAdminRestTransport._BaseListOperations, DatabaseAdminRestStub + ): + def __hash__(self): + return hash("DatabaseAdminRestTransport.ListOperations") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: operations_pb2.ListOperationsRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.ListOperationsResponse: + r"""Call the list operations method over HTTP. 
+ + Args: + request (operations_pb2.ListOperationsRequest): + The request object for ListOperations method. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + operations_pb2.ListOperationsResponse: Response from ListOperations method. + """ + + http_options = ( + _BaseDatabaseAdminRestTransport._BaseListOperations._get_http_options() + ) + + request, metadata = self._interceptor.pre_list_operations(request, metadata) + transcoded_request = _BaseDatabaseAdminRestTransport._BaseListOperations._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseDatabaseAdminRestTransport._BaseListOperations._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.database_v1.DatabaseAdminClient.ListOperations", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "ListOperations", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = DatabaseAdminRestTransport._ListOperations._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + content = response.content.decode("utf-8") + resp = operations_pb2.ListOperationsResponse() + resp = json_format.Parse(content, resp) + resp = self._interceptor.post_list_operations(resp) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.database_v1.DatabaseAdminAsyncClient.ListOperations", + extra={ + "serviceName": "google.spanner.admin.database.v1.DatabaseAdmin", + "rpcName": "ListOperations", + "httpResponse": http_response, + "metadata": http_response["headers"], + }, + ) + return resp + + @property + def kind(self) -> str: + return "rest" + + def close(self): + self._session.close() + + +__all__ = ("DatabaseAdminRestTransport",) diff --git a/google/cloud/spanner_admin_database_v1/services/database_admin/transports/rest_base.py b/google/cloud/spanner_admin_database_v1/services/database_admin/transports/rest_base.py new file mode 100644 index 0000000000..d0ee0a2cbb --- /dev/null +++ b/google/cloud/spanner_admin_database_v1/services/database_admin/transports/rest_base.py @@ -0,0 +1,1654 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +import json # type: ignore +from google.api_core import path_template +from google.api_core import gapic_v1 + +from google.protobuf import json_format +from .base import DatabaseAdminTransport, DEFAULT_CLIENT_INFO + +import re +from typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Union + + +from google.cloud.spanner_admin_database_v1.types import backup +from google.cloud.spanner_admin_database_v1.types import backup as gsad_backup +from google.cloud.spanner_admin_database_v1.types import backup_schedule +from google.cloud.spanner_admin_database_v1.types import ( + backup_schedule as gsad_backup_schedule, +) +from google.cloud.spanner_admin_database_v1.types import spanner_database_admin +from google.iam.v1 import iam_policy_pb2 # type: ignore +from google.iam.v1 import policy_pb2 # type: ignore +from google.protobuf import empty_pb2 # type: ignore +from google.longrunning import operations_pb2 # type: ignore + + +class _BaseDatabaseAdminRestTransport(DatabaseAdminTransport): + """Base REST backend transport for DatabaseAdmin. + + Note: This class is not meant to be used directly. Use its sync and + async sub-classes instead. + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. 
+
+    It sends JSON representations of protocol buffers over HTTP/1.1
+    """
+
+    def __init__(
+        self,
+        *,
+        host: str = "spanner.googleapis.com",
+        credentials: Optional[Any] = None,
+        client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO,
+        always_use_jwt_access: Optional[bool] = False,
+        url_scheme: str = "https",
+        api_audience: Optional[str] = None,
+    ) -> None:
+        """Instantiate the transport.
+        Args:
+            host (Optional[str]):
+                The hostname to connect to (default: 'spanner.googleapis.com').
+            credentials (Optional[Any]): The
+                authorization credentials to attach to requests. These
+                credentials identify the application to the service; if none
+                are specified, the client will attempt to ascertain the
+                credentials from the environment.
+            client_info (google.api_core.gapic_v1.client_info.ClientInfo):
+                The client info used to send a user-agent string along with
+                API requests. If ``None``, then default info will be used.
+                Generally, you only need to set this if you are developing
+                your own client library.
+            always_use_jwt_access (Optional[bool]): Whether self signed JWT should
+                be used for service account credentials.
+            url_scheme: the protocol scheme for the API endpoint. Normally
+                "https", but for testing or local servers,
+                "http" can be specified.
+        """
+        # Run the base constructor
+        maybe_url_match = re.match("^(?P<scheme>http(?:s)?://)?(?P<host>.*)$", host)
+        if maybe_url_match is None:
+            raise ValueError(
+                f"Unexpected hostname structure: {host}"
+            )  # pragma: NO COVER
+
+        url_match_items = maybe_url_match.groupdict()
+
+        host = f"{url_scheme}://{host}" if not url_match_items["scheme"] else host
+
+        super().__init__(
+            host=host,
+            credentials=credentials,
+            client_info=client_info,
+            always_use_jwt_access=always_use_jwt_access,
+            api_audience=api_audience,
+        )
+
+    class _BaseAddSplitPoints:
+        def __hash__(self):  # pragma: NO COVER
+            return NotImplementedError("__hash__ must be implemented.")
+
+        __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {}
+
+        @classmethod
+        def _get_unset_required_fields(cls, message_dict):
+            return {
+                k: v
+                for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items()
+                if k not in message_dict
+            }
+
+        @staticmethod
+        def _get_http_options():
+            http_options: List[Dict[str, str]] = [
+                {
+                    "method": "post",
+                    "uri": "/v1/{database=projects/*/instances/*/databases/*}:addSplitPoints",
+                    "body": "*",
+                },
+            ]
+            return http_options
+
+        @staticmethod
+        def _get_transcoded_request(http_options, request):
+            pb_request = spanner_database_admin.AddSplitPointsRequest.pb(request)
+            transcoded_request = path_template.transcode(http_options, pb_request)
+            return transcoded_request
+
+        @staticmethod
+        def _get_request_body_json(transcoded_request):
+            # Jsonify the request body
+
+            body = json_format.MessageToJson(
+                transcoded_request["body"], use_integers_for_enums=True
+            )
+            return body
+
+        @staticmethod
+        def _get_query_params_json(transcoded_request):
+            query_params = json.loads(
+                json_format.MessageToJson(
+                    transcoded_request["query_params"],
+                    use_integers_for_enums=True,
+                )
+            )
+            query_params.update(
+                _BaseDatabaseAdminRestTransport._BaseAddSplitPoints._get_unset_required_fields(
+                    query_params
+                )
+            )
+
+            query_params["$alt"] = "json;enum-encoding=int"
+            return query_params
+
+    class _BaseCopyBackup:
+        def __hash__(self):  # pragma: NO COVER
+            return NotImplementedError("__hash__ must be implemented.")
+
+        __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {}
+
+        @classmethod
+        def _get_unset_required_fields(cls, message_dict):
+            return {
+                k: v
+                for k, v
in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{parent=projects/*/instances/*}/backups:copy", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = backup.CopyBackupRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseCopyBackup._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseCreateBackup: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = { + "backupId": "", + } + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{parent=projects/*/instances/*}/backups", + "body": "backup", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = gsad_backup.CreateBackupRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseCreateBackup._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseCreateBackupSchedule: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = { + "backupScheduleId": "", + } + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{parent=projects/*/instances/*/databases/*}/backupSchedules", + "body": "backup_schedule", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = gsad_backup_schedule.CreateBackupScheduleRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + 
# Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseCreateBackupSchedule._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseCreateDatabase: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{parent=projects/*/instances/*}/databases", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_database_admin.CreateDatabaseRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseCreateDatabase._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseDeleteBackup: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "delete", + "uri": "/v1/{name=projects/*/instances/*/backups/*}", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = backup.DeleteBackupRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseDeleteBackup._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseDeleteBackupSchedule: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in 
message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "delete", + "uri": "/v1/{name=projects/*/instances/*/databases/*/backupSchedules/*}", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = backup_schedule.DeleteBackupScheduleRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseDeleteBackupSchedule._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseDropDatabase: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "delete", + "uri": "/v1/{database=projects/*/instances/*/databases/*}", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_database_admin.DropDatabaseRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseDropDatabase._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseGetBackup: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/backups/*}", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = backup.GetBackupRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseGetBackup._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseGetBackupSchedule: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, 
message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/databases/*/backupSchedules/*}", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = backup_schedule.GetBackupScheduleRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseGetBackupSchedule._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseGetDatabase: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/databases/*}", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_database_admin.GetDatabaseRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseGetDatabase._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseGetDatabaseDdl: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{database=projects/*/instances/*/databases/*}/ddl", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_database_admin.GetDatabaseDdlRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseGetDatabaseDdl._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseGetIamPolicy: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") 
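+        # This class-level mapping lists required request fields with their
+        # default values; _get_unset_required_fields re-adds any of them that
+        # the transcoded request left out of the query string.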
+ + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{resource=projects/*/instances/*/databases/*}:getIamPolicy", + "body": "*", + }, + { + "method": "post", + "uri": "/v1/{resource=projects/*/instances/*/backups/*}:getIamPolicy", + "body": "*", + }, + { + "method": "post", + "uri": "/v1/{resource=projects/*/instances/*/databases/*/backupSchedules/*}:getIamPolicy", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = request + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseGetIamPolicy._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseInternalUpdateGraphOperation: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + class _BaseListBackupOperations: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{parent=projects/*/instances/*}/backupOperations", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = backup.ListBackupOperationsRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseListBackupOperations._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseListBackups: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{parent=projects/*/instances/*}/backups", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request 
= backup.ListBackupsRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseListBackups._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseListBackupSchedules: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{parent=projects/*/instances/*/databases/*}/backupSchedules", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = backup_schedule.ListBackupSchedulesRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseListBackupSchedules._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseListDatabaseOperations: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{parent=projects/*/instances/*}/databaseOperations", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_database_admin.ListDatabaseOperationsRequest.pb( + request + ) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseListDatabaseOperations._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseListDatabaseRoles: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": 
"/v1/{parent=projects/*/instances/*/databases/*}/databaseRoles", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_database_admin.ListDatabaseRolesRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseListDatabaseRoles._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseListDatabases: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{parent=projects/*/instances/*}/databases", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_database_admin.ListDatabasesRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseListDatabases._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseRestoreDatabase: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{parent=projects/*/instances/*}/databases:restore", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_database_admin.RestoreDatabaseRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseRestoreDatabase._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseSetIamPolicy: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") 
+ + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{resource=projects/*/instances/*/databases/*}:setIamPolicy", + "body": "*", + }, + { + "method": "post", + "uri": "/v1/{resource=projects/*/instances/*/backups/*}:setIamPolicy", + "body": "*", + }, + { + "method": "post", + "uri": "/v1/{resource=projects/*/instances/*/databases/*/backupSchedules/*}:setIamPolicy", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = request + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseSetIamPolicy._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseTestIamPermissions: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{resource=projects/*/instances/*/databases/*}:testIamPermissions", + "body": "*", + }, + { + "method": "post", + "uri": "/v1/{resource=projects/*/instances/*/backups/*}:testIamPermissions", + "body": "*", + }, + { + "method": "post", + "uri": "/v1/{resource=projects/*/instances/*/databases/*/backupSchedules/*}:testIamPermissions", + "body": "*", + }, + { + "method": "post", + "uri": "/v1/{resource=projects/*/instances/*/databases/*/databaseRoles/*}:testIamPermissions", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = request + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseTestIamPermissions._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseUpdateBackup: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = { 
+ "updateMask": {}, + } + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "patch", + "uri": "/v1/{backup.name=projects/*/instances/*/backups/*}", + "body": "backup", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = gsad_backup.UpdateBackupRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseUpdateBackup._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseUpdateBackupSchedule: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = { + "updateMask": {}, + } + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "patch", + "uri": "/v1/{backup_schedule.name=projects/*/instances/*/databases/*/backupSchedules/*}", + "body": "backup_schedule", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = gsad_backup_schedule.UpdateBackupScheduleRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseUpdateBackupSchedule._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseUpdateDatabase: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = { + "updateMask": {}, + } + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "patch", + "uri": "/v1/{database.name=projects/*/instances/*/databases/*}", + "body": "database", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = 
spanner_database_admin.UpdateDatabaseRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseUpdateDatabase._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseUpdateDatabaseDdl: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "patch", + "uri": "/v1/{database=projects/*/instances/*/databases/*}/ddl", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_database_admin.UpdateDatabaseDdlRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseDatabaseAdminRestTransport._BaseUpdateDatabaseDdl._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseCancelOperation: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{name=projects/*/instances/*/databases/*/operations/*}:cancel", + }, + { + "method": "post", + "uri": "/v1/{name=projects/*/instances/*/operations/*}:cancel", + }, + { + "method": "post", + "uri": "/v1/{name=projects/*/instances/*/backups/*/operations/*}:cancel", + }, + { + "method": "post", + "uri": "/v1/{name=projects/*/instanceConfigs/*/operations/*}:cancel", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + request_kwargs = json_format.MessageToDict(request) + transcoded_request = path_template.transcode(http_options, **request_kwargs) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads(json.dumps(transcoded_request["query_params"])) + return query_params + + class _BaseDeleteOperation: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "delete", + "uri": 
"/v1/{name=projects/*/instances/*/databases/*/operations/*}", + }, + { + "method": "delete", + "uri": "/v1/{name=projects/*/instances/*/operations/*}", + }, + { + "method": "delete", + "uri": "/v1/{name=projects/*/instances/*/backups/*/operations/*}", + }, + { + "method": "delete", + "uri": "/v1/{name=projects/*/instanceConfigs/*/operations/*}", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + request_kwargs = json_format.MessageToDict(request) + transcoded_request = path_template.transcode(http_options, **request_kwargs) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads(json.dumps(transcoded_request["query_params"])) + return query_params + + class _BaseGetOperation: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/databases/*/operations/*}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/operations/*}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/backups/*/operations/*}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instanceConfigs/*/operations/*}", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + request_kwargs = json_format.MessageToDict(request) + transcoded_request = path_template.transcode(http_options, **request_kwargs) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads(json.dumps(transcoded_request["query_params"])) + return query_params + + class _BaseListOperations: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/databases/*/operations}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/operations}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/backups/*/operations}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instanceConfigs/*/operations}", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + request_kwargs = json_format.MessageToDict(request) + transcoded_request = path_template.transcode(http_options, **request_kwargs) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads(json.dumps(transcoded_request["query_params"])) + return query_params + + +__all__ = ("_BaseDatabaseAdminRestTransport",) diff --git a/google/cloud/spanner_admin_database_v1/types.py b/google/cloud/spanner_admin_database_v1/types.py deleted file mode 100644 index 43103a0b6d..0000000000 --- a/google/cloud/spanner_admin_database_v1/types.py +++ /dev/null @@ -1,63 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2018 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from __future__ import absolute_import -import sys - -from google.api import http_pb2 -from google.iam.v1 import iam_policy_pb2 -from google.iam.v1 import policy_pb2 -from google.iam.v1.logging import audit_data_pb2 -from google.longrunning import operations_pb2 -from google.protobuf import any_pb2 -from google.protobuf import descriptor_pb2 -from google.protobuf import empty_pb2 -from google.protobuf import timestamp_pb2 -from google.rpc import status_pb2 - -from google.api_core.protobuf_helpers import get_messages -from google.cloud.spanner_admin_database_v1.proto import spanner_database_admin_pb2 - - -_shared_modules = [ - http_pb2, - iam_policy_pb2, - policy_pb2, - audit_data_pb2, - operations_pb2, - any_pb2, - descriptor_pb2, - empty_pb2, - timestamp_pb2, - status_pb2, -] - -_local_modules = [spanner_database_admin_pb2] - -names = [] - -for module in _shared_modules: - for name, message in get_messages(module).items(): - setattr(sys.modules[__name__], name, message) - names.append(name) - -for module in _local_modules: - for name, message in get_messages(module).items(): - message.__module__ = "google.cloud.spanner_admin_database_v1.types" - setattr(sys.modules[__name__], name, message) - names.append(name) - -__all__ = tuple(sorted(names)) diff --git a/google/cloud/spanner_admin_database_v1/types/__init__.py b/google/cloud/spanner_admin_database_v1/types/__init__.py new file mode 100644 index 0000000000..ca79ddec90 --- /dev/null +++ b/google/cloud/spanner_admin_database_v1/types/__init__.py @@ -0,0 +1,148 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
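The deleted ``types.py`` above was a runtime shim that re-exported raw ``*_pb2`` messages via ``get_messages``; the new ``types`` package whose imports follow replaces it with explicit proto-plus re-exports. Both the package root and its submodules expose the same classes; a quick sketch, with a hypothetical backup name:

.. code-block:: python

    from google.cloud.spanner_admin_database_v1 import types

    # Messages are re-exported at the package root ...
    req = types.GetBackupRequest(
        name="projects/my-project/instances/my-instance/backups/my-backup"  # hypothetical
    )

    # ... and are the same objects as the submodule definitions.
    from google.cloud.spanner_admin_database_v1.types import backup
    assert types.GetBackupRequest is backup.GetBackupRequest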
+# +from .backup import ( + Backup, + BackupInfo, + BackupInstancePartition, + CopyBackupEncryptionConfig, + CopyBackupMetadata, + CopyBackupRequest, + CreateBackupEncryptionConfig, + CreateBackupMetadata, + CreateBackupRequest, + DeleteBackupRequest, + FullBackupSpec, + GetBackupRequest, + IncrementalBackupSpec, + ListBackupOperationsRequest, + ListBackupOperationsResponse, + ListBackupsRequest, + ListBackupsResponse, + UpdateBackupRequest, +) +from .backup_schedule import ( + BackupSchedule, + BackupScheduleSpec, + CreateBackupScheduleRequest, + CrontabSpec, + DeleteBackupScheduleRequest, + GetBackupScheduleRequest, + ListBackupSchedulesRequest, + ListBackupSchedulesResponse, + UpdateBackupScheduleRequest, +) +from .common import ( + EncryptionConfig, + EncryptionInfo, + OperationProgress, + DatabaseDialect, +) +from .spanner_database_admin import ( + AddSplitPointsRequest, + AddSplitPointsResponse, + CreateDatabaseMetadata, + CreateDatabaseRequest, + Database, + DatabaseRole, + DdlStatementActionInfo, + DropDatabaseRequest, + GetDatabaseDdlRequest, + GetDatabaseDdlResponse, + GetDatabaseRequest, + InternalUpdateGraphOperationRequest, + InternalUpdateGraphOperationResponse, + ListDatabaseOperationsRequest, + ListDatabaseOperationsResponse, + ListDatabaseRolesRequest, + ListDatabaseRolesResponse, + ListDatabasesRequest, + ListDatabasesResponse, + OptimizeRestoredDatabaseMetadata, + RestoreDatabaseEncryptionConfig, + RestoreDatabaseMetadata, + RestoreDatabaseRequest, + RestoreInfo, + SplitPoints, + UpdateDatabaseDdlMetadata, + UpdateDatabaseDdlRequest, + UpdateDatabaseMetadata, + UpdateDatabaseRequest, + RestoreSourceType, +) + +__all__ = ( + "Backup", + "BackupInfo", + "BackupInstancePartition", + "CopyBackupEncryptionConfig", + "CopyBackupMetadata", + "CopyBackupRequest", + "CreateBackupEncryptionConfig", + "CreateBackupMetadata", + "CreateBackupRequest", + "DeleteBackupRequest", + "FullBackupSpec", + "GetBackupRequest", + "IncrementalBackupSpec", + "ListBackupOperationsRequest", + "ListBackupOperationsResponse", + "ListBackupsRequest", + "ListBackupsResponse", + "UpdateBackupRequest", + "BackupSchedule", + "BackupScheduleSpec", + "CreateBackupScheduleRequest", + "CrontabSpec", + "DeleteBackupScheduleRequest", + "GetBackupScheduleRequest", + "ListBackupSchedulesRequest", + "ListBackupSchedulesResponse", + "UpdateBackupScheduleRequest", + "EncryptionConfig", + "EncryptionInfo", + "OperationProgress", + "DatabaseDialect", + "AddSplitPointsRequest", + "AddSplitPointsResponse", + "CreateDatabaseMetadata", + "CreateDatabaseRequest", + "Database", + "DatabaseRole", + "DdlStatementActionInfo", + "DropDatabaseRequest", + "GetDatabaseDdlRequest", + "GetDatabaseDdlResponse", + "GetDatabaseRequest", + "InternalUpdateGraphOperationRequest", + "InternalUpdateGraphOperationResponse", + "ListDatabaseOperationsRequest", + "ListDatabaseOperationsResponse", + "ListDatabaseRolesRequest", + "ListDatabaseRolesResponse", + "ListDatabasesRequest", + "ListDatabasesResponse", + "OptimizeRestoredDatabaseMetadata", + "RestoreDatabaseEncryptionConfig", + "RestoreDatabaseMetadata", + "RestoreDatabaseRequest", + "RestoreInfo", + "SplitPoints", + "UpdateDatabaseDdlMetadata", + "UpdateDatabaseDdlRequest", + "UpdateDatabaseMetadata", + "UpdateDatabaseRequest", + "RestoreSourceType", +) diff --git a/google/cloud/spanner_admin_database_v1/types/backup.py b/google/cloud/spanner_admin_database_v1/types/backup.py new file mode 100644 index 0000000000..da236fb4ff --- /dev/null +++ 
b/google/cloud/spanner_admin_database_v1/types/backup.py @@ -0,0 +1,1104 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +from __future__ import annotations + +from typing import MutableMapping, MutableSequence + +import proto # type: ignore + +from google.cloud.spanner_admin_database_v1.types import common +from google.longrunning import operations_pb2 # type: ignore +from google.protobuf import field_mask_pb2 # type: ignore +from google.protobuf import timestamp_pb2 # type: ignore + + +__protobuf__ = proto.module( + package="google.spanner.admin.database.v1", + manifest={ + "Backup", + "CreateBackupRequest", + "CreateBackupMetadata", + "CopyBackupRequest", + "CopyBackupMetadata", + "UpdateBackupRequest", + "GetBackupRequest", + "DeleteBackupRequest", + "ListBackupsRequest", + "ListBackupsResponse", + "ListBackupOperationsRequest", + "ListBackupOperationsResponse", + "BackupInfo", + "CreateBackupEncryptionConfig", + "CopyBackupEncryptionConfig", + "FullBackupSpec", + "IncrementalBackupSpec", + "BackupInstancePartition", + }, +) + + +class Backup(proto.Message): + r"""A backup of a Cloud Spanner database. + + Attributes: + database (str): + Required for the + [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] + operation. Name of the database from which this backup was + created. This needs to be in the same instance as the + backup. Values are of the form + ``projects//instances//databases/``. + version_time (google.protobuf.timestamp_pb2.Timestamp): + The backup will contain an externally consistent copy of the + database at the timestamp specified by ``version_time``. If + ``version_time`` is not specified, the system will set + ``version_time`` to the ``create_time`` of the backup. + expire_time (google.protobuf.timestamp_pb2.Timestamp): + Required for the + [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] + operation. The expiration time of the backup, with + microseconds granularity that must be at least 6 hours and + at most 366 days from the time the CreateBackup request is + processed. Once the ``expire_time`` has passed, the backup + is eligible to be automatically deleted by Cloud Spanner to + free the resources used by the backup. + name (str): + Output only for the + [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] + operation. Required for the + [UpdateBackup][google.spanner.admin.database.v1.DatabaseAdmin.UpdateBackup] + operation. + + A globally unique identifier for the backup which cannot be + changed. Values are of the form + ``projects//instances//backups/[a-z][a-z0-9_\-]*[a-z0-9]`` + The final segment of the name must be between 2 and 60 + characters in length. + + The backup is stored in the location(s) specified in the + instance configuration of the instance containing the + backup, identified by the prefix of the backup name of the + form ``projects//instances/``. 
+ create_time (google.protobuf.timestamp_pb2.Timestamp): + Output only. The time the + [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] + request is received. If the request does not specify + ``version_time``, the ``version_time`` of the backup will be + equivalent to the ``create_time``. + size_bytes (int): + Output only. Size of the backup in bytes. + freeable_size_bytes (int): + Output only. The number of bytes that will be + freed by deleting this backup. This value will + be zero if, for example, this backup is part of + an incremental backup chain and younger backups + in the chain require that we keep its data. For + backups not in an incremental backup chain, this + is always the size of the backup. This value may + change if backups on the same chain get created, + deleted or expired. + exclusive_size_bytes (int): + Output only. For a backup in an incremental + backup chain, this is the storage space needed + to keep the data that has changed since the + previous backup. For all other backups, this is + always the size of the backup. This value may + change if backups on the same chain get deleted + or expired. + + This field can be used to calculate the total + storage space used by a set of backups. For + example, the total space used by all backups of + a database can be computed by summing up this + field. + state (google.cloud.spanner_admin_database_v1.types.Backup.State): + Output only. The current state of the backup. + referencing_databases (MutableSequence[str]): + Output only. The names of the restored databases that + reference the backup. The database names are of the form + ``projects//instances//databases/``. + Referencing databases may exist in different instances. The + existence of any referencing database prevents the backup + from being deleted. When a restored database from the backup + enters the ``READY`` state, the reference to the backup is + removed. + encryption_info (google.cloud.spanner_admin_database_v1.types.EncryptionInfo): + Output only. The encryption information for + the backup. + encryption_information (MutableSequence[google.cloud.spanner_admin_database_v1.types.EncryptionInfo]): + Output only. The encryption information for the backup, + whether it is protected by one or more KMS keys. The + information includes all Cloud KMS key versions used to + encrypt the backup. The + ``encryption_status`` field inside of each ``EncryptionInfo`` + is not populated. At least one of the key versions must be + available for the backup to be restored. If a key version is + revoked in the middle of a restore, the restore behavior is + undefined. + database_dialect (google.cloud.spanner_admin_database_v1.types.DatabaseDialect): + Output only. The database dialect information + for the backup. + referencing_backups (MutableSequence[str]): + Output only. The names of the destination backups being + created by copying this source backup. The backup names are + of the form + ``projects//instances//backups/``. + Referencing backups may exist in different instances. The + existence of any referencing backup prevents the backup from + being deleted. When the copy operation is done (either + successfully completed or cancelled or the destination + backup is deleted), the reference to the backup is removed. + max_expire_time (google.protobuf.timestamp_pb2.Timestamp): + Output only. The max allowed expiration time of the backup, + with microseconds granularity.
A backup's expiration time + can be configured in multiple APIs: CreateBackup, + UpdateBackup, CopyBackup. When updating or copying an + existing backup, the expiration time specified must be less + than ``Backup.max_expire_time``. + backup_schedules (MutableSequence[str]): + Output only. List of backup schedule URIs + that are associated with creating this backup. + This is only applicable for scheduled backups, + and is empty for on-demand backups. + + To optimize for storage, whenever possible, + multiple schedules are collapsed together to + create one backup. In such cases, this field + captures the list of all backup schedule URIs + that are associated with creating this backup. + If collapsing is not done, then this field + captures the single backup schedule URI + associated with creating this backup. + incremental_backup_chain_id (str): + Output only. Populated only for backups in an incremental + backup chain. Backups share the same chain id if and only if + they belong to the same incremental backup chain. Use this + field to determine which backups are part of the same + incremental backup chain. The ordering of backups in the + chain can be determined by ordering the backup + ``version_time``. + oldest_version_time (google.protobuf.timestamp_pb2.Timestamp): + Output only. Data deleted at a time older + than this is guaranteed not to be retained in + order to support this backup. For a backup in an + incremental backup chain, this is the version + time of the oldest backup that exists or ever + existed in the chain. For all other backups, + this is the version time of the backup. This + field can be used to understand what data is + being retained by the backup system. + instance_partitions (MutableSequence[google.cloud.spanner_admin_database_v1.types.BackupInstancePartition]): + Output only. The instance partition(s) storing the backup. + + This is the same as the list of the instance partition(s) + that the database had footprint in at the backup's + ``version_time``. + """ + + class State(proto.Enum): + r"""Indicates the current state of the backup. + + Values: + STATE_UNSPECIFIED (0): + Not specified. + CREATING (1): + The pending backup is still being created. Operations on the + backup may fail with ``FAILED_PRECONDITION`` in this state. + READY (2): + The backup is complete and ready for use. 
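Because operations against a ``CREATING`` backup can fail with ``FAILED_PRECONDITION``, callers typically gate work on the backup reaching ``READY``. A small illustrative check, assuming default credentials and a hypothetical backup name:

.. code-block:: python

    from google.cloud.spanner_admin_database_v1 import DatabaseAdminClient
    from google.cloud.spanner_admin_database_v1.types import Backup

    client = DatabaseAdminClient()
    backup = client.get_backup(
        name="projects/my-project/instances/my-instance/backups/my-backup"  # hypothetical
    )
    if backup.state == Backup.State.READY:
        print("backup is usable:", backup.name)
    else:
        print("backup still creating; retry later")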
+ """ + STATE_UNSPECIFIED = 0 + CREATING = 1 + READY = 2 + + database: str = proto.Field( + proto.STRING, + number=2, + ) + version_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=9, + message=timestamp_pb2.Timestamp, + ) + expire_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=3, + message=timestamp_pb2.Timestamp, + ) + name: str = proto.Field( + proto.STRING, + number=1, + ) + create_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=4, + message=timestamp_pb2.Timestamp, + ) + size_bytes: int = proto.Field( + proto.INT64, + number=5, + ) + freeable_size_bytes: int = proto.Field( + proto.INT64, + number=15, + ) + exclusive_size_bytes: int = proto.Field( + proto.INT64, + number=16, + ) + state: State = proto.Field( + proto.ENUM, + number=6, + enum=State, + ) + referencing_databases: MutableSequence[str] = proto.RepeatedField( + proto.STRING, + number=7, + ) + encryption_info: common.EncryptionInfo = proto.Field( + proto.MESSAGE, + number=8, + message=common.EncryptionInfo, + ) + encryption_information: MutableSequence[ + common.EncryptionInfo + ] = proto.RepeatedField( + proto.MESSAGE, + number=13, + message=common.EncryptionInfo, + ) + database_dialect: common.DatabaseDialect = proto.Field( + proto.ENUM, + number=10, + enum=common.DatabaseDialect, + ) + referencing_backups: MutableSequence[str] = proto.RepeatedField( + proto.STRING, + number=11, + ) + max_expire_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=12, + message=timestamp_pb2.Timestamp, + ) + backup_schedules: MutableSequence[str] = proto.RepeatedField( + proto.STRING, + number=14, + ) + incremental_backup_chain_id: str = proto.Field( + proto.STRING, + number=17, + ) + oldest_version_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=18, + message=timestamp_pb2.Timestamp, + ) + instance_partitions: MutableSequence[ + "BackupInstancePartition" + ] = proto.RepeatedField( + proto.MESSAGE, + number=19, + message="BackupInstancePartition", + ) + + +class CreateBackupRequest(proto.Message): + r"""The request for + [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup]. + + Attributes: + parent (str): + Required. The name of the instance in which the backup will + be created. This must be the same instance that contains the + database the backup will be created from. The backup will be + stored in the location(s) specified in the instance + configuration of this instance. Values are of the form + ``projects//instances/``. + backup_id (str): + Required. The id of the backup to be created. The + ``backup_id`` appended to ``parent`` forms the full backup + name of the form + ``projects//instances//backups/``. + backup (google.cloud.spanner_admin_database_v1.types.Backup): + Required. The backup to create. + encryption_config (google.cloud.spanner_admin_database_v1.types.CreateBackupEncryptionConfig): + Optional. The encryption configuration used to encrypt the + backup. If this field is not specified, the backup will use + the same encryption configuration as the database by + default, namely + [encryption_type][google.spanner.admin.database.v1.CreateBackupEncryptionConfig.encryption_type] + = ``USE_DATABASE_ENCRYPTION``. 
+ """ + + parent: str = proto.Field( + proto.STRING, + number=1, + ) + backup_id: str = proto.Field( + proto.STRING, + number=2, + ) + backup: "Backup" = proto.Field( + proto.MESSAGE, + number=3, + message="Backup", + ) + encryption_config: "CreateBackupEncryptionConfig" = proto.Field( + proto.MESSAGE, + number=4, + message="CreateBackupEncryptionConfig", + ) + + +class CreateBackupMetadata(proto.Message): + r"""Metadata type for the operation returned by + [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup]. + + Attributes: + name (str): + The name of the backup being created. + database (str): + The name of the database the backup is + created from. + progress (google.cloud.spanner_admin_database_v1.types.OperationProgress): + The progress of the + [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] + operation. + cancel_time (google.protobuf.timestamp_pb2.Timestamp): + The time at which cancellation of this operation was + received. + [Operations.CancelOperation][google.longrunning.Operations.CancelOperation] + starts asynchronous cancellation on a long-running + operation. The server makes a best effort to cancel the + operation, but success is not guaranteed. Clients can use + [Operations.GetOperation][google.longrunning.Operations.GetOperation] + or other methods to check whether the cancellation succeeded + or whether the operation completed despite cancellation. On + successful cancellation, the operation is not deleted; + instead, it becomes an operation with an + [Operation.error][google.longrunning.Operation.error] value + with a [google.rpc.Status.code][google.rpc.Status.code] of + 1, corresponding to ``Code.CANCELLED``. + """ + + name: str = proto.Field( + proto.STRING, + number=1, + ) + database: str = proto.Field( + proto.STRING, + number=2, + ) + progress: common.OperationProgress = proto.Field( + proto.MESSAGE, + number=3, + message=common.OperationProgress, + ) + cancel_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=4, + message=timestamp_pb2.Timestamp, + ) + + +class CopyBackupRequest(proto.Message): + r"""The request for + [CopyBackup][google.spanner.admin.database.v1.DatabaseAdmin.CopyBackup]. + + Attributes: + parent (str): + Required. The name of the destination instance that will + contain the backup copy. Values are of the form: + ``projects//instances/``. + backup_id (str): + Required. The id of the backup copy. The ``backup_id`` + appended to ``parent`` forms the full backup_uri of the form + ``projects//instances//backups/``. + source_backup (str): + Required. The source backup to be copied. The source backup + needs to be in READY state for it to be copied. Once + CopyBackup is in progress, the source backup cannot be + deleted or cleaned up on expiration until CopyBackup is + finished. Values are of the form: + ``projects//instances//backups/``. + expire_time (google.protobuf.timestamp_pb2.Timestamp): + Required. The expiration time of the backup in microsecond + granularity. The expiration time must be at least 6 hours + and at most 366 days from the ``create_time`` of the source + backup. Once the ``expire_time`` has passed, the backup is + eligible to be automatically deleted by Cloud Spanner to + free the resources used by the backup. + encryption_config (google.cloud.spanner_admin_database_v1.types.CopyBackupEncryptionConfig): + Optional. The encryption configuration used to encrypt the + backup. 
If this field is not specified, the backup will use + the same encryption configuration as the source backup by + default, namely + [encryption_type][google.spanner.admin.database.v1.CopyBackupEncryptionConfig.encryption_type] + = ``USE_CONFIG_DEFAULT_OR_BACKUP_ENCRYPTION``. + """ + + parent: str = proto.Field( + proto.STRING, + number=1, + ) + backup_id: str = proto.Field( + proto.STRING, + number=2, + ) + source_backup: str = proto.Field( + proto.STRING, + number=3, + ) + expire_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=4, + message=timestamp_pb2.Timestamp, + ) + encryption_config: "CopyBackupEncryptionConfig" = proto.Field( + proto.MESSAGE, + number=5, + message="CopyBackupEncryptionConfig", + ) + + +class CopyBackupMetadata(proto.Message): + r"""Metadata type for the operation returned by + [CopyBackup][google.spanner.admin.database.v1.DatabaseAdmin.CopyBackup]. + + Attributes: + name (str): + The name of the backup being created through the copy + operation. Values are of the form + ``projects//instances//backups/``. + source_backup (str): + The name of the source backup that is being copied. Values + are of the form + ``projects//instances//backups/``. + progress (google.cloud.spanner_admin_database_v1.types.OperationProgress): + The progress of the + [CopyBackup][google.spanner.admin.database.v1.DatabaseAdmin.CopyBackup] + operation. + cancel_time (google.protobuf.timestamp_pb2.Timestamp): + The time at which cancellation of CopyBackup operation was + received. + [Operations.CancelOperation][google.longrunning.Operations.CancelOperation] + starts asynchronous cancellation on a long-running + operation. The server makes a best effort to cancel the + operation, but success is not guaranteed. Clients can use + [Operations.GetOperation][google.longrunning.Operations.GetOperation] + or other methods to check whether the cancellation succeeded + or whether the operation completed despite cancellation. On + successful cancellation, the operation is not deleted; + instead, it becomes an operation with an + [Operation.error][google.longrunning.Operation.error] value + with a [google.rpc.Status.code][google.rpc.Status.code] of + 1, corresponding to ``Code.CANCELLED``. + """ + + name: str = proto.Field( + proto.STRING, + number=1, + ) + source_backup: str = proto.Field( + proto.STRING, + number=2, + ) + progress: common.OperationProgress = proto.Field( + proto.MESSAGE, + number=3, + message=common.OperationProgress, + ) + cancel_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=4, + message=timestamp_pb2.Timestamp, + ) + + +class UpdateBackupRequest(proto.Message): + r"""The request for + [UpdateBackup][google.spanner.admin.database.v1.DatabaseAdmin.UpdateBackup]. + + Attributes: + backup (google.cloud.spanner_admin_database_v1.types.Backup): + Required. The backup to update. ``backup.name``, and the + fields to be updated as specified by ``update_mask`` are + required. Other fields are ignored. Update is only supported + for the following fields: + + - ``backup.expire_time``. + update_mask (google.protobuf.field_mask_pb2.FieldMask): + Required. A mask specifying which fields (e.g. + ``expire_time``) in the Backup resource should be updated. + This mask is relative to the Backup resource, not to the + request message. The field mask must always be specified; + this prevents any future fields from being erased + accidentally by clients that do not know about them. 
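Since ``backup.expire_time`` is the only mutable field, an update pairs the new value with a one-path ``update_mask``; a minimal sketch under the same assumptions as above:

.. code-block:: python

    import datetime

    from google.cloud.spanner_admin_database_v1 import DatabaseAdminClient
    from google.cloud.spanner_admin_database_v1.types import Backup
    from google.protobuf import field_mask_pb2

    client = DatabaseAdminClient()
    updated = client.update_backup(
        backup=Backup(
            name="projects/my-project/instances/my-instance/backups/my-backup",
            expire_time=datetime.datetime.now(datetime.timezone.utc)
            + datetime.timedelta(days=30),
        ),
        # Only fields named in the mask are touched; everything else is ignored.
        update_mask=field_mask_pb2.FieldMask(paths=["expire_time"]),
    )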
+ """ + + backup: "Backup" = proto.Field( + proto.MESSAGE, + number=1, + message="Backup", + ) + update_mask: field_mask_pb2.FieldMask = proto.Field( + proto.MESSAGE, + number=2, + message=field_mask_pb2.FieldMask, + ) + + +class GetBackupRequest(proto.Message): + r"""The request for + [GetBackup][google.spanner.admin.database.v1.DatabaseAdmin.GetBackup]. + + Attributes: + name (str): + Required. Name of the backup. Values are of the form + ``projects//instances//backups/``. + """ + + name: str = proto.Field( + proto.STRING, + number=1, + ) + + +class DeleteBackupRequest(proto.Message): + r"""The request for + [DeleteBackup][google.spanner.admin.database.v1.DatabaseAdmin.DeleteBackup]. + + Attributes: + name (str): + Required. Name of the backup to delete. Values are of the + form + ``projects//instances//backups/``. + """ + + name: str = proto.Field( + proto.STRING, + number=1, + ) + + +class ListBackupsRequest(proto.Message): + r"""The request for + [ListBackups][google.spanner.admin.database.v1.DatabaseAdmin.ListBackups]. + + Attributes: + parent (str): + Required. The instance to list backups from. Values are of + the form ``projects//instances/``. + filter (str): + An expression that filters the list of returned backups. + + A filter expression consists of a field name, a comparison + operator, and a value for filtering. The value must be a + string, a number, or a boolean. The comparison operator must + be one of: ``<``, ``>``, ``<=``, ``>=``, ``!=``, ``=``, or + ``:``. Colon ``:`` is the contains operator. Filter rules + are not case sensitive. + + The following fields in the + [Backup][google.spanner.admin.database.v1.Backup] are + eligible for filtering: + + - ``name`` + - ``database`` + - ``state`` + - ``create_time`` (and values are of the format + YYYY-MM-DDTHH:MM:SSZ) + - ``expire_time`` (and values are of the format + YYYY-MM-DDTHH:MM:SSZ) + - ``version_time`` (and values are of the format + YYYY-MM-DDTHH:MM:SSZ) + - ``size_bytes`` + - ``backup_schedules`` + + You can combine multiple expressions by enclosing each + expression in parentheses. By default, expressions are + combined with AND logic, but you can specify AND, OR, and + NOT logic explicitly. + + Here are a few examples: + + - ``name:Howl`` - The backup's name contains the string + "howl". + - ``database:prod`` - The database's name contains the + string "prod". + - ``state:CREATING`` - The backup is pending creation. + - ``state:READY`` - The backup is fully created and ready + for use. + - ``(name:howl) AND (create_time < \"2018-03-28T14:50:00Z\")`` + - The backup name contains the string "howl" and + ``create_time`` of the backup is before + 2018-03-28T14:50:00Z. + - ``expire_time < \"2018-03-28T14:50:00Z\"`` - The backup + ``expire_time`` is before 2018-03-28T14:50:00Z. + - ``size_bytes > 10000000000`` - The backup's size is + greater than 10GB + - ``backup_schedules:daily`` - The backup is created from a + schedule with "daily" in its name. + page_size (int): + Number of backups to be returned in the + response. If 0 or less, defaults to the server's + maximum allowed page size. + page_token (str): + If non-empty, ``page_token`` should contain a + [next_page_token][google.spanner.admin.database.v1.ListBackupsResponse.next_page_token] + from a previous + [ListBackupsResponse][google.spanner.admin.database.v1.ListBackupsResponse] + to the same ``parent`` and with the same ``filter``. 
+ """ + + parent: str = proto.Field( + proto.STRING, + number=1, + ) + filter: str = proto.Field( + proto.STRING, + number=2, + ) + page_size: int = proto.Field( + proto.INT32, + number=3, + ) + page_token: str = proto.Field( + proto.STRING, + number=4, + ) + + +class ListBackupsResponse(proto.Message): + r"""The response for + [ListBackups][google.spanner.admin.database.v1.DatabaseAdmin.ListBackups]. + + Attributes: + backups (MutableSequence[google.cloud.spanner_admin_database_v1.types.Backup]): + The list of matching backups. Backups returned are ordered + by ``create_time`` in descending order, starting from the + most recent ``create_time``. + next_page_token (str): + ``next_page_token`` can be sent in a subsequent + [ListBackups][google.spanner.admin.database.v1.DatabaseAdmin.ListBackups] + call to fetch more of the matching backups. + """ + + @property + def raw_page(self): + return self + + backups: MutableSequence["Backup"] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message="Backup", + ) + next_page_token: str = proto.Field( + proto.STRING, + number=2, + ) + + +class ListBackupOperationsRequest(proto.Message): + r"""The request for + [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations]. + + Attributes: + parent (str): + Required. The instance of the backup operations. Values are + of the form ``projects//instances/``. + filter (str): + An expression that filters the list of returned backup + operations. + + A filter expression consists of a field name, a comparison + operator, and a value for filtering. The value must be a + string, a number, or a boolean. The comparison operator must + be one of: ``<``, ``>``, ``<=``, ``>=``, ``!=``, ``=``, or + ``:``. Colon ``:`` is the contains operator. Filter rules + are not case sensitive. + + The following fields in the + [operation][google.longrunning.Operation] are eligible for + filtering: + + - ``name`` - The name of the long-running operation + - ``done`` - False if the operation is in progress, else + true. + - ``metadata.@type`` - the type of metadata. For example, + the type string for + [CreateBackupMetadata][google.spanner.admin.database.v1.CreateBackupMetadata] + is + ``type.googleapis.com/google.spanner.admin.database.v1.CreateBackupMetadata``. + - ``metadata.`` - any field in metadata.value. + ``metadata.@type`` must be specified first if filtering on + metadata fields. + - ``error`` - Error associated with the long-running + operation. + - ``response.@type`` - the type of response. + - ``response.`` - any field in response.value. + + You can combine multiple expressions by enclosing each + expression in parentheses. By default, expressions are + combined with AND logic, but you can specify AND, OR, and + NOT logic explicitly. + + Here are a few examples: + + - ``done:true`` - The operation is complete. + - ``(metadata.@type=type.googleapis.com/google.spanner.admin.database.v1.CreateBackupMetadata) AND`` + ``metadata.database:prod`` - Returns operations where: + + - The operation's metadata type is + [CreateBackupMetadata][google.spanner.admin.database.v1.CreateBackupMetadata]. + - The source database name of backup contains the string + "prod". 
+ + - ``(metadata.@type=type.googleapis.com/google.spanner.admin.database.v1.CreateBackupMetadata) AND`` + ``(metadata.name:howl) AND`` + ``(metadata.progress.start_time < \"2018-03-28T14:50:00Z\") AND`` + ``(error:*)`` - Returns operations where: + + - The operation's metadata type is + [CreateBackupMetadata][google.spanner.admin.database.v1.CreateBackupMetadata]. + - The backup name contains the string "howl". + - The operation started before 2018-03-28T14:50:00Z. + - The operation resulted in an error. + + - ``(metadata.@type=type.googleapis.com/google.spanner.admin.database.v1.CopyBackupMetadata) AND`` + ``(metadata.source_backup:test) AND`` + ``(metadata.progress.start_time < \"2022-01-18T14:50:00Z\") AND`` + ``(error:*)`` - Returns operations where: + + - The operation's metadata type is + [CopyBackupMetadata][google.spanner.admin.database.v1.CopyBackupMetadata]. + - The source backup name contains the string "test". + - The operation started before 2022-01-18T14:50:00Z. + - The operation resulted in an error. + + - ``((metadata.@type=type.googleapis.com/google.spanner.admin.database.v1.CreateBackupMetadata) AND`` + ``(metadata.database:test_db)) OR`` + ``((metadata.@type=type.googleapis.com/google.spanner.admin.database.v1.CopyBackupMetadata) AND`` + ``(metadata.source_backup:test_bkp)) AND`` + ``(error:*)`` - Returns operations where: + + - The operation's metadata matches either of criteria: + + - The operation's metadata type is + [CreateBackupMetadata][google.spanner.admin.database.v1.CreateBackupMetadata] + AND the source database name of the backup contains + the string "test_db" + - The operation's metadata type is + [CopyBackupMetadata][google.spanner.admin.database.v1.CopyBackupMetadata] + AND the source backup name contains the string + "test_bkp" + + - The operation resulted in an error. + page_size (int): + Number of operations to be returned in the + response. If 0 or less, defaults to the server's + maximum allowed page size. + page_token (str): + If non-empty, ``page_token`` should contain a + [next_page_token][google.spanner.admin.database.v1.ListBackupOperationsResponse.next_page_token] + from a previous + [ListBackupOperationsResponse][google.spanner.admin.database.v1.ListBackupOperationsResponse] + to the same ``parent`` and with the same ``filter``. + """ + + parent: str = proto.Field( + proto.STRING, + number=1, + ) + filter: str = proto.Field( + proto.STRING, + number=2, + ) + page_size: int = proto.Field( + proto.INT32, + number=3, + ) + page_token: str = proto.Field( + proto.STRING, + number=4, + ) + + +class ListBackupOperationsResponse(proto.Message): + r"""The response for + [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations]. + + Attributes: + operations (MutableSequence[google.longrunning.operations_pb2.Operation]): + The list of matching backup [long-running + operations][google.longrunning.Operation]. Each operation's + name will be prefixed by the backup's name. The operation's + [metadata][google.longrunning.Operation.metadata] field type + ``metadata.type_url`` describes the type of the metadata. + Operations returned include those that are pending or have + completed/failed/canceled within the last 7 days. Operations + returned are ordered by + ``operation.metadata.value.progress.start_time`` in + descending order starting from the most recently started + operation. 
+ next_page_token (str): + ``next_page_token`` can be sent in a subsequent + [ListBackupOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations] + call to fetch more of the matching metadata. + """ + + @property + def raw_page(self): + return self + + operations: MutableSequence[operations_pb2.Operation] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message=operations_pb2.Operation, + ) + next_page_token: str = proto.Field( + proto.STRING, + number=2, + ) + + +class BackupInfo(proto.Message): + r"""Information about a backup. + + Attributes: + backup (str): + Name of the backup. + version_time (google.protobuf.timestamp_pb2.Timestamp): + The backup contains an externally consistent copy of + ``source_database`` at the timestamp specified by + ``version_time``. If the + [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] + request did not specify ``version_time``, the + ``version_time`` of the backup is equivalent to the + ``create_time``. + create_time (google.protobuf.timestamp_pb2.Timestamp): + The time the + [CreateBackup][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup] + request was received. + source_database (str): + Name of the database the backup was created + from. + """ + + backup: str = proto.Field( + proto.STRING, + number=1, + ) + version_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=4, + message=timestamp_pb2.Timestamp, + ) + create_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=2, + message=timestamp_pb2.Timestamp, + ) + source_database: str = proto.Field( + proto.STRING, + number=3, + ) + + +class CreateBackupEncryptionConfig(proto.Message): + r"""Encryption configuration for the backup to create. + + Attributes: + encryption_type (google.cloud.spanner_admin_database_v1.types.CreateBackupEncryptionConfig.EncryptionType): + Required. The encryption type of the backup. + kms_key_name (str): + Optional. The Cloud KMS key that will be used to protect the + backup. This field should be set only when + [encryption_type][google.spanner.admin.database.v1.CreateBackupEncryptionConfig.encryption_type] + is ``CUSTOMER_MANAGED_ENCRYPTION``. Values are of the form + ``projects//locations//keyRings//cryptoKeys/``. + kms_key_names (MutableSequence[str]): + Optional. Specifies the KMS configuration for the one or + more keys used to protect the backup. Values are of the form + ``projects//locations//keyRings//cryptoKeys/``. + + The keys referenced by kms_key_names must fully cover all + regions of the backup's instance configuration. Some + examples: + + - For single region instance configs, specify a single + regional location KMS key. + - For multi-regional instance configs of type + GOOGLE_MANAGED, either specify a multi-regional location + KMS key or multiple regional location KMS keys that cover + all regions in the instance config. + - For an instance config of type USER_MANAGED, please + specify only regional location KMS keys to cover each + region in the instance config. Multi-regional location KMS + keys are not supported for USER_MANAGED instance configs. + """ + + class EncryptionType(proto.Enum): + r"""Encryption types for the backup. + + Values: + ENCRYPTION_TYPE_UNSPECIFIED (0): + Unspecified. Do not use. + USE_DATABASE_ENCRYPTION (1): + Use the same encryption configuration as the database. This + is the default option when + [encryption_config][google.spanner.admin.database.v1.CreateBackupEncryptionConfig] + is empty. 
For example, if the database is using
+ ``Customer_Managed_Encryption``, the backup will be using
+ the same Cloud KMS key as the database.
+ GOOGLE_DEFAULT_ENCRYPTION (2):
+ Use Google default encryption.
+ CUSTOMER_MANAGED_ENCRYPTION (3):
+ Use customer managed encryption. If specified,
+ ``kms_key_name`` must contain a valid Cloud KMS key.
+ """
+ ENCRYPTION_TYPE_UNSPECIFIED = 0
+ USE_DATABASE_ENCRYPTION = 1
+ GOOGLE_DEFAULT_ENCRYPTION = 2
+ CUSTOMER_MANAGED_ENCRYPTION = 3
+
+ encryption_type: EncryptionType = proto.Field(
+ proto.ENUM,
+ number=1,
+ enum=EncryptionType,
+ )
+ kms_key_name: str = proto.Field(
+ proto.STRING,
+ number=2,
+ )
+ kms_key_names: MutableSequence[str] = proto.RepeatedField(
+ proto.STRING,
+ number=3,
+ )
+
+
+class CopyBackupEncryptionConfig(proto.Message):
+ r"""Encryption configuration for the copied backup.
+
+ Attributes:
+ encryption_type (google.cloud.spanner_admin_database_v1.types.CopyBackupEncryptionConfig.EncryptionType):
+ Required. The encryption type of the backup.
+ kms_key_name (str):
+ Optional. The Cloud KMS key that will be used to protect the
+ backup. This field should be set only when
+ [encryption_type][google.spanner.admin.database.v1.CopyBackupEncryptionConfig.encryption_type]
+ is ``CUSTOMER_MANAGED_ENCRYPTION``. Values are of the form
+ ``projects//locations//keyRings//cryptoKeys/``.
+ kms_key_names (MutableSequence[str]):
+ Optional. Specifies the KMS configuration for the one or
+ more keys used to protect the backup. Values are of the form
+ ``projects//locations//keyRings//cryptoKeys/``.
+ KMS keys can be specified in any order.
+
+ The keys referenced by kms_key_names must fully cover all
+ regions of the backup's instance configuration. Some
+ examples:
+
+ - For single region instance configs, specify a single
+ regional location KMS key.
+ - For multi-regional instance configs of type
+ GOOGLE_MANAGED, either specify a multi-regional location
+ KMS key or multiple regional location KMS keys that cover
+ all regions in the instance config.
+ - For an instance config of type USER_MANAGED, please
+ specify only regional location KMS keys to cover each
+ region in the instance config. Multi-regional location KMS
+ keys are not supported for USER_MANAGED instance configs.
+ """
+
+ class EncryptionType(proto.Enum):
+ r"""Encryption types for the backup.
+
+ Values:
+ ENCRYPTION_TYPE_UNSPECIFIED (0):
+ Unspecified. Do not use.
+ USE_CONFIG_DEFAULT_OR_BACKUP_ENCRYPTION (1):
+ This is the default option for
+ [CopyBackup][google.spanner.admin.database.v1.DatabaseAdmin.CopyBackup]
+ when
+ [encryption_config][google.spanner.admin.database.v1.CopyBackupEncryptionConfig]
+ is not specified. For example, if the source backup is using
+ ``Customer_Managed_Encryption``, the backup will be using
+ the same Cloud KMS key as the source backup.
+ GOOGLE_DEFAULT_ENCRYPTION (2):
+ Use Google default encryption.
+ CUSTOMER_MANAGED_ENCRYPTION (3):
+ Use customer managed encryption. If specified, either
+ ``kms_key_name`` or ``kms_key_names`` must contain valid
+ Cloud KMS key(s).
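+
+ As an illustrative sketch only (not generated reference text; the
+ project, key ring, and key names below are hypothetical
+ placeholders), a customer-managed configuration might be
+ constructed like this:
+
+ .. code-block:: python
+
+ from google.cloud import spanner_admin_database_v1
+
+ # Substitute a real Cloud KMS key path for the placeholder.
+ config = spanner_admin_database_v1.CopyBackupEncryptionConfig(
+ encryption_type=spanner_admin_database_v1.CopyBackupEncryptionConfig.EncryptionType.CUSTOMER_MANAGED_ENCRYPTION,
+ kms_key_name="projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key",
+ )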
+ """ + ENCRYPTION_TYPE_UNSPECIFIED = 0 + USE_CONFIG_DEFAULT_OR_BACKUP_ENCRYPTION = 1 + GOOGLE_DEFAULT_ENCRYPTION = 2 + CUSTOMER_MANAGED_ENCRYPTION = 3 + + encryption_type: EncryptionType = proto.Field( + proto.ENUM, + number=1, + enum=EncryptionType, + ) + kms_key_name: str = proto.Field( + proto.STRING, + number=2, + ) + kms_key_names: MutableSequence[str] = proto.RepeatedField( + proto.STRING, + number=3, + ) + + +class FullBackupSpec(proto.Message): + r"""The specification for full backups. + A full backup stores the entire contents of the database at a + given version time. + + """ + + +class IncrementalBackupSpec(proto.Message): + r"""The specification for incremental backup chains. + An incremental backup stores the delta of changes between a + previous backup and the database contents at a given version + time. An incremental backup chain consists of a full backup and + zero or more successive incremental backups. The first backup + created for an incremental backup chain is always a full backup. + + """ + + +class BackupInstancePartition(proto.Message): + r"""Instance partition information for the backup. + + Attributes: + instance_partition (str): + A unique identifier for the instance partition. Values are + of the form + ``projects//instances//instancePartitions/`` + """ + + instance_partition: str = proto.Field( + proto.STRING, + number=1, + ) + + +__all__ = tuple(sorted(__protobuf__.manifest)) diff --git a/google/cloud/spanner_admin_database_v1/types/backup_schedule.py b/google/cloud/spanner_admin_database_v1/types/backup_schedule.py new file mode 100644 index 0000000000..2773c1ef63 --- /dev/null +++ b/google/cloud/spanner_admin_database_v1/types/backup_schedule.py @@ -0,0 +1,369 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +from __future__ import annotations + +from typing import MutableMapping, MutableSequence + +import proto # type: ignore + +from google.cloud.spanner_admin_database_v1.types import backup +from google.protobuf import duration_pb2 # type: ignore +from google.protobuf import field_mask_pb2 # type: ignore +from google.protobuf import timestamp_pb2 # type: ignore + + +__protobuf__ = proto.module( + package="google.spanner.admin.database.v1", + manifest={ + "BackupScheduleSpec", + "BackupSchedule", + "CrontabSpec", + "CreateBackupScheduleRequest", + "GetBackupScheduleRequest", + "DeleteBackupScheduleRequest", + "ListBackupSchedulesRequest", + "ListBackupSchedulesResponse", + "UpdateBackupScheduleRequest", + }, +) + + +class BackupScheduleSpec(proto.Message): + r"""Defines specifications of the backup schedule. + + .. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields + + Attributes: + cron_spec (google.cloud.spanner_admin_database_v1.types.CrontabSpec): + Cron style schedule specification. + + This field is a member of `oneof`_ ``schedule_spec``. 
+ """ + + cron_spec: "CrontabSpec" = proto.Field( + proto.MESSAGE, + number=1, + oneof="schedule_spec", + message="CrontabSpec", + ) + + +class BackupSchedule(proto.Message): + r"""BackupSchedule expresses the automated backup creation + specification for a Spanner database. + Next ID: 10 + + This message has `oneof`_ fields (mutually exclusive fields). + For each oneof, at most one member field can be set at the same time. + Setting any member of the oneof automatically clears all other + members. + + .. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields + + Attributes: + name (str): + Identifier. Output only for the + [CreateBackupSchedule][DatabaseAdmin.CreateBackupSchededule] + operation. Required for the + [UpdateBackupSchedule][google.spanner.admin.database.v1.DatabaseAdmin.UpdateBackupSchedule] + operation. A globally unique identifier for the backup + schedule which cannot be changed. Values are of the form + ``projects//instances//databases//backupSchedules/[a-z][a-z0-9_\-]*[a-z0-9]`` + The final segment of the name must be between 2 and 60 + characters in length. + spec (google.cloud.spanner_admin_database_v1.types.BackupScheduleSpec): + Optional. The schedule specification based on + which the backup creations are triggered. + retention_duration (google.protobuf.duration_pb2.Duration): + Optional. The retention duration of a backup + that must be at least 6 hours and at most 366 + days. The backup is eligible to be automatically + deleted once the retention period has elapsed. + encryption_config (google.cloud.spanner_admin_database_v1.types.CreateBackupEncryptionConfig): + Optional. The encryption configuration that + will be used to encrypt the backup. If this + field is not specified, the backup will use the + same encryption configuration as the database. + full_backup_spec (google.cloud.spanner_admin_database_v1.types.FullBackupSpec): + The schedule creates only full backups. + + This field is a member of `oneof`_ ``backup_type_spec``. + incremental_backup_spec (google.cloud.spanner_admin_database_v1.types.IncrementalBackupSpec): + The schedule creates incremental backup + chains. + + This field is a member of `oneof`_ ``backup_type_spec``. + update_time (google.protobuf.timestamp_pb2.Timestamp): + Output only. The timestamp at which the + schedule was last updated. If the schedule has + never been updated, this field contains the + timestamp when the schedule was first created. + """ + + name: str = proto.Field( + proto.STRING, + number=1, + ) + spec: "BackupScheduleSpec" = proto.Field( + proto.MESSAGE, + number=6, + message="BackupScheduleSpec", + ) + retention_duration: duration_pb2.Duration = proto.Field( + proto.MESSAGE, + number=3, + message=duration_pb2.Duration, + ) + encryption_config: backup.CreateBackupEncryptionConfig = proto.Field( + proto.MESSAGE, + number=4, + message=backup.CreateBackupEncryptionConfig, + ) + full_backup_spec: backup.FullBackupSpec = proto.Field( + proto.MESSAGE, + number=7, + oneof="backup_type_spec", + message=backup.FullBackupSpec, + ) + incremental_backup_spec: backup.IncrementalBackupSpec = proto.Field( + proto.MESSAGE, + number=8, + oneof="backup_type_spec", + message=backup.IncrementalBackupSpec, + ) + update_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=9, + message=timestamp_pb2.Timestamp, + ) + + +class CrontabSpec(proto.Message): + r"""CrontabSpec can be used to specify the version time and + frequency at which the backup should be created. 
+
+ Attributes:
+ text (str):
+ Required. Textual representation of the crontab. User can
+ customize the backup frequency and the backup version time
+ using the cron expression. The version time must be in UTC
+ timezone.
+
+ The backup will contain an externally consistent copy of the
+ database at the version time. Allowed frequencies are 12
+ hour, 1 day, 1 week and 1 month. Examples of valid cron
+ specifications:
+
+ - ``0 2/12 * * *`` : every 12 hours at (2, 14) hours past
+ midnight in UTC.
+ - ``0 2,14 * * *`` : every 12 hours at (2,14) hours past
+ midnight in UTC.
+ - ``0 2 * * *`` : once a day at 2 past midnight in UTC.
+ - ``0 2 * * 0`` : once a week every Sunday at 2 past
+ midnight in UTC.
+ - ``0 2 8 * *`` : once a month on the 8th day at 2 past
+ midnight in UTC.
+ time_zone (str):
+ Output only. The time zone of the times in
+ ``CrontabSpec.text``. Currently only UTC is supported.
+ creation_window (google.protobuf.duration_pb2.Duration):
+ Output only. Scheduled backups will contain an externally
+ consistent copy of the database at the version time
+ specified in ``schedule_spec.cron_spec``. However, Spanner
+ may not initiate the creation of the scheduled backups at
+ that version time. Spanner will initiate the creation of
+ scheduled backups within the time window bounded by the
+ version_time specified in ``schedule_spec.cron_spec`` and
+ version_time + ``creation_window``.
+ """
+
+ text: str = proto.Field(
+ proto.STRING,
+ number=1,
+ )
+ time_zone: str = proto.Field(
+ proto.STRING,
+ number=2,
+ )
+ creation_window: duration_pb2.Duration = proto.Field(
+ proto.MESSAGE,
+ number=3,
+ message=duration_pb2.Duration,
+ )
+
+
+class CreateBackupScheduleRequest(proto.Message):
+ r"""The request for
+ [CreateBackupSchedule][google.spanner.admin.database.v1.DatabaseAdmin.CreateBackupSchedule].
+
+ Attributes:
+ parent (str):
+ Required. The name of the database that this
+ backup schedule applies to.
+ backup_schedule_id (str):
+ Required. The ID to use for the backup schedule. The
+ ``backup_schedule_id`` appended to ``parent`` forms the full
+ backup schedule name of the form
+ ``projects//instances//databases//backupSchedules/``.
+ backup_schedule (google.cloud.spanner_admin_database_v1.types.BackupSchedule):
+ Required. The backup schedule to create.
+ """
+
+ parent: str = proto.Field(
+ proto.STRING,
+ number=1,
+ )
+ backup_schedule_id: str = proto.Field(
+ proto.STRING,
+ number=2,
+ )
+ backup_schedule: "BackupSchedule" = proto.Field(
+ proto.MESSAGE,
+ number=3,
+ message="BackupSchedule",
+ )
+
+
+class GetBackupScheduleRequest(proto.Message):
+ r"""The request for
+ [GetBackupSchedule][google.spanner.admin.database.v1.DatabaseAdmin.GetBackupSchedule].
+
+ Attributes:
+ name (str):
+ Required. The name of the schedule to retrieve. Values are
+ of the form
+ ``projects//instances//databases//backupSchedules/``.
+ """
+
+ name: str = proto.Field(
+ proto.STRING,
+ number=1,
+ )
+
+
+class DeleteBackupScheduleRequest(proto.Message):
+ r"""The request for
+ [DeleteBackupSchedule][google.spanner.admin.database.v1.DatabaseAdmin.DeleteBackupSchedule].
+
+ Attributes:
+ name (str):
+ Required. The name of the schedule to delete. Values are of
+ the form
+ ``projects//instances//databases//backupSchedules/``.
+ """
+
+ name: str = proto.Field(
+ proto.STRING,
+ number=1,
+ )
+
+
+class ListBackupSchedulesRequest(proto.Message):
+ r"""The request for
+ [ListBackupSchedules][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupSchedules].
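+
+ A hedged usage sketch (``admin_client`` is assumed to be an existing
+ ``DatabaseAdminClient``, and the parent path is a placeholder):
+
+ .. code-block:: python
+
+ from google.cloud import spanner_admin_database_v1
+
+ request = spanner_admin_database_v1.ListBackupSchedulesRequest(
+ parent="projects/my-project/instances/my-instance/databases/my-db",
+ page_size=100,
+ )
+ # The returned pager transparently follows next_page_token.
+ for schedule in admin_client.list_backup_schedules(request=request):
+ print(schedule.name)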
+ + Attributes: + parent (str): + Required. Database is the parent resource + whose backup schedules should be listed. Values + are of the form + projects//instances//databases/ + page_size (int): + Optional. Number of backup schedules to be + returned in the response. If 0 or less, defaults + to the server's maximum allowed page size. + page_token (str): + Optional. If non-empty, ``page_token`` should contain a + [next_page_token][google.spanner.admin.database.v1.ListBackupSchedulesResponse.next_page_token] + from a previous + [ListBackupSchedulesResponse][google.spanner.admin.database.v1.ListBackupSchedulesResponse] + to the same ``parent``. + """ + + parent: str = proto.Field( + proto.STRING, + number=1, + ) + page_size: int = proto.Field( + proto.INT32, + number=2, + ) + page_token: str = proto.Field( + proto.STRING, + number=4, + ) + + +class ListBackupSchedulesResponse(proto.Message): + r"""The response for + [ListBackupSchedules][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupSchedules]. + + Attributes: + backup_schedules (MutableSequence[google.cloud.spanner_admin_database_v1.types.BackupSchedule]): + The list of backup schedules for a database. + next_page_token (str): + ``next_page_token`` can be sent in a subsequent + [ListBackupSchedules][google.spanner.admin.database.v1.DatabaseAdmin.ListBackupSchedules] + call to fetch more of the schedules. + """ + + @property + def raw_page(self): + return self + + backup_schedules: MutableSequence["BackupSchedule"] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message="BackupSchedule", + ) + next_page_token: str = proto.Field( + proto.STRING, + number=2, + ) + + +class UpdateBackupScheduleRequest(proto.Message): + r"""The request for + [UpdateBackupScheduleRequest][google.spanner.admin.database.v1.DatabaseAdmin.UpdateBackupSchedule]. + + Attributes: + backup_schedule (google.cloud.spanner_admin_database_v1.types.BackupSchedule): + Required. The backup schedule to update. + ``backup_schedule.name``, and the fields to be updated as + specified by ``update_mask`` are required. Other fields are + ignored. + update_mask (google.protobuf.field_mask_pb2.FieldMask): + Required. A mask specifying which fields in + the BackupSchedule resource should be updated. + This mask is relative to the BackupSchedule + resource, not to the request message. The field + mask must always be specified; this prevents any + future fields from being erased accidentally. + """ + + backup_schedule: "BackupSchedule" = proto.Field( + proto.MESSAGE, + number=1, + message="BackupSchedule", + ) + update_mask: field_mask_pb2.FieldMask = proto.Field( + proto.MESSAGE, + number=2, + message=field_mask_pb2.FieldMask, + ) + + +__all__ = tuple(sorted(__protobuf__.manifest)) diff --git a/google/cloud/spanner_admin_database_v1/types/common.py b/google/cloud/spanner_admin_database_v1/types/common.py new file mode 100644 index 0000000000..fff1a8756c --- /dev/null +++ b/google/cloud/spanner_admin_database_v1/types/common.py @@ -0,0 +1,180 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# +from __future__ import annotations + +from typing import MutableMapping, MutableSequence + +import proto # type: ignore + +from google.protobuf import timestamp_pb2 # type: ignore +from google.rpc import status_pb2 # type: ignore + + +__protobuf__ = proto.module( + package="google.spanner.admin.database.v1", + manifest={ + "DatabaseDialect", + "OperationProgress", + "EncryptionConfig", + "EncryptionInfo", + }, +) + + +class DatabaseDialect(proto.Enum): + r"""Indicates the dialect type of a database. + + Values: + DATABASE_DIALECT_UNSPECIFIED (0): + Default value. This value will create a database with the + GOOGLE_STANDARD_SQL dialect. + GOOGLE_STANDARD_SQL (1): + GoogleSQL supported SQL. + POSTGRESQL (2): + PostgreSQL supported SQL. + """ + DATABASE_DIALECT_UNSPECIFIED = 0 + GOOGLE_STANDARD_SQL = 1 + POSTGRESQL = 2 + + +class OperationProgress(proto.Message): + r"""Encapsulates progress related information for a Cloud Spanner + long running operation. + + Attributes: + progress_percent (int): + Percent completion of the operation. + Values are between 0 and 100 inclusive. + start_time (google.protobuf.timestamp_pb2.Timestamp): + Time the request was received. + end_time (google.protobuf.timestamp_pb2.Timestamp): + If set, the time at which this operation + failed or was completed successfully. + """ + + progress_percent: int = proto.Field( + proto.INT32, + number=1, + ) + start_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=2, + message=timestamp_pb2.Timestamp, + ) + end_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=3, + message=timestamp_pb2.Timestamp, + ) + + +class EncryptionConfig(proto.Message): + r"""Encryption configuration for a Cloud Spanner database. + + Attributes: + kms_key_name (str): + The Cloud KMS key to be used for encrypting and decrypting + the database. Values are of the form + ``projects//locations//keyRings//cryptoKeys/``. + kms_key_names (MutableSequence[str]): + Specifies the KMS configuration for the one or more keys + used to encrypt the database. Values are of the form + ``projects//locations//keyRings//cryptoKeys/``. + + The keys referenced by kms_key_names must fully cover all + regions of the database instance configuration. Some + examples: + + - For single region database instance configs, specify a + single regional location KMS key. + - For multi-regional database instance configs of type + GOOGLE_MANAGED, either specify a multi-regional location + KMS key or multiple regional location KMS keys that cover + all regions in the instance config. + - For a database instance config of type USER_MANAGED, + please specify only regional location KMS keys to cover + each region in the instance config. Multi-regional + location KMS keys are not supported for USER_MANAGED + instance configs. + """ + + kms_key_name: str = proto.Field( + proto.STRING, + number=2, + ) + kms_key_names: MutableSequence[str] = proto.RepeatedField( + proto.STRING, + number=3, + ) + + +class EncryptionInfo(proto.Message): + r"""Encryption information for a Cloud Spanner database or + backup. + + Attributes: + encryption_type (google.cloud.spanner_admin_database_v1.types.EncryptionInfo.Type): + Output only. The type of encryption. + encryption_status (google.rpc.status_pb2.Status): + Output only. If present, the status of a + recent encrypt/decrypt call on underlying data + for this database or backup. 
Regardless of
+ status, data is always encrypted at rest.
+ kms_key_version (str):
+ Output only. A Cloud KMS key version that is
+ being used to protect the database or backup.
+ """
+
+ class Type(proto.Enum):
+ r"""Possible encryption types.
+
+ Values:
+ TYPE_UNSPECIFIED (0):
+ Encryption type was not specified, though
+ data at rest remains encrypted.
+ GOOGLE_DEFAULT_ENCRYPTION (1):
+ The data is encrypted at rest with a key that
+ is fully managed by Google. No key version or
+ status will be populated. This is the default
+ state.
+ CUSTOMER_MANAGED_ENCRYPTION (2):
+ The data is encrypted at rest with a key that is managed by
+ the customer. The active version of the key,
+ ``kms_key_version``, will be populated, and
+ ``encryption_status`` may be populated.
+ """
+ TYPE_UNSPECIFIED = 0
+ GOOGLE_DEFAULT_ENCRYPTION = 1
+ CUSTOMER_MANAGED_ENCRYPTION = 2
+
+ encryption_type: Type = proto.Field(
+ proto.ENUM,
+ number=3,
+ enum=Type,
+ )
+ encryption_status: status_pb2.Status = proto.Field(
+ proto.MESSAGE,
+ number=4,
+ message=status_pb2.Status,
+ )
+ kms_key_version: str = proto.Field(
+ proto.STRING,
+ number=2,
+ )
+
+
+__all__ = tuple(sorted(__protobuf__.manifest))
diff --git a/google/cloud/spanner_admin_database_v1/types/spanner_database_admin.py b/google/cloud/spanner_admin_database_v1/types/spanner_database_admin.py
new file mode 100644
index 0000000000..c82fdc87df
--- /dev/null
+++ b/google/cloud/spanner_admin_database_v1/types/spanner_database_admin.py
@@ -0,0 +1,1349 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# +from __future__ import annotations + +from typing import MutableMapping, MutableSequence + +import proto # type: ignore + +from google.cloud.spanner_admin_database_v1.types import backup as gsad_backup +from google.cloud.spanner_admin_database_v1.types import common +from google.longrunning import operations_pb2 # type: ignore +from google.protobuf import field_mask_pb2 # type: ignore +from google.protobuf import struct_pb2 # type: ignore +from google.protobuf import timestamp_pb2 # type: ignore +from google.rpc import status_pb2 # type: ignore + + +__protobuf__ = proto.module( + package="google.spanner.admin.database.v1", + manifest={ + "RestoreSourceType", + "RestoreInfo", + "Database", + "ListDatabasesRequest", + "ListDatabasesResponse", + "CreateDatabaseRequest", + "CreateDatabaseMetadata", + "GetDatabaseRequest", + "UpdateDatabaseRequest", + "UpdateDatabaseMetadata", + "UpdateDatabaseDdlRequest", + "DdlStatementActionInfo", + "UpdateDatabaseDdlMetadata", + "DropDatabaseRequest", + "GetDatabaseDdlRequest", + "GetDatabaseDdlResponse", + "ListDatabaseOperationsRequest", + "ListDatabaseOperationsResponse", + "RestoreDatabaseRequest", + "RestoreDatabaseEncryptionConfig", + "RestoreDatabaseMetadata", + "OptimizeRestoredDatabaseMetadata", + "DatabaseRole", + "ListDatabaseRolesRequest", + "ListDatabaseRolesResponse", + "AddSplitPointsRequest", + "AddSplitPointsResponse", + "SplitPoints", + "InternalUpdateGraphOperationRequest", + "InternalUpdateGraphOperationResponse", + }, +) + + +class RestoreSourceType(proto.Enum): + r"""Indicates the type of the restore source. + + Values: + TYPE_UNSPECIFIED (0): + No restore associated. + BACKUP (1): + A backup was used as the source of the + restore. + """ + TYPE_UNSPECIFIED = 0 + BACKUP = 1 + + +class RestoreInfo(proto.Message): + r"""Information about the database restore. + + .. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields + + Attributes: + source_type (google.cloud.spanner_admin_database_v1.types.RestoreSourceType): + The type of the restore source. + backup_info (google.cloud.spanner_admin_database_v1.types.BackupInfo): + Information about the backup used to restore + the database. The backup may no longer exist. + + This field is a member of `oneof`_ ``source_info``. + """ + + source_type: "RestoreSourceType" = proto.Field( + proto.ENUM, + number=1, + enum="RestoreSourceType", + ) + backup_info: gsad_backup.BackupInfo = proto.Field( + proto.MESSAGE, + number=2, + oneof="source_info", + message=gsad_backup.BackupInfo, + ) + + +class Database(proto.Message): + r"""A Cloud Spanner database. + + Attributes: + name (str): + Required. The name of the database. Values are of the form + ``projects//instances//databases/``, + where ```` is as specified in the + ``CREATE DATABASE`` statement. This name can be passed to + other API methods to identify the database. + state (google.cloud.spanner_admin_database_v1.types.Database.State): + Output only. The current database state. + create_time (google.protobuf.timestamp_pb2.Timestamp): + Output only. If exists, the time at which the + database creation started. + restore_info (google.cloud.spanner_admin_database_v1.types.RestoreInfo): + Output only. Applicable only for restored + databases. Contains information about the + restore source. + encryption_config (google.cloud.spanner_admin_database_v1.types.EncryptionConfig): + Output only. 
For databases that are using
+ customer managed encryption, this field contains
+ the encryption configuration for the database.
+ For databases that are using Google default or
+ other types of encryption, this field is empty.
+ encryption_info (MutableSequence[google.cloud.spanner_admin_database_v1.types.EncryptionInfo]):
+ Output only. For databases that are using customer managed
+ encryption, this field contains the encryption information
+ for the database, such as all Cloud KMS key versions that
+ are in use. The ``encryption_status`` field inside of each
+ ``EncryptionInfo`` is not populated.
+
+ For databases that are using Google default or other types
+ of encryption, this field is empty.
+
+ This field is propagated lazily from the backend. There
+ might be a delay from when a key version is being used and
+ when it appears in this field.
+ version_retention_period (str):
+ Output only. The period in which Cloud Spanner retains all
+ versions of data for the database. This is the same as the
+ value of version_retention_period database option set using
+ [UpdateDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabaseDdl].
+ Defaults to 1 hour, if not set.
+ earliest_version_time (google.protobuf.timestamp_pb2.Timestamp):
+ Output only. Earliest timestamp at which
+ older versions of the data can be read. This
+ value is continuously updated by Cloud Spanner
+ and becomes stale the moment it is queried. If
+ you are using this value to recover data, make
+ sure to account for the time from the moment
+ when the value is queried to the moment when you
+ initiate the recovery.
+ default_leader (str):
+ Output only. The read-write region which contains the
+ database's leader replicas.
+
+ This is the same as the value of default_leader database
+ option set using DatabaseAdmin.CreateDatabase or
+ DatabaseAdmin.UpdateDatabaseDdl. If not explicitly set, this
+ is empty.
+ database_dialect (google.cloud.spanner_admin_database_v1.types.DatabaseDialect):
+ Output only. The dialect of the Cloud Spanner
+ Database.
+ enable_drop_protection (bool):
+ Whether drop protection is enabled for this database.
+ Defaults to false, if not set. For more details, please see
+ how to `prevent accidental database
+ deletion `__.
+ reconciling (bool):
+ Output only. If true, the database is being
+ updated. If false, there are no ongoing update
+ operations for the database.
+ """
+
+ class State(proto.Enum):
+ r"""Indicates the current state of the database.
+
+ Values:
+ STATE_UNSPECIFIED (0):
+ Not specified.
+ CREATING (1):
+ The database is still being created. Operations on the
+ database may fail with ``FAILED_PRECONDITION`` in this
+ state.
+ READY (2):
+ The database is fully created and ready for
+ use.
+ READY_OPTIMIZING (3):
+ The database is fully created and ready for use, but is
+ still being optimized for performance and cannot handle full
+ load.
+
+ In this state, the database still references the backup it
+ was restored from, preventing the backup from being deleted.
+ When optimizations are complete, the full performance of the
+ database will be restored, and the database will transition
+ to ``READY`` state.
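+
+ A usage sketch (``admin_client`` and the database name are assumed
+ placeholders, not part of the generated surface):
+
+ .. code-block:: python
+
+ from google.cloud import spanner_admin_database_v1
+
+ db = admin_client.get_database(
+ name="projects/my-project/instances/my-instance/databases/my-db"
+ )
+ if db.state == spanner_admin_database_v1.Database.State.READY_OPTIMIZING:
+ # Usable, but post-restore optimizations are still running.
+ pass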
+ """ + STATE_UNSPECIFIED = 0 + CREATING = 1 + READY = 2 + READY_OPTIMIZING = 3 + + name: str = proto.Field( + proto.STRING, + number=1, + ) + state: State = proto.Field( + proto.ENUM, + number=2, + enum=State, + ) + create_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=3, + message=timestamp_pb2.Timestamp, + ) + restore_info: "RestoreInfo" = proto.Field( + proto.MESSAGE, + number=4, + message="RestoreInfo", + ) + encryption_config: common.EncryptionConfig = proto.Field( + proto.MESSAGE, + number=5, + message=common.EncryptionConfig, + ) + encryption_info: MutableSequence[common.EncryptionInfo] = proto.RepeatedField( + proto.MESSAGE, + number=8, + message=common.EncryptionInfo, + ) + version_retention_period: str = proto.Field( + proto.STRING, + number=6, + ) + earliest_version_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=7, + message=timestamp_pb2.Timestamp, + ) + default_leader: str = proto.Field( + proto.STRING, + number=9, + ) + database_dialect: common.DatabaseDialect = proto.Field( + proto.ENUM, + number=10, + enum=common.DatabaseDialect, + ) + enable_drop_protection: bool = proto.Field( + proto.BOOL, + number=11, + ) + reconciling: bool = proto.Field( + proto.BOOL, + number=12, + ) + + +class ListDatabasesRequest(proto.Message): + r"""The request for + [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases]. + + Attributes: + parent (str): + Required. The instance whose databases should be listed. + Values are of the form + ``projects//instances/``. + page_size (int): + Number of databases to be returned in the + response. If 0 or less, defaults to the server's + maximum allowed page size. + page_token (str): + If non-empty, ``page_token`` should contain a + [next_page_token][google.spanner.admin.database.v1.ListDatabasesResponse.next_page_token] + from a previous + [ListDatabasesResponse][google.spanner.admin.database.v1.ListDatabasesResponse]. + """ + + parent: str = proto.Field( + proto.STRING, + number=1, + ) + page_size: int = proto.Field( + proto.INT32, + number=3, + ) + page_token: str = proto.Field( + proto.STRING, + number=4, + ) + + +class ListDatabasesResponse(proto.Message): + r"""The response for + [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases]. + + Attributes: + databases (MutableSequence[google.cloud.spanner_admin_database_v1.types.Database]): + Databases that matched the request. + next_page_token (str): + ``next_page_token`` can be sent in a subsequent + [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases] + call to fetch more of the matching databases. + """ + + @property + def raw_page(self): + return self + + databases: MutableSequence["Database"] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message="Database", + ) + next_page_token: str = proto.Field( + proto.STRING, + number=2, + ) + + +class CreateDatabaseRequest(proto.Message): + r"""The request for + [CreateDatabase][google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase]. + + Attributes: + parent (str): + Required. The name of the instance that will serve the new + database. Values are of the form + ``projects//instances/``. + create_statement (str): + Required. A ``CREATE DATABASE`` statement, which specifies + the ID of the new database. The database ID must conform to + the regular expression ``[a-z][a-z0-9_\-]*[a-z0-9]`` and be + between 2 and 30 characters in length. 
If the database ID is
+ a reserved word or if it contains a hyphen, the database ID
+ must be enclosed in backticks (:literal:`\``).
+ extra_statements (MutableSequence[str]):
+ Optional. A list of DDL statements to run
+ inside the newly created database. Statements
+ can create tables, indexes, etc. These
+ statements execute atomically with the creation
+ of the database: if there is an error in any
+ statement, the database is not created.
+ encryption_config (google.cloud.spanner_admin_database_v1.types.EncryptionConfig):
+ Optional. The encryption configuration for
+ the database. If this field is not specified,
+ Cloud Spanner will encrypt/decrypt all data at
+ rest using Google default encryption.
+ database_dialect (google.cloud.spanner_admin_database_v1.types.DatabaseDialect):
+ Optional. The dialect of the Cloud Spanner
+ Database.
+ proto_descriptors (bytes):
+ Optional. Proto descriptors used by CREATE/ALTER PROTO
+ BUNDLE statements in 'extra_statements' above. Contains a
+ protobuf-serialized
+ `google.protobuf.FileDescriptorSet `__.
+ To generate it,
+ `install `__ and
+ run ``protoc`` with --include_imports and
+ --descriptor_set_out. For example, to generate for
+ moon/shot/app.proto, run
+
+ ::
+
+ $protoc --proto_path=/app_path --proto_path=/lib_path \
+ --include_imports \
+ --descriptor_set_out=descriptors.data \
+ moon/shot/app.proto
+
+ For more details, see protobuffer `self
+ description `__.
+ """
+
+ parent: str = proto.Field(
+ proto.STRING,
+ number=1,
+ )
+ create_statement: str = proto.Field(
+ proto.STRING,
+ number=2,
+ )
+ extra_statements: MutableSequence[str] = proto.RepeatedField(
+ proto.STRING,
+ number=3,
+ )
+ encryption_config: common.EncryptionConfig = proto.Field(
+ proto.MESSAGE,
+ number=4,
+ message=common.EncryptionConfig,
+ )
+ database_dialect: common.DatabaseDialect = proto.Field(
+ proto.ENUM,
+ number=5,
+ enum=common.DatabaseDialect,
+ )
+ proto_descriptors: bytes = proto.Field(
+ proto.BYTES,
+ number=6,
+ )
+
+
+class CreateDatabaseMetadata(proto.Message):
+ r"""Metadata type for the operation returned by
+ [CreateDatabase][google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase].
+
+ Attributes:
+ database (str):
+ The database being created.
+ """
+
+ database: str = proto.Field(
+ proto.STRING,
+ number=1,
+ )
+
+
+class GetDatabaseRequest(proto.Message):
+ r"""The request for
+ [GetDatabase][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabase].
+
+ Attributes:
+ name (str):
+ Required. The name of the requested database. Values are of
+ the form
+ ``projects//instances//databases/``.
+ """
+
+ name: str = proto.Field(
+ proto.STRING,
+ number=1,
+ )
+
+
+class UpdateDatabaseRequest(proto.Message):
+ r"""The request for
+ [UpdateDatabase][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabase].
+
+ Attributes:
+ database (google.cloud.spanner_admin_database_v1.types.Database):
+ Required. The database to update. The ``name`` field of the
+ database is of the form
+ ``projects//instances//databases/``.
+ update_mask (google.protobuf.field_mask_pb2.FieldMask):
+ Required. The list of fields to update. Currently, only the
+ ``enable_drop_protection`` field can be updated.
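+
+ For illustration, a minimal sketch of enabling drop protection
+ (``admin_client`` and the database name are assumed placeholders):
+
+ .. code-block:: python
+
+ from google.cloud import spanner_admin_database_v1
+ from google.protobuf import field_mask_pb2
+
+ request = spanner_admin_database_v1.UpdateDatabaseRequest(
+ database=spanner_admin_database_v1.Database(
+ name="projects/my-project/instances/my-instance/databases/my-db",
+ enable_drop_protection=True,
+ ),
+ # Only the paths named in the mask are updated; other
+ # fields of ``database`` are ignored.
+ update_mask=field_mask_pb2.FieldMask(paths=["enable_drop_protection"]),
+ )
+ admin_client.update_database(request=request).result()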
+ """ + + database: "Database" = proto.Field( + proto.MESSAGE, + number=1, + message="Database", + ) + update_mask: field_mask_pb2.FieldMask = proto.Field( + proto.MESSAGE, + number=2, + message=field_mask_pb2.FieldMask, + ) + + +class UpdateDatabaseMetadata(proto.Message): + r"""Metadata type for the operation returned by + [UpdateDatabase][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabase]. + + Attributes: + request (google.cloud.spanner_admin_database_v1.types.UpdateDatabaseRequest): + The request for + [UpdateDatabase][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabase]. + progress (google.cloud.spanner_admin_database_v1.types.OperationProgress): + The progress of the + [UpdateDatabase][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabase] + operation. + cancel_time (google.protobuf.timestamp_pb2.Timestamp): + The time at which this operation was + cancelled. If set, this operation is in the + process of undoing itself (which is + best-effort). + """ + + request: "UpdateDatabaseRequest" = proto.Field( + proto.MESSAGE, + number=1, + message="UpdateDatabaseRequest", + ) + progress: common.OperationProgress = proto.Field( + proto.MESSAGE, + number=2, + message=common.OperationProgress, + ) + cancel_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=3, + message=timestamp_pb2.Timestamp, + ) + + +class UpdateDatabaseDdlRequest(proto.Message): + r"""Enqueues the given DDL statements to be applied, in order but not + necessarily all at once, to the database schema at some point (or + points) in the future. The server checks that the statements are + executable (syntactically valid, name tables that exist, etc.) + before enqueueing them, but they may still fail upon later execution + (e.g., if a statement from another batch of statements is applied + first and it conflicts in some way, or if there is some data-related + problem like a ``NULL`` value in a column to which ``NOT NULL`` + would be added). If a statement fails, all subsequent statements in + the batch are automatically cancelled. + + Each batch of statements is assigned a name which can be used with + the [Operations][google.longrunning.Operations] API to monitor + progress. See the + [operation_id][google.spanner.admin.database.v1.UpdateDatabaseDdlRequest.operation_id] + field for more details. + + Attributes: + database (str): + Required. The database to update. + statements (MutableSequence[str]): + Required. DDL statements to be applied to the + database. + operation_id (str): + If empty, the new update request is assigned an + automatically-generated operation ID. Otherwise, + ``operation_id`` is used to construct the name of the + resulting [Operation][google.longrunning.Operation]. + + Specifying an explicit operation ID simplifies determining + whether the statements were executed in the event that the + [UpdateDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabaseDdl] + call is replayed, or the return value is otherwise lost: the + [database][google.spanner.admin.database.v1.UpdateDatabaseDdlRequest.database] + and ``operation_id`` fields can be combined to form the + [name][google.longrunning.Operation.name] of the resulting + [longrunning.Operation][google.longrunning.Operation]: + ``/operations/``. + + ``operation_id`` should be unique within the database, and + must be a valid identifier: ``[a-z][a-z0-9_]*``. Note that + automatically-generated operation IDs always begin with an + underscore. 
If the named operation already exists,
+ [UpdateDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabaseDdl]
+ returns ``ALREADY_EXISTS``.
+ proto_descriptors (bytes):
+ Optional. Proto descriptors used by CREATE/ALTER PROTO
+ BUNDLE statements. Contains a protobuf-serialized
+ `google.protobuf.FileDescriptorSet `__.
+ To generate it,
+ `install `__ and
+ run ``protoc`` with --include_imports and
+ --descriptor_set_out. For example, to generate for
+ moon/shot/app.proto, run
+
+ ::
+
+ $protoc --proto_path=/app_path --proto_path=/lib_path \
+ --include_imports \
+ --descriptor_set_out=descriptors.data \
+ moon/shot/app.proto
+
+ For more details, see protobuffer `self
+ description `__.
+ throughput_mode (bool):
+ Optional. This field is exposed to be used by the Spanner
+ Migration Tool. For more details, see
+ `SMT `__.
+ """
+
+ database: str = proto.Field(
+ proto.STRING,
+ number=1,
+ )
+ statements: MutableSequence[str] = proto.RepeatedField(
+ proto.STRING,
+ number=2,
+ )
+ operation_id: str = proto.Field(
+ proto.STRING,
+ number=3,
+ )
+ proto_descriptors: bytes = proto.Field(
+ proto.BYTES,
+ number=4,
+ )
+ throughput_mode: bool = proto.Field(
+ proto.BOOL,
+ number=5,
+ )
+
+
+class DdlStatementActionInfo(proto.Message):
+ r"""Action information extracted from a DDL statement. This proto is
+ used to display the brief info of the DDL statement for the
+ operation
+ [UpdateDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabaseDdl].
+
+ Attributes:
+ action (str):
+ The action for the DDL statement, e.g.
+ CREATE, ALTER, DROP, GRANT, etc. This field is a
+ non-empty string.
+ entity_type (str):
+ The entity type for the DDL statement, e.g. TABLE, INDEX,
+ VIEW, etc. This field can be an empty string for some DDL
+ statements, e.g. for statement "ANALYZE", ``entity_type`` =
+ "".
+ entity_names (MutableSequence[str]):
+ The entity name(s) being operated on by the DDL statement.
+ E.g.
+
+ 1. For statement "CREATE TABLE t1(...)", ``entity_names`` =
+ ["t1"].
+ 2. For statement "GRANT ROLE r1, r2 ...", ``entity_names`` =
+ ["r1", "r2"].
+ 3. For statement "ANALYZE", ``entity_names`` = [].
+ """
+
+ action: str = proto.Field(
+ proto.STRING,
+ number=1,
+ )
+ entity_type: str = proto.Field(
+ proto.STRING,
+ number=2,
+ )
+ entity_names: MutableSequence[str] = proto.RepeatedField(
+ proto.STRING,
+ number=3,
+ )
+
+
+class UpdateDatabaseDdlMetadata(proto.Message):
+ r"""Metadata type for the operation returned by
+ [UpdateDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabaseDdl].
+
+ Attributes:
+ database (str):
+ The database being modified.
+ statements (MutableSequence[str]):
+ For an update this list contains all the
+ statements. For an individual statement, this
+ list contains only that statement.
+ commit_timestamps (MutableSequence[google.protobuf.timestamp_pb2.Timestamp]):
+ Reports the commit timestamps of all statements that have
+ succeeded so far, where ``commit_timestamps[i]`` is the
+ commit timestamp for the statement ``statements[i]``.
+ throttled (bool):
+ Output only. When true, indicates that the
+ operation is throttled e.g. due to resource
+ constraints. When resources become available the
+ operation will resume and this field will be
+ false again.
+ progress (MutableSequence[google.cloud.spanner_admin_database_v1.types.OperationProgress]):
+ The progress of the
+ [UpdateDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabaseDdl]
+ operations.
All DDL statements will have continuously + updating progress, and ``progress[i]`` is the operation + progress for ``statements[i]``. Also, ``progress[i]`` will + have start time and end time populated with commit timestamp + of operation, as well as a progress of 100% once the + operation has completed. + actions (MutableSequence[google.cloud.spanner_admin_database_v1.types.DdlStatementActionInfo]): + The brief action info for the DDL statements. ``actions[i]`` + is the brief info for ``statements[i]``. + """ + + database: str = proto.Field( + proto.STRING, + number=1, + ) + statements: MutableSequence[str] = proto.RepeatedField( + proto.STRING, + number=2, + ) + commit_timestamps: MutableSequence[timestamp_pb2.Timestamp] = proto.RepeatedField( + proto.MESSAGE, + number=3, + message=timestamp_pb2.Timestamp, + ) + throttled: bool = proto.Field( + proto.BOOL, + number=4, + ) + progress: MutableSequence[common.OperationProgress] = proto.RepeatedField( + proto.MESSAGE, + number=5, + message=common.OperationProgress, + ) + actions: MutableSequence["DdlStatementActionInfo"] = proto.RepeatedField( + proto.MESSAGE, + number=6, + message="DdlStatementActionInfo", + ) + + +class DropDatabaseRequest(proto.Message): + r"""The request for + [DropDatabase][google.spanner.admin.database.v1.DatabaseAdmin.DropDatabase]. + + Attributes: + database (str): + Required. The database to be dropped. + """ + + database: str = proto.Field( + proto.STRING, + number=1, + ) + + +class GetDatabaseDdlRequest(proto.Message): + r"""The request for + [GetDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl]. + + Attributes: + database (str): + Required. The database whose schema we wish to get. Values + are of the form + ``projects//instances//databases/`` + """ + + database: str = proto.Field( + proto.STRING, + number=1, + ) + + +class GetDatabaseDdlResponse(proto.Message): + r"""The response for + [GetDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl]. + + Attributes: + statements (MutableSequence[str]): + A list of formatted DDL statements defining + the schema of the database specified in the + request. + proto_descriptors (bytes): + Proto descriptors stored in the database. Contains a + protobuf-serialized + `google.protobuf.FileDescriptorSet `__. + For more details, see protobuffer `self + description `__. + """ + + statements: MutableSequence[str] = proto.RepeatedField( + proto.STRING, + number=1, + ) + proto_descriptors: bytes = proto.Field( + proto.BYTES, + number=2, + ) + + +class ListDatabaseOperationsRequest(proto.Message): + r"""The request for + [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations]. + + Attributes: + parent (str): + Required. The instance of the database operations. Values + are of the form ``projects//instances/``. + filter (str): + An expression that filters the list of returned operations. + + A filter expression consists of a field name, a comparison + operator, and a value for filtering. The value must be a + string, a number, or a boolean. The comparison operator must + be one of: ``<``, ``>``, ``<=``, ``>=``, ``!=``, ``=``, or + ``:``. Colon ``:`` is the contains operator. Filter rules + are not case sensitive. + + The following fields in the + [Operation][google.longrunning.Operation] are eligible for + filtering: + + - ``name`` - The name of the long-running operation + - ``done`` - False if the operation is in progress, else + true. + - ``metadata.@type`` - the type of metadata. 
For example, + the type string for + [RestoreDatabaseMetadata][google.spanner.admin.database.v1.RestoreDatabaseMetadata] + is + ``type.googleapis.com/google.spanner.admin.database.v1.RestoreDatabaseMetadata``. + - ``metadata.`` - any field in metadata.value. + ``metadata.@type`` must be specified first, if filtering + on metadata fields. + - ``error`` - Error associated with the long-running + operation. + - ``response.@type`` - the type of response. + - ``response.`` - any field in response.value. + + You can combine multiple expressions by enclosing each + expression in parentheses. By default, expressions are + combined with AND logic. However, you can specify AND, OR, + and NOT logic explicitly. + + Here are a few examples: + + - ``done:true`` - The operation is complete. + - ``(metadata.@type=type.googleapis.com/google.spanner.admin.database.v1.RestoreDatabaseMetadata) AND`` + ``(metadata.source_type:BACKUP) AND`` + ``(metadata.backup_info.backup:backup_howl) AND`` + ``(metadata.name:restored_howl) AND`` + ``(metadata.progress.start_time < \"2018-03-28T14:50:00Z\") AND`` + ``(error:*)`` - Return operations where: + + - The operation's metadata type is + [RestoreDatabaseMetadata][google.spanner.admin.database.v1.RestoreDatabaseMetadata]. + - The database is restored from a backup. + - The backup name contains "backup_howl". + - The restored database's name contains "restored_howl". + - The operation started before 2018-03-28T14:50:00Z. + - The operation resulted in an error. + page_size (int): + Number of operations to be returned in the + response. If 0 or less, defaults to the server's + maximum allowed page size. + page_token (str): + If non-empty, ``page_token`` should contain a + [next_page_token][google.spanner.admin.database.v1.ListDatabaseOperationsResponse.next_page_token] + from a previous + [ListDatabaseOperationsResponse][google.spanner.admin.database.v1.ListDatabaseOperationsResponse] + to the same ``parent`` and with the same ``filter``. + """ + + parent: str = proto.Field( + proto.STRING, + number=1, + ) + filter: str = proto.Field( + proto.STRING, + number=2, + ) + page_size: int = proto.Field( + proto.INT32, + number=3, + ) + page_token: str = proto.Field( + proto.STRING, + number=4, + ) + + +class ListDatabaseOperationsResponse(proto.Message): + r"""The response for + [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations]. + + Attributes: + operations (MutableSequence[google.longrunning.operations_pb2.Operation]): + The list of matching database [long-running + operations][google.longrunning.Operation]. Each operation's + name will be prefixed by the database's name. The + operation's + [metadata][google.longrunning.Operation.metadata] field type + ``metadata.type_url`` describes the type of the metadata. + next_page_token (str): + ``next_page_token`` can be sent in a subsequent + [ListDatabaseOperations][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations] + call to fetch more of the matching metadata. + """ + + @property + def raw_page(self): + return self + + operations: MutableSequence[operations_pb2.Operation] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message=operations_pb2.Operation, + ) + next_page_token: str = proto.Field( + proto.STRING, + number=2, + ) + + +class RestoreDatabaseRequest(proto.Message): + r"""The request for + [RestoreDatabase][google.spanner.admin.database.v1.DatabaseAdmin.RestoreDatabase]. + + + .. 
_oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields + + Attributes: + parent (str): + Required. The name of the instance in which to create the + restored database. This instance must be in the same project + and have the same instance configuration as the instance + containing the source backup. Values are of the form + ``projects//instances/``. + database_id (str): + Required. The id of the database to create and restore to. + This database must not already exist. The ``database_id`` + appended to ``parent`` forms the full database name of the + form + ``projects//instances//databases/``. + backup (str): + Name of the backup from which to restore. Values are of the + form + ``projects//instances//backups/``. + + This field is a member of `oneof`_ ``source``. + encryption_config (google.cloud.spanner_admin_database_v1.types.RestoreDatabaseEncryptionConfig): + Optional. An encryption configuration describing the + encryption type and key resources in Cloud KMS used to + encrypt/decrypt the database to restore to. If this field is + not specified, the restored database will use the same + encryption configuration as the backup by default, namely + [encryption_type][google.spanner.admin.database.v1.RestoreDatabaseEncryptionConfig.encryption_type] + = ``USE_CONFIG_DEFAULT_OR_BACKUP_ENCRYPTION``. + """ + + parent: str = proto.Field( + proto.STRING, + number=1, + ) + database_id: str = proto.Field( + proto.STRING, + number=2, + ) + backup: str = proto.Field( + proto.STRING, + number=3, + oneof="source", + ) + encryption_config: "RestoreDatabaseEncryptionConfig" = proto.Field( + proto.MESSAGE, + number=4, + message="RestoreDatabaseEncryptionConfig", + ) + + +class RestoreDatabaseEncryptionConfig(proto.Message): + r"""Encryption configuration for the restored database. + + Attributes: + encryption_type (google.cloud.spanner_admin_database_v1.types.RestoreDatabaseEncryptionConfig.EncryptionType): + Required. The encryption type of the restored + database. + kms_key_name (str): + Optional. The Cloud KMS key that will be used to + encrypt/decrypt the restored database. This field should be + set only when + [encryption_type][google.spanner.admin.database.v1.RestoreDatabaseEncryptionConfig.encryption_type] + is ``CUSTOMER_MANAGED_ENCRYPTION``. Values are of the form + ``projects//locations//keyRings//cryptoKeys/``. + kms_key_names (MutableSequence[str]): + Optional. Specifies the KMS configuration for the one or + more keys used to encrypt the database. Values are of the + form + ``projects//locations//keyRings//cryptoKeys/``. + + The keys referenced by kms_key_names must fully cover all + regions of the database instance configuration. Some + examples: + + - For single region database instance configs, specify a + single regional location KMS key. + - For multi-regional database instance configs of type + GOOGLE_MANAGED, either specify a multi-regional location + KMS key or multiple regional location KMS keys that cover + all regions in the instance config. + - For a database instance config of type USER_MANAGED, + please specify only regional location KMS keys to cover + each region in the instance config. Multi-regional + location KMS keys are not supported for USER_MANAGED + instance configs. + """ + + class EncryptionType(proto.Enum): + r"""Encryption types for the database to be restored. + + Values: + ENCRYPTION_TYPE_UNSPECIFIED (0): + Unspecified. Do not use. 
+            USE_CONFIG_DEFAULT_OR_BACKUP_ENCRYPTION (1):
+                This is the default option when
+                [encryption_config][google.spanner.admin.database.v1.RestoreDatabaseEncryptionConfig]
+                is not specified.
+            GOOGLE_DEFAULT_ENCRYPTION (2):
+                Use Google default encryption.
+            CUSTOMER_MANAGED_ENCRYPTION (3):
+                Use customer managed encryption. If specified,
+                ``kms_key_name`` must contain a valid Cloud KMS key.
+        """
+        ENCRYPTION_TYPE_UNSPECIFIED = 0
+        USE_CONFIG_DEFAULT_OR_BACKUP_ENCRYPTION = 1
+        GOOGLE_DEFAULT_ENCRYPTION = 2
+        CUSTOMER_MANAGED_ENCRYPTION = 3
+
+    encryption_type: EncryptionType = proto.Field(
+        proto.ENUM,
+        number=1,
+        enum=EncryptionType,
+    )
+    kms_key_name: str = proto.Field(
+        proto.STRING,
+        number=2,
+    )
+    kms_key_names: MutableSequence[str] = proto.RepeatedField(
+        proto.STRING,
+        number=3,
+    )
+
+
+class RestoreDatabaseMetadata(proto.Message):
+    r"""Metadata type for the long-running operation returned by
+    [RestoreDatabase][google.spanner.admin.database.v1.DatabaseAdmin.RestoreDatabase].
+
+
+    .. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
+
+    Attributes:
+        name (str):
+            Name of the database being created and
+            restored to.
+        source_type (google.cloud.spanner_admin_database_v1.types.RestoreSourceType):
+            The type of the restore source.
+        backup_info (google.cloud.spanner_admin_database_v1.types.BackupInfo):
+            Information about the backup used to restore
+            the database.
+
+            This field is a member of `oneof`_ ``source_info``.
+        progress (google.cloud.spanner_admin_database_v1.types.OperationProgress):
+            The progress of the
+            [RestoreDatabase][google.spanner.admin.database.v1.DatabaseAdmin.RestoreDatabase]
+            operation.
+        cancel_time (google.protobuf.timestamp_pb2.Timestamp):
+            The time at which cancellation of this operation was
+            received.
+            [Operations.CancelOperation][google.longrunning.Operations.CancelOperation]
+            starts asynchronous cancellation on a long-running
+            operation. The server makes a best effort to cancel the
+            operation, but success is not guaranteed. Clients can use
+            [Operations.GetOperation][google.longrunning.Operations.GetOperation]
+            or other methods to check whether the cancellation succeeded
+            or whether the operation completed despite cancellation. On
+            successful cancellation, the operation is not deleted;
+            instead, it becomes an operation with an
+            [Operation.error][google.longrunning.Operation.error] value
+            with a [google.rpc.Status.code][google.rpc.Status.code] of
+            1, corresponding to ``Code.CANCELLED``.
+        optimize_database_operation_name (str):
+            If exists, the name of the long-running operation that will
+            be used to track the post-restore optimization process to
+            optimize the performance of the restored database, and
+            remove the dependency on the restore source. The name is of
+            the form
+            ``projects//instances//databases//operations/``
+            where the database component is the name of the database
+            being created and restored to. The metadata type of the
+            long-running operation is
+            [OptimizeRestoredDatabaseMetadata][google.spanner.admin.database.v1.OptimizeRestoredDatabaseMetadata].
+            This long-running operation will be automatically created by
+            the system after the RestoreDatabase long-running operation
+            completes successfully. This operation will not be created
+            if the restore was not successful.
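+
+    Example (an editor's illustrative sketch, not part of the generated
+    proto documentation; it assumes the standard
+    ``DatabaseAdminClient.restore_database`` long-running-operation
+    pattern, and the project, instance, backup, and database IDs are
+    hypothetical):
+
+    .. code-block:: python
+
+        from google.cloud import spanner_admin_database_v1
+
+        client = spanner_admin_database_v1.DatabaseAdminClient()
+
+        # restore_database returns a long-running operation; its
+        # ``metadata`` field is this RestoreDatabaseMetadata message.
+        operation = client.restore_database(
+            request=spanner_admin_database_v1.RestoreDatabaseRequest(
+                parent="projects/my-project/instances/my-instance",
+                database_id="restored-db",
+                backup="projects/my-project/instances/my-instance/backups/my-backup",
+            )
+        )
+
+        # Inspect restore progress while the operation runs.
+        print(operation.metadata.progress.progress_percent)
+
+        # Block until the restored database is ready.
+        database = operation.result()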
+ """ + + name: str = proto.Field( + proto.STRING, + number=1, + ) + source_type: "RestoreSourceType" = proto.Field( + proto.ENUM, + number=2, + enum="RestoreSourceType", + ) + backup_info: gsad_backup.BackupInfo = proto.Field( + proto.MESSAGE, + number=3, + oneof="source_info", + message=gsad_backup.BackupInfo, + ) + progress: common.OperationProgress = proto.Field( + proto.MESSAGE, + number=4, + message=common.OperationProgress, + ) + cancel_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=5, + message=timestamp_pb2.Timestamp, + ) + optimize_database_operation_name: str = proto.Field( + proto.STRING, + number=6, + ) + + +class OptimizeRestoredDatabaseMetadata(proto.Message): + r"""Metadata type for the long-running operation used to track + the progress of optimizations performed on a newly restored + database. This long-running operation is automatically created + by the system after the successful completion of a database + restore, and cannot be cancelled. + + Attributes: + name (str): + Name of the restored database being + optimized. + progress (google.cloud.spanner_admin_database_v1.types.OperationProgress): + The progress of the post-restore + optimizations. + """ + + name: str = proto.Field( + proto.STRING, + number=1, + ) + progress: common.OperationProgress = proto.Field( + proto.MESSAGE, + number=2, + message=common.OperationProgress, + ) + + +class DatabaseRole(proto.Message): + r"""A Cloud Spanner database role. + + Attributes: + name (str): + Required. The name of the database role. Values are of the + form + ``projects//instances//databases//databaseRoles/`` + where ```` is as specified in the ``CREATE ROLE`` DDL + statement. + """ + + name: str = proto.Field( + proto.STRING, + number=1, + ) + + +class ListDatabaseRolesRequest(proto.Message): + r"""The request for + [ListDatabaseRoles][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseRoles]. + + Attributes: + parent (str): + Required. The database whose roles should be listed. Values + are of the form + ``projects//instances//databases/``. + page_size (int): + Number of database roles to be returned in + the response. If 0 or less, defaults to the + server's maximum allowed page size. + page_token (str): + If non-empty, ``page_token`` should contain a + [next_page_token][google.spanner.admin.database.v1.ListDatabaseRolesResponse.next_page_token] + from a previous + [ListDatabaseRolesResponse][google.spanner.admin.database.v1.ListDatabaseRolesResponse]. + """ + + parent: str = proto.Field( + proto.STRING, + number=1, + ) + page_size: int = proto.Field( + proto.INT32, + number=2, + ) + page_token: str = proto.Field( + proto.STRING, + number=3, + ) + + +class ListDatabaseRolesResponse(proto.Message): + r"""The response for + [ListDatabaseRoles][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseRoles]. + + Attributes: + database_roles (MutableSequence[google.cloud.spanner_admin_database_v1.types.DatabaseRole]): + Database roles that matched the request. + next_page_token (str): + ``next_page_token`` can be sent in a subsequent + [ListDatabaseRoles][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseRoles] + call to fetch more of the matching roles. 
+ """ + + @property + def raw_page(self): + return self + + database_roles: MutableSequence["DatabaseRole"] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message="DatabaseRole", + ) + next_page_token: str = proto.Field( + proto.STRING, + number=2, + ) + + +class AddSplitPointsRequest(proto.Message): + r"""The request for + [AddSplitPoints][google.spanner.admin.database.v1.DatabaseAdmin.AddSplitPoints]. + + Attributes: + database (str): + Required. The database on whose tables/indexes split points + are to be added. Values are of the form + ``projects//instances//databases/``. + split_points (MutableSequence[google.cloud.spanner_admin_database_v1.types.SplitPoints]): + Required. The split points to add. + initiator (str): + Optional. A user-supplied tag associated with the split + points. For example, "intital_data_load", "special_event_1". + Defaults to "CloudAddSplitPointsAPI" if not specified. The + length of the tag must not exceed 50 characters,else will be + trimmed. Only valid UTF8 characters are allowed. + """ + + database: str = proto.Field( + proto.STRING, + number=1, + ) + split_points: MutableSequence["SplitPoints"] = proto.RepeatedField( + proto.MESSAGE, + number=2, + message="SplitPoints", + ) + initiator: str = proto.Field( + proto.STRING, + number=3, + ) + + +class AddSplitPointsResponse(proto.Message): + r"""The response for + [AddSplitPoints][google.spanner.admin.database.v1.DatabaseAdmin.AddSplitPoints]. + + """ + + +class SplitPoints(proto.Message): + r"""The split points of a table/index. + + Attributes: + table (str): + The table to split. + index (str): + The index to split. If specified, the ``table`` field must + refer to the index's base table. + keys (MutableSequence[google.cloud.spanner_admin_database_v1.types.SplitPoints.Key]): + Required. The list of split keys, i.e., the + split boundaries. + expire_time (google.protobuf.timestamp_pb2.Timestamp): + Optional. The expiration timestamp of the + split points. A timestamp in the past means + immediate expiration. The maximum value can be + 30 days in the future. Defaults to 10 days in + the future if not specified. + """ + + class Key(proto.Message): + r"""A split key. + + Attributes: + key_parts (google.protobuf.struct_pb2.ListValue): + Required. The column values making up the + split key. + """ + + key_parts: struct_pb2.ListValue = proto.Field( + proto.MESSAGE, + number=1, + message=struct_pb2.ListValue, + ) + + table: str = proto.Field( + proto.STRING, + number=1, + ) + index: str = proto.Field( + proto.STRING, + number=2, + ) + keys: MutableSequence[Key] = proto.RepeatedField( + proto.MESSAGE, + number=3, + message=Key, + ) + expire_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=5, + message=timestamp_pb2.Timestamp, + ) + + +class InternalUpdateGraphOperationRequest(proto.Message): + r"""Internal request proto, do not use directly. + + Attributes: + database (str): + Internal field, do not use directly. + operation_id (str): + Internal field, do not use directly. + vm_identity_token (str): + Internal field, do not use directly. + progress (float): + Internal field, do not use directly. + status (google.rpc.status_pb2.Status): + Internal field, do not use directly. 
+ """ + + database: str = proto.Field( + proto.STRING, + number=1, + ) + operation_id: str = proto.Field( + proto.STRING, + number=2, + ) + vm_identity_token: str = proto.Field( + proto.STRING, + number=5, + ) + progress: float = proto.Field( + proto.DOUBLE, + number=3, + ) + status: status_pb2.Status = proto.Field( + proto.MESSAGE, + number=6, + message=status_pb2.Status, + ) + + +class InternalUpdateGraphOperationResponse(proto.Message): + r"""Internal response proto, do not use directly.""" + + +__all__ = tuple(sorted(__protobuf__.manifest)) diff --git a/google/cloud/spanner_admin_instance_v1/__init__.py b/google/cloud/spanner_admin_instance_v1/__init__.py index 53f32d3b47..5368b59895 100644 --- a/google/cloud/spanner_admin_instance_v1/__init__.py +++ b/google/cloud/spanner_admin_instance_v1/__init__.py @@ -1,29 +1,110 @@ # -*- coding: utf-8 -*- -# -# Copyright 2018 Google LLC +# Copyright 2025 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # -# https://www.apache.org/licenses/LICENSE-2.0 +# http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. +# +from google.cloud.spanner_admin_instance_v1 import gapic_version as package_version -from __future__ import absolute_import - -from google.cloud.spanner_admin_instance_v1 import types -from google.cloud.spanner_admin_instance_v1.gapic import enums -from google.cloud.spanner_admin_instance_v1.gapic import instance_admin_client +__version__ = package_version.__version__ -class InstanceAdminClient(instance_admin_client.InstanceAdminClient): - __doc__ = instance_admin_client.InstanceAdminClient.__doc__ - enums = enums +from .services.instance_admin import InstanceAdminClient +from .services.instance_admin import InstanceAdminAsyncClient +from .types.common import OperationProgress +from .types.common import ReplicaSelection +from .types.common import FulfillmentPeriod +from .types.spanner_instance_admin import AutoscalingConfig +from .types.spanner_instance_admin import CreateInstanceConfigMetadata +from .types.spanner_instance_admin import CreateInstanceConfigRequest +from .types.spanner_instance_admin import CreateInstanceMetadata +from .types.spanner_instance_admin import CreateInstancePartitionMetadata +from .types.spanner_instance_admin import CreateInstancePartitionRequest +from .types.spanner_instance_admin import CreateInstanceRequest +from .types.spanner_instance_admin import DeleteInstanceConfigRequest +from .types.spanner_instance_admin import DeleteInstancePartitionRequest +from .types.spanner_instance_admin import DeleteInstanceRequest +from .types.spanner_instance_admin import FreeInstanceMetadata +from .types.spanner_instance_admin import GetInstanceConfigRequest +from .types.spanner_instance_admin import GetInstancePartitionRequest +from .types.spanner_instance_admin import GetInstanceRequest +from .types.spanner_instance_admin import Instance +from .types.spanner_instance_admin import InstanceConfig +from .types.spanner_instance_admin import InstancePartition +from .types.spanner_instance_admin import ListInstanceConfigOperationsRequest +from .types.spanner_instance_admin import 
ListInstanceConfigOperationsResponse +from .types.spanner_instance_admin import ListInstanceConfigsRequest +from .types.spanner_instance_admin import ListInstanceConfigsResponse +from .types.spanner_instance_admin import ListInstancePartitionOperationsRequest +from .types.spanner_instance_admin import ListInstancePartitionOperationsResponse +from .types.spanner_instance_admin import ListInstancePartitionsRequest +from .types.spanner_instance_admin import ListInstancePartitionsResponse +from .types.spanner_instance_admin import ListInstancesRequest +from .types.spanner_instance_admin import ListInstancesResponse +from .types.spanner_instance_admin import MoveInstanceMetadata +from .types.spanner_instance_admin import MoveInstanceRequest +from .types.spanner_instance_admin import MoveInstanceResponse +from .types.spanner_instance_admin import ReplicaComputeCapacity +from .types.spanner_instance_admin import ReplicaInfo +from .types.spanner_instance_admin import UpdateInstanceConfigMetadata +from .types.spanner_instance_admin import UpdateInstanceConfigRequest +from .types.spanner_instance_admin import UpdateInstanceMetadata +from .types.spanner_instance_admin import UpdateInstancePartitionMetadata +from .types.spanner_instance_admin import UpdateInstancePartitionRequest +from .types.spanner_instance_admin import UpdateInstanceRequest -__all__ = ("enums", "types", "InstanceAdminClient") +__all__ = ( + "InstanceAdminAsyncClient", + "AutoscalingConfig", + "CreateInstanceConfigMetadata", + "CreateInstanceConfigRequest", + "CreateInstanceMetadata", + "CreateInstancePartitionMetadata", + "CreateInstancePartitionRequest", + "CreateInstanceRequest", + "DeleteInstanceConfigRequest", + "DeleteInstancePartitionRequest", + "DeleteInstanceRequest", + "FreeInstanceMetadata", + "FulfillmentPeriod", + "GetInstanceConfigRequest", + "GetInstancePartitionRequest", + "GetInstanceRequest", + "Instance", + "InstanceAdminClient", + "InstanceConfig", + "InstancePartition", + "ListInstanceConfigOperationsRequest", + "ListInstanceConfigOperationsResponse", + "ListInstanceConfigsRequest", + "ListInstanceConfigsResponse", + "ListInstancePartitionOperationsRequest", + "ListInstancePartitionOperationsResponse", + "ListInstancePartitionsRequest", + "ListInstancePartitionsResponse", + "ListInstancesRequest", + "ListInstancesResponse", + "MoveInstanceMetadata", + "MoveInstanceRequest", + "MoveInstanceResponse", + "OperationProgress", + "ReplicaComputeCapacity", + "ReplicaInfo", + "ReplicaSelection", + "UpdateInstanceConfigMetadata", + "UpdateInstanceConfigRequest", + "UpdateInstanceMetadata", + "UpdateInstancePartitionMetadata", + "UpdateInstancePartitionRequest", + "UpdateInstanceRequest", +) diff --git a/google/cloud/spanner_admin_instance_v1/gapic/enums.py b/google/cloud/spanner_admin_instance_v1/gapic/enums.py deleted file mode 100644 index cfb40655be..0000000000 --- a/google/cloud/spanner_admin_instance_v1/gapic/enums.py +++ /dev/null @@ -1,75 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. - -"""Wrappers for protocol buffer enum types.""" - -import enum - - -class Instance(object): - class State(enum.IntEnum): - """ - Indicates the current state of the instance. - - Attributes: - STATE_UNSPECIFIED (int): Not specified. - CREATING (int): The instance is still being created. Resources may not be - available yet, and operations such as database creation may not - work. - READY (int): The instance is fully created and ready to do work such as - creating databases. - """ - - STATE_UNSPECIFIED = 0 - CREATING = 1 - READY = 2 - - -class ReplicaInfo(object): - class ReplicaType(enum.IntEnum): - """ - Indicates the type of replica. See the `replica types - documentation `__ - for more details. - - Attributes: - TYPE_UNSPECIFIED (int): Not specified. - READ_WRITE (int): Read-write replicas support both reads and writes. These replicas: - - - Maintain a full copy of your data. - - Serve reads. - - Can vote whether to commit a write. - - Participate in leadership election. - - Are eligible to become a leader. - READ_ONLY (int): Read-only replicas only support reads (not writes). Read-only replicas: - - - Maintain a full copy of your data. - - Serve reads. - - Do not participate in voting to commit writes. - - Are not eligible to become a leader. - WITNESS (int): Witness replicas don't support reads but do participate in voting to - commit writes. Witness replicas: - - - Do not maintain a full copy of data. - - Do not serve reads. - - Vote whether to commit writes. - - Participate in leader election but are not eligible to become leader. - """ - - TYPE_UNSPECIFIED = 0 - READ_WRITE = 1 - READ_ONLY = 2 - WITNESS = 3 diff --git a/google/cloud/spanner_admin_instance_v1/gapic/instance_admin_client.py b/google/cloud/spanner_admin_instance_v1/gapic/instance_admin_client.py deleted file mode 100644 index 0dd85486b2..0000000000 --- a/google/cloud/spanner_admin_instance_v1/gapic/instance_admin_client.py +++ /dev/null @@ -1,1223 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""Accesses the google.spanner.admin.instance.v1 InstanceAdmin API.""" - -import functools -import pkg_resources -import warnings - -from google.oauth2 import service_account -import google.api_core.client_options -import google.api_core.gapic_v1.client_info -import google.api_core.gapic_v1.config -import google.api_core.gapic_v1.method -import google.api_core.gapic_v1.routing_header -import google.api_core.grpc_helpers -import google.api_core.operation -import google.api_core.operations_v1 -import google.api_core.page_iterator -import google.api_core.path_template -import grpc - -from google.cloud.spanner_admin_instance_v1.gapic import enums -from google.cloud.spanner_admin_instance_v1.gapic import instance_admin_client_config -from google.cloud.spanner_admin_instance_v1.gapic.transports import ( - instance_admin_grpc_transport, -) -from google.cloud.spanner_admin_instance_v1.proto import spanner_instance_admin_pb2 -from google.cloud.spanner_admin_instance_v1.proto import spanner_instance_admin_pb2_grpc -from google.iam.v1 import iam_policy_pb2 -from google.iam.v1 import options_pb2 -from google.iam.v1 import policy_pb2 -from google.longrunning import operations_pb2 -from google.protobuf import empty_pb2 -from google.protobuf import field_mask_pb2 - - -_GAPIC_LIBRARY_VERSION = pkg_resources.get_distribution("google-cloud-spanner").version - - -class InstanceAdminClient(object): - """ - Cloud Spanner Instance Admin API - - The Cloud Spanner Instance Admin API can be used to create, delete, - modify and list instances. Instances are dedicated Cloud Spanner serving - and storage resources to be used by Cloud Spanner databases. - - Each instance has a "configuration", which dictates where the - serving resources for the Cloud Spanner instance are located (e.g., - US-central, Europe). Configurations are created by Google based on - resource availability. - - Cloud Spanner billing is based on the instances that exist and their - sizes. After an instance exists, there are no additional - per-database or per-operation charges for use of the instance - (though there may be additional network bandwidth charges). - Instances offer isolation: problems with databases in one instance - will not affect other instances. However, within an instance - databases can affect each other. For example, if one database in an - instance receives a lot of requests and consumes most of the - instance resources, fewer resources are available for other - databases in that instance, and their performance may suffer. - """ - - SERVICE_ADDRESS = "spanner.googleapis.com:443" - """The default address of the service.""" - - # The name of the interface for this client. This is the key used to - # find the method configuration in the client_config dictionary. - _INTERFACE_NAME = "google.spanner.admin.instance.v1.InstanceAdmin" - - @classmethod - def from_service_account_file(cls, filename, *args, **kwargs): - """Creates an instance of this client using the provided credentials - file. - - Args: - filename (str): The path to the service account private key json - file. - args: Additional arguments to pass to the constructor. - kwargs: Additional arguments to pass to the constructor. - - Returns: - InstanceAdminClient: The constructed client. 
- """ - credentials = service_account.Credentials.from_service_account_file(filename) - kwargs["credentials"] = credentials - return cls(*args, **kwargs) - - from_service_account_json = from_service_account_file - - @classmethod - def instance_path(cls, project, instance): - """Return a fully-qualified instance string.""" - return google.api_core.path_template.expand( - "projects/{project}/instances/{instance}", - project=project, - instance=instance, - ) - - @classmethod - def instance_config_path(cls, project, instance_config): - """Return a fully-qualified instance_config string.""" - return google.api_core.path_template.expand( - "projects/{project}/instanceConfigs/{instance_config}", - project=project, - instance_config=instance_config, - ) - - @classmethod - def project_path(cls, project): - """Return a fully-qualified project string.""" - return google.api_core.path_template.expand( - "projects/{project}", project=project - ) - - def __init__( - self, - transport=None, - channel=None, - credentials=None, - client_config=None, - client_info=None, - client_options=None, - ): - """Constructor. - - Args: - transport (Union[~.InstanceAdminGrpcTransport, - Callable[[~.Credentials, type], ~.InstanceAdminGrpcTransport]): A transport - instance, responsible for actually making the API calls. - The default transport uses the gRPC protocol. - This argument may also be a callable which returns a - transport instance. Callables will be sent the credentials - as the first argument and the default transport class as - the second argument. - channel (grpc.Channel): DEPRECATED. A ``Channel`` instance - through which to make calls. This argument is mutually exclusive - with ``credentials``; providing both will raise an exception. - credentials (google.auth.credentials.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If none - are specified, the client will attempt to ascertain the - credentials from the environment. - This argument is mutually exclusive with providing a - transport instance to ``transport``; doing so will raise - an exception. - client_config (dict): DEPRECATED. A dictionary of call options for - each method. If not specified, the default configuration is used. - client_info (google.api_core.gapic_v1.client_info.ClientInfo): - The client info used to send a user-agent string along with - API requests. If ``None``, then default info will be used. - Generally, you only need to set this if you're developing - your own client library. - client_options (Union[dict, google.api_core.client_options.ClientOptions]): - Client options used to set user options on the client. API Endpoint - should be set through client_options. - """ - # Raise deprecation warnings for things we want to go away. - if client_config is not None: - warnings.warn( - "The `client_config` argument is deprecated.", - PendingDeprecationWarning, - stacklevel=2, - ) - else: - client_config = instance_admin_client_config.config - - if channel: - warnings.warn( - "The `channel` argument is deprecated; use " "`transport` instead.", - PendingDeprecationWarning, - stacklevel=2, - ) - - api_endpoint = self.SERVICE_ADDRESS - if client_options: - if type(client_options) == dict: - client_options = google.api_core.client_options.from_dict( - client_options - ) - if client_options.api_endpoint: - api_endpoint = client_options.api_endpoint - - # Instantiate the transport. 
- # The transport is responsible for handling serialization and - # deserialization and actually sending data to the service. - if transport: - if callable(transport): - self.transport = transport( - credentials=credentials, - default_class=instance_admin_grpc_transport.InstanceAdminGrpcTransport, - address=api_endpoint, - ) - else: - if credentials: - raise ValueError( - "Received both a transport instance and " - "credentials; these are mutually exclusive." - ) - self.transport = transport - else: - self.transport = instance_admin_grpc_transport.InstanceAdminGrpcTransport( - address=api_endpoint, channel=channel, credentials=credentials - ) - - if client_info is None: - client_info = google.api_core.gapic_v1.client_info.ClientInfo( - gapic_version=_GAPIC_LIBRARY_VERSION - ) - else: - client_info.gapic_version = _GAPIC_LIBRARY_VERSION - self._client_info = client_info - - # Parse out the default settings for retry and timeout for each RPC - # from the client configuration. - # (Ordinarily, these are the defaults specified in the `*_config.py` - # file next to this one.) - self._method_configs = google.api_core.gapic_v1.config.parse_method_configs( - client_config["interfaces"][self._INTERFACE_NAME] - ) - - # Save a dictionary of cached API call functions. - # These are the actual callables which invoke the proper - # transport methods, wrapped with `wrap_method` to add retry, - # timeout, and the like. - self._inner_api_calls = {} - - # Service calls - def list_instance_configs( - self, - parent, - page_size=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Lists the supported instance configurations for a given project. - - Example: - >>> from google.cloud import spanner_admin_instance_v1 - >>> - >>> client = spanner_admin_instance_v1.InstanceAdminClient() - >>> - >>> parent = client.project_path('[PROJECT]') - >>> - >>> # Iterate over all results - >>> for element in client.list_instance_configs(parent): - ... # process element - ... pass - >>> - >>> - >>> # Alternatively: - >>> - >>> # Iterate over results one page at a time - >>> for page in client.list_instance_configs(parent).pages: - ... for element in page: - ... # process element - ... pass - - Args: - parent (str): Required. The name of the project for which a list of supported instance - configurations is requested. Values are of the form - ``projects/``. - page_size (int): The maximum number of resources contained in the - underlying API response. If page streaming is performed per- - resource, this parameter does not affect the return value. If page - streaming is performed per-page, this determines the maximum number - of resources in a page. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.api_core.page_iterator.PageIterator` instance. - An iterable of :class:`~google.cloud.spanner_admin_instance_v1.types.InstanceConfig` instances. - You can also iterate over the pages of the response - using its `pages` property. 
- - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "list_instance_configs" not in self._inner_api_calls: - self._inner_api_calls[ - "list_instance_configs" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.list_instance_configs, - default_retry=self._method_configs["ListInstanceConfigs"].retry, - default_timeout=self._method_configs["ListInstanceConfigs"].timeout, - client_info=self._client_info, - ) - - request = spanner_instance_admin_pb2.ListInstanceConfigsRequest( - parent=parent, page_size=page_size - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("parent", parent)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - iterator = google.api_core.page_iterator.GRPCIterator( - client=None, - method=functools.partial( - self._inner_api_calls["list_instance_configs"], - retry=retry, - timeout=timeout, - metadata=metadata, - ), - request=request, - items_field="instance_configs", - request_token_field="page_token", - response_token_field="next_page_token", - ) - return iterator - - def get_instance_config( - self, - name, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Gets information about a particular instance configuration. - - Example: - >>> from google.cloud import spanner_admin_instance_v1 - >>> - >>> client = spanner_admin_instance_v1.InstanceAdminClient() - >>> - >>> name = client.instance_config_path('[PROJECT]', '[INSTANCE_CONFIG]') - >>> - >>> response = client.get_instance_config(name) - - Args: - name (str): Required. The name of the requested instance configuration. Values are - of the form ``projects//instanceConfigs/``. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_admin_instance_v1.types.InstanceConfig` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. 
- if "get_instance_config" not in self._inner_api_calls: - self._inner_api_calls[ - "get_instance_config" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.get_instance_config, - default_retry=self._method_configs["GetInstanceConfig"].retry, - default_timeout=self._method_configs["GetInstanceConfig"].timeout, - client_info=self._client_info, - ) - - request = spanner_instance_admin_pb2.GetInstanceConfigRequest(name=name) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("name", name)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["get_instance_config"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def list_instances( - self, - parent, - page_size=None, - filter_=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Lists all instances in the given project. - - Example: - >>> from google.cloud import spanner_admin_instance_v1 - >>> - >>> client = spanner_admin_instance_v1.InstanceAdminClient() - >>> - >>> parent = client.project_path('[PROJECT]') - >>> - >>> # Iterate over all results - >>> for element in client.list_instances(parent): - ... # process element - ... pass - >>> - >>> - >>> # Alternatively: - >>> - >>> # Iterate over results one page at a time - >>> for page in client.list_instances(parent).pages: - ... for element in page: - ... # process element - ... pass - - Args: - parent (str): Required. The name of the project for which a list of instances is - requested. Values are of the form ``projects/``. - page_size (int): The maximum number of resources contained in the - underlying API response. If page streaming is performed per- - resource, this parameter does not affect the return value. If page - streaming is performed per-page, this determines the maximum number - of resources in a page. - filter_ (str): An expression for filtering the results of the request. Filter rules are - case insensitive. The fields eligible for filtering are: - - - ``name`` - - ``display_name`` - - ``labels.key`` where key is the name of a label - - Some examples of using filters are: - - - ``name:*`` --> The instance has a name. - - ``name:Howl`` --> The instance's name contains the string "howl". - - ``name:HOWL`` --> Equivalent to above. - - ``NAME:howl`` --> Equivalent to above. - - ``labels.env:*`` --> The instance has the label "env". - - ``labels.env:dev`` --> The instance has the label "env" and the value - of the label contains the string "dev". - - ``name:howl labels.env:dev`` --> The instance's name contains "howl" - and it has the label "env" with its value containing "dev". - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.api_core.page_iterator.PageIterator` instance. - An iterable of :class:`~google.cloud.spanner_admin_instance_v1.types.Instance` instances. 
- You can also iterate over the pages of the response - using its `pages` property. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "list_instances" not in self._inner_api_calls: - self._inner_api_calls[ - "list_instances" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.list_instances, - default_retry=self._method_configs["ListInstances"].retry, - default_timeout=self._method_configs["ListInstances"].timeout, - client_info=self._client_info, - ) - - request = spanner_instance_admin_pb2.ListInstancesRequest( - parent=parent, page_size=page_size, filter=filter_ - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("parent", parent)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - iterator = google.api_core.page_iterator.GRPCIterator( - client=None, - method=functools.partial( - self._inner_api_calls["list_instances"], - retry=retry, - timeout=timeout, - metadata=metadata, - ), - request=request, - items_field="instances", - request_token_field="page_token", - response_token_field="next_page_token", - ) - return iterator - - def get_instance( - self, - name, - field_mask=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Gets information about a particular instance. - - Example: - >>> from google.cloud import spanner_admin_instance_v1 - >>> - >>> client = spanner_admin_instance_v1.InstanceAdminClient() - >>> - >>> name = client.instance_path('[PROJECT]', '[INSTANCE]') - >>> - >>> response = client.get_instance(name) - - Args: - name (str): Required. The name of the requested instance. Values are of the form - ``projects//instances/``. - field_mask (Union[dict, ~google.cloud.spanner_admin_instance_v1.types.FieldMask]): If field\_mask is present, specifies the subset of ``Instance`` fields - that should be returned. If absent, all ``Instance`` fields are - returned. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_admin_instance_v1.types.FieldMask` - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_admin_instance_v1.types.Instance` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. 
- if "get_instance" not in self._inner_api_calls: - self._inner_api_calls[ - "get_instance" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.get_instance, - default_retry=self._method_configs["GetInstance"].retry, - default_timeout=self._method_configs["GetInstance"].timeout, - client_info=self._client_info, - ) - - request = spanner_instance_admin_pb2.GetInstanceRequest( - name=name, field_mask=field_mask - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("name", name)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["get_instance"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def create_instance( - self, - parent, - instance_id, - instance, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Creates an instance and begins preparing it to begin serving. The - returned ``long-running operation`` can be used to track the progress of - preparing the new instance. The instance name is assigned by the caller. - If the named instance already exists, ``CreateInstance`` returns - ``ALREADY_EXISTS``. - - Immediately upon completion of this request: - - - The instance is readable via the API, with all requested attributes - but no allocated resources. Its state is ``CREATING``. - - Until completion of the returned operation: - - - Cancelling the operation renders the instance immediately unreadable - via the API. - - The instance can be deleted. - - All other attempts to modify the instance are rejected. - - Upon completion of the returned operation: - - - Billing for all successfully-allocated resources begins (some types - may have lower than the requested levels). - - Databases can be created in the instance. - - The instance's allocated resource levels are readable via the API. - - The instance's state becomes ``READY``. - - The returned ``long-running operation`` will have a name of the format - ``/operations/`` and can be used to track - creation of the instance. The ``metadata`` field type is - ``CreateInstanceMetadata``. The ``response`` field type is ``Instance``, - if successful. - - Example: - >>> from google.cloud import spanner_admin_instance_v1 - >>> - >>> client = spanner_admin_instance_v1.InstanceAdminClient() - >>> - >>> parent = client.project_path('[PROJECT]') - >>> - >>> # TODO: Initialize `instance_id`: - >>> instance_id = '' - >>> - >>> # TODO: Initialize `instance`: - >>> instance = {} - >>> - >>> response = client.create_instance(parent, instance_id, instance) - >>> - >>> def callback(operation_future): - ... # Handle result. - ... result = operation_future.result() - >>> - >>> response.add_done_callback(callback) - >>> - >>> # Handle metadata. - >>> metadata = response.metadata() - - Args: - parent (str): Required. The name of the project in which to create the instance. - Values are of the form ``projects/``. - instance_id (str): Required. The ID of the instance to create. Valid identifiers are of the - form ``[a-z][-a-z0-9]*[a-z0-9]`` and must be between 2 and 64 characters - in length. - instance (Union[dict, ~google.cloud.spanner_admin_instance_v1.types.Instance]): Required. The instance to create. The name may be omitted, but if - specified must be ``/instances/``. 
- - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_admin_instance_v1.types.Instance` - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.api_core.operation.Operation` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "create_instance" not in self._inner_api_calls: - self._inner_api_calls[ - "create_instance" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.create_instance, - default_retry=self._method_configs["CreateInstance"].retry, - default_timeout=self._method_configs["CreateInstance"].timeout, - client_info=self._client_info, - ) - - request = spanner_instance_admin_pb2.CreateInstanceRequest( - parent=parent, instance_id=instance_id, instance=instance - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("parent", parent)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - operation = self._inner_api_calls["create_instance"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - return google.api_core.operation.from_gapic( - operation, - self.transport._operations_client, - spanner_instance_admin_pb2.Instance, - metadata_type=spanner_instance_admin_pb2.CreateInstanceMetadata, - ) - - def update_instance( - self, - instance, - field_mask, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Updates an instance, and begins allocating or releasing resources as - requested. The returned ``long-running operation`` can be used to track - the progress of updating the instance. If the named instance does not - exist, returns ``NOT_FOUND``. - - Immediately upon completion of this request: - - - For resource types for which a decrease in the instance's allocation - has been requested, billing is based on the newly-requested level. - - Until completion of the returned operation: - - - Cancelling the operation sets its metadata's ``cancel_time``, and - begins restoring resources to their pre-request values. The operation - is guaranteed to succeed at undoing all resource changes, after which - point it terminates with a ``CANCELLED`` status. - - All other attempts to modify the instance are rejected. - - Reading the instance via the API continues to give the pre-request - resource levels. - - Upon completion of the returned operation: - - - Billing begins for all successfully-allocated resources (some types - may have lower than the requested levels). - - All newly-reserved resources are available for serving the instance's - tables. - - The instance's new resource levels are readable via the API. 
- - The returned ``long-running operation`` will have a name of the format - ``/operations/`` and can be used to track - the instance modification. The ``metadata`` field type is - ``UpdateInstanceMetadata``. The ``response`` field type is ``Instance``, - if successful. - - Authorization requires ``spanner.instances.update`` permission on - resource ``name``. - - Example: - >>> from google.cloud import spanner_admin_instance_v1 - >>> - >>> client = spanner_admin_instance_v1.InstanceAdminClient() - >>> - >>> # TODO: Initialize `instance`: - >>> instance = {} - >>> - >>> # TODO: Initialize `field_mask`: - >>> field_mask = {} - >>> - >>> response = client.update_instance(instance, field_mask) - >>> - >>> def callback(operation_future): - ... # Handle result. - ... result = operation_future.result() - >>> - >>> response.add_done_callback(callback) - >>> - >>> # Handle metadata. - >>> metadata = response.metadata() - - Args: - instance (Union[dict, ~google.cloud.spanner_admin_instance_v1.types.Instance]): Required. The instance to update, which must always include the instance - name. Otherwise, only fields mentioned in ``field_mask`` need be - included. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_admin_instance_v1.types.Instance` - field_mask (Union[dict, ~google.cloud.spanner_admin_instance_v1.types.FieldMask]): Required. A mask specifying which fields in ``Instance`` should be - updated. The field mask must always be specified; this prevents any - future fields in ``Instance`` from being erased accidentally by clients - that do not know about them. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_admin_instance_v1.types.FieldMask` - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.api_core.operation.Operation` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. 
- if "update_instance" not in self._inner_api_calls: - self._inner_api_calls[ - "update_instance" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.update_instance, - default_retry=self._method_configs["UpdateInstance"].retry, - default_timeout=self._method_configs["UpdateInstance"].timeout, - client_info=self._client_info, - ) - - request = spanner_instance_admin_pb2.UpdateInstanceRequest( - instance=instance, field_mask=field_mask - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("instance.name", instance.name)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - operation = self._inner_api_calls["update_instance"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - return google.api_core.operation.from_gapic( - operation, - self.transport._operations_client, - spanner_instance_admin_pb2.Instance, - metadata_type=spanner_instance_admin_pb2.UpdateInstanceMetadata, - ) - - def delete_instance( - self, - name, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Deletes an instance. - - Immediately upon completion of the request: - - - Billing ceases for all of the instance's reserved resources. - - Soon afterward: - - - The instance and *all of its databases* immediately and irrevocably - disappear from the API. All data in the databases is permanently - deleted. - - Example: - >>> from google.cloud import spanner_admin_instance_v1 - >>> - >>> client = spanner_admin_instance_v1.InstanceAdminClient() - >>> - >>> name = client.instance_path('[PROJECT]', '[INSTANCE]') - >>> - >>> client.delete_instance(name) - - Args: - name (str): Required. The name of the instance to be deleted. Values are of the form - ``projects//instances/`` - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. 
- if "delete_instance" not in self._inner_api_calls: - self._inner_api_calls[ - "delete_instance" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.delete_instance, - default_retry=self._method_configs["DeleteInstance"].retry, - default_timeout=self._method_configs["DeleteInstance"].timeout, - client_info=self._client_info, - ) - - request = spanner_instance_admin_pb2.DeleteInstanceRequest(name=name) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("name", name)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - self._inner_api_calls["delete_instance"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def set_iam_policy( - self, - resource, - policy, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Sets the access control policy on an instance resource. Replaces any - existing policy. - - Authorization requires ``spanner.instances.setIamPolicy`` on - ``resource``. - - Example: - >>> from google.cloud import spanner_admin_instance_v1 - >>> - >>> client = spanner_admin_instance_v1.InstanceAdminClient() - >>> - >>> # TODO: Initialize `resource`: - >>> resource = '' - >>> - >>> # TODO: Initialize `policy`: - >>> policy = {} - >>> - >>> response = client.set_iam_policy(resource, policy) - - Args: - resource (str): REQUIRED: The resource for which the policy is being specified. - See the operation documentation for the appropriate value for this field. - policy (Union[dict, ~google.cloud.spanner_admin_instance_v1.types.Policy]): REQUIRED: The complete policy to be applied to the ``resource``. The - size of the policy is limited to a few 10s of KB. An empty policy is a - valid policy but certain Cloud Platform services (such as Projects) - might reject them. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_admin_instance_v1.types.Policy` - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_admin_instance_v1.types.Policy` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. 
- if "set_iam_policy" not in self._inner_api_calls: - self._inner_api_calls[ - "set_iam_policy" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.set_iam_policy, - default_retry=self._method_configs["SetIamPolicy"].retry, - default_timeout=self._method_configs["SetIamPolicy"].timeout, - client_info=self._client_info, - ) - - request = iam_policy_pb2.SetIamPolicyRequest(resource=resource, policy=policy) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("resource", resource)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["set_iam_policy"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def get_iam_policy( - self, - resource, - options_=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Gets the access control policy for an instance resource. Returns an - empty policy if an instance exists but does not have a policy set. - - Authorization requires ``spanner.instances.getIamPolicy`` on - ``resource``. - - Example: - >>> from google.cloud import spanner_admin_instance_v1 - >>> - >>> client = spanner_admin_instance_v1.InstanceAdminClient() - >>> - >>> # TODO: Initialize `resource`: - >>> resource = '' - >>> - >>> response = client.get_iam_policy(resource) - - Args: - resource (str): REQUIRED: The resource for which the policy is being requested. - See the operation documentation for the appropriate value for this field. - options_ (Union[dict, ~google.cloud.spanner_admin_instance_v1.types.GetPolicyOptions]): OPTIONAL: A ``GetPolicyOptions`` object for specifying options to - ``GetIamPolicy``. This field is only used by Cloud IAM. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_admin_instance_v1.types.GetPolicyOptions` - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_admin_instance_v1.types.Policy` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. 
- if "get_iam_policy" not in self._inner_api_calls: - self._inner_api_calls[ - "get_iam_policy" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.get_iam_policy, - default_retry=self._method_configs["GetIamPolicy"].retry, - default_timeout=self._method_configs["GetIamPolicy"].timeout, - client_info=self._client_info, - ) - - request = iam_policy_pb2.GetIamPolicyRequest( - resource=resource, options=options_ - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("resource", resource)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["get_iam_policy"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def test_iam_permissions( - self, - resource, - permissions, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Returns permissions that the caller has on the specified instance - resource. - - Attempting this RPC on a non-existent Cloud Spanner instance resource - will result in a NOT\_FOUND error if the user has - ``spanner.instances.list`` permission on the containing Google Cloud - Project. Otherwise returns an empty set of permissions. - - Example: - >>> from google.cloud import spanner_admin_instance_v1 - >>> - >>> client = spanner_admin_instance_v1.InstanceAdminClient() - >>> - >>> # TODO: Initialize `resource`: - >>> resource = '' - >>> - >>> # TODO: Initialize `permissions`: - >>> permissions = [] - >>> - >>> response = client.test_iam_permissions(resource, permissions) - - Args: - resource (str): REQUIRED: The resource for which the policy detail is being requested. - See the operation documentation for the appropriate value for this field. - permissions (list[str]): The set of permissions to check for the ``resource``. Permissions with - wildcards (such as '*' or 'storage.*') are not allowed. For more - information see `IAM - Overview `__. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_admin_instance_v1.types.TestIamPermissionsResponse` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. 
- if "test_iam_permissions" not in self._inner_api_calls: - self._inner_api_calls[ - "test_iam_permissions" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.test_iam_permissions, - default_retry=self._method_configs["TestIamPermissions"].retry, - default_timeout=self._method_configs["TestIamPermissions"].timeout, - client_info=self._client_info, - ) - - request = iam_policy_pb2.TestIamPermissionsRequest( - resource=resource, permissions=permissions - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("resource", resource)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["test_iam_permissions"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) diff --git a/google/cloud/spanner_admin_instance_v1/gapic/instance_admin_client_config.py b/google/cloud/spanner_admin_instance_v1/gapic/instance_admin_client_config.py deleted file mode 100644 index 4331f879de..0000000000 --- a/google/cloud/spanner_admin_instance_v1/gapic/instance_admin_client_config.py +++ /dev/null @@ -1,73 +0,0 @@ -config = { - "interfaces": { - "google.spanner.admin.instance.v1.InstanceAdmin": { - "retry_codes": { - "idempotent": ["DEADLINE_EXCEEDED", "UNAVAILABLE"], - "non_idempotent": [], - }, - "retry_params": { - "default": { - "initial_retry_delay_millis": 1000, - "retry_delay_multiplier": 1.3, - "max_retry_delay_millis": 32000, - "initial_rpc_timeout_millis": 60000, - "rpc_timeout_multiplier": 1.0, - "max_rpc_timeout_millis": 60000, - "total_timeout_millis": 600000, - } - }, - "methods": { - "ListInstanceConfigs": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "GetInstanceConfig": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "ListInstances": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "GetInstance": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "CreateInstance": { - "timeout_millis": 30000, - "retry_codes_name": "non_idempotent", - "retry_params_name": "default", - }, - "UpdateInstance": { - "timeout_millis": 30000, - "retry_codes_name": "non_idempotent", - "retry_params_name": "default", - }, - "DeleteInstance": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "SetIamPolicy": { - "timeout_millis": 30000, - "retry_codes_name": "non_idempotent", - "retry_params_name": "default", - }, - "GetIamPolicy": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "TestIamPermissions": { - "timeout_millis": 30000, - "retry_codes_name": "non_idempotent", - "retry_params_name": "default", - }, - }, - } - } -} diff --git a/google/cloud/spanner_admin_instance_v1/gapic/transports/instance_admin_grpc_transport.py b/google/cloud/spanner_admin_instance_v1/gapic/transports/instance_admin_grpc_transport.py deleted file mode 100644 index 1d3c404bf6..0000000000 --- a/google/cloud/spanner_admin_instance_v1/gapic/transports/instance_admin_grpc_transport.py +++ /dev/null @@ -1,340 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with 
the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import google.api_core.grpc_helpers -import google.api_core.operations_v1 - -from google.cloud.spanner_admin_instance_v1.proto import spanner_instance_admin_pb2_grpc - - -class InstanceAdminGrpcTransport(object): - """gRPC transport class providing stubs for - google.spanner.admin.instance.v1 InstanceAdmin API. - - The transport provides access to the raw gRPC stubs, - which can be used to take advantage of advanced - features of gRPC. - """ - - # The scopes needed to make gRPC calls to all of the methods defined - # in this service. - _OAUTH_SCOPES = ( - "https://www.googleapis.com/auth/cloud-platform", - "https://www.googleapis.com/auth/spanner.admin", - ) - - def __init__( - self, channel=None, credentials=None, address="spanner.googleapis.com:443" - ): - """Instantiate the transport class. - - Args: - channel (grpc.Channel): A ``Channel`` instance through - which to make calls. This argument is mutually exclusive - with ``credentials``; providing both will raise an exception. - credentials (google.auth.credentials.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If none - are specified, the client will attempt to ascertain the - credentials from the environment. - address (str): The address where the service is hosted. - """ - # If both `channel` and `credentials` are specified, raise an - # exception (channels come with credentials baked in already). - if channel is not None and credentials is not None: - raise ValueError( - "The `channel` and `credentials` arguments are mutually " "exclusive." - ) - - # Create the channel. - if channel is None: - channel = self.create_channel( - address=address, - credentials=credentials, - options={ - "grpc.max_send_message_length": -1, - "grpc.max_receive_message_length": -1, - }.items(), - ) - - self._channel = channel - - # gRPC uses objects called "stubs" that are bound to the - # channel and provide a basic method for each RPC. - self._stubs = { - "instance_admin_stub": spanner_instance_admin_pb2_grpc.InstanceAdminStub( - channel - ) - } - - # Because this API includes a method that returns a - # long-running operation (proto: google.longrunning.Operation), - # instantiate an LRO client. - self._operations_client = google.api_core.operations_v1.OperationsClient( - channel - ) - - @classmethod - def create_channel( - cls, address="spanner.googleapis.com:443", credentials=None, **kwargs - ): - """Create and return a gRPC channel object. - - Args: - address (str): The host for the channel to use. - credentials (~.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If - none are specified, the client will attempt to ascertain - the credentials from the environment. - kwargs (dict): Keyword arguments, which are passed to the - channel creation. - - Returns: - grpc.Channel: A gRPC channel object. 
- """ - return google.api_core.grpc_helpers.create_channel( - address, credentials=credentials, scopes=cls._OAUTH_SCOPES, **kwargs - ) - - @property - def channel(self): - """The gRPC channel used by the transport. - - Returns: - grpc.Channel: A gRPC channel object. - """ - return self._channel - - @property - def list_instance_configs(self): - """Return the gRPC stub for :meth:`InstanceAdminClient.list_instance_configs`. - - Lists the supported instance configurations for a given project. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["instance_admin_stub"].ListInstanceConfigs - - @property - def get_instance_config(self): - """Return the gRPC stub for :meth:`InstanceAdminClient.get_instance_config`. - - Gets information about a particular instance configuration. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["instance_admin_stub"].GetInstanceConfig - - @property - def list_instances(self): - """Return the gRPC stub for :meth:`InstanceAdminClient.list_instances`. - - Lists all instances in the given project. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["instance_admin_stub"].ListInstances - - @property - def get_instance(self): - """Return the gRPC stub for :meth:`InstanceAdminClient.get_instance`. - - Gets information about a particular instance. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["instance_admin_stub"].GetInstance - - @property - def create_instance(self): - """Return the gRPC stub for :meth:`InstanceAdminClient.create_instance`. - - Creates an instance and begins preparing it to begin serving. The - returned ``long-running operation`` can be used to track the progress of - preparing the new instance. The instance name is assigned by the caller. - If the named instance already exists, ``CreateInstance`` returns - ``ALREADY_EXISTS``. - - Immediately upon completion of this request: - - - The instance is readable via the API, with all requested attributes - but no allocated resources. Its state is ``CREATING``. - - Until completion of the returned operation: - - - Cancelling the operation renders the instance immediately unreadable - via the API. - - The instance can be deleted. - - All other attempts to modify the instance are rejected. - - Upon completion of the returned operation: - - - Billing for all successfully-allocated resources begins (some types - may have lower than the requested levels). - - Databases can be created in the instance. - - The instance's allocated resource levels are readable via the API. - - The instance's state becomes ``READY``. - - The returned ``long-running operation`` will have a name of the format - ``/operations/`` and can be used to track - creation of the instance. The ``metadata`` field type is - ``CreateInstanceMetadata``. The ``response`` field type is ``Instance``, - if successful. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. 
- """ - return self._stubs["instance_admin_stub"].CreateInstance - - @property - def update_instance(self): - """Return the gRPC stub for :meth:`InstanceAdminClient.update_instance`. - - Updates an instance, and begins allocating or releasing resources as - requested. The returned ``long-running operation`` can be used to track - the progress of updating the instance. If the named instance does not - exist, returns ``NOT_FOUND``. - - Immediately upon completion of this request: - - - For resource types for which a decrease in the instance's allocation - has been requested, billing is based on the newly-requested level. - - Until completion of the returned operation: - - - Cancelling the operation sets its metadata's ``cancel_time``, and - begins restoring resources to their pre-request values. The operation - is guaranteed to succeed at undoing all resource changes, after which - point it terminates with a ``CANCELLED`` status. - - All other attempts to modify the instance are rejected. - - Reading the instance via the API continues to give the pre-request - resource levels. - - Upon completion of the returned operation: - - - Billing begins for all successfully-allocated resources (some types - may have lower than the requested levels). - - All newly-reserved resources are available for serving the instance's - tables. - - The instance's new resource levels are readable via the API. - - The returned ``long-running operation`` will have a name of the format - ``/operations/`` and can be used to track - the instance modification. The ``metadata`` field type is - ``UpdateInstanceMetadata``. The ``response`` field type is ``Instance``, - if successful. - - Authorization requires ``spanner.instances.update`` permission on - resource ``name``. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["instance_admin_stub"].UpdateInstance - - @property - def delete_instance(self): - """Return the gRPC stub for :meth:`InstanceAdminClient.delete_instance`. - - Deletes an instance. - - Immediately upon completion of the request: - - - Billing ceases for all of the instance's reserved resources. - - Soon afterward: - - - The instance and *all of its databases* immediately and irrevocably - disappear from the API. All data in the databases is permanently - deleted. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["instance_admin_stub"].DeleteInstance - - @property - def set_iam_policy(self): - """Return the gRPC stub for :meth:`InstanceAdminClient.set_iam_policy`. - - Sets the access control policy on an instance resource. Replaces any - existing policy. - - Authorization requires ``spanner.instances.setIamPolicy`` on - ``resource``. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["instance_admin_stub"].SetIamPolicy - - @property - def get_iam_policy(self): - """Return the gRPC stub for :meth:`InstanceAdminClient.get_iam_policy`. - - Gets the access control policy for an instance resource. Returns an - empty policy if an instance exists but does not have a policy set. - - Authorization requires ``spanner.instances.getIamPolicy`` on - ``resource``. 
- - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["instance_admin_stub"].GetIamPolicy - - @property - def test_iam_permissions(self): - """Return the gRPC stub for :meth:`InstanceAdminClient.test_iam_permissions`. - - Returns permissions that the caller has on the specified instance - resource. - - Attempting this RPC on a non-existent Cloud Spanner instance resource - will result in a NOT\_FOUND error if the user has - ``spanner.instances.list`` permission on the containing Google Cloud - Project. Otherwise returns an empty set of permissions. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["instance_admin_stub"].TestIamPermissions diff --git a/google/cloud/spanner_admin_instance_v1/gapic_metadata.json b/google/cloud/spanner_admin_instance_v1/gapic_metadata.json new file mode 100644 index 0000000000..60fa46718a --- /dev/null +++ b/google/cloud/spanner_admin_instance_v1/gapic_metadata.json @@ -0,0 +1,343 @@ + { + "comment": "This file maps proto services/RPCs to the corresponding library clients/methods", + "language": "python", + "libraryPackage": "google.cloud.spanner_admin_instance_v1", + "protoPackage": "google.spanner.admin.instance.v1", + "schema": "1.0", + "services": { + "InstanceAdmin": { + "clients": { + "grpc": { + "libraryClient": "InstanceAdminClient", + "rpcs": { + "CreateInstance": { + "methods": [ + "create_instance" + ] + }, + "CreateInstanceConfig": { + "methods": [ + "create_instance_config" + ] + }, + "CreateInstancePartition": { + "methods": [ + "create_instance_partition" + ] + }, + "DeleteInstance": { + "methods": [ + "delete_instance" + ] + }, + "DeleteInstanceConfig": { + "methods": [ + "delete_instance_config" + ] + }, + "DeleteInstancePartition": { + "methods": [ + "delete_instance_partition" + ] + }, + "GetIamPolicy": { + "methods": [ + "get_iam_policy" + ] + }, + "GetInstance": { + "methods": [ + "get_instance" + ] + }, + "GetInstanceConfig": { + "methods": [ + "get_instance_config" + ] + }, + "GetInstancePartition": { + "methods": [ + "get_instance_partition" + ] + }, + "ListInstanceConfigOperations": { + "methods": [ + "list_instance_config_operations" + ] + }, + "ListInstanceConfigs": { + "methods": [ + "list_instance_configs" + ] + }, + "ListInstancePartitionOperations": { + "methods": [ + "list_instance_partition_operations" + ] + }, + "ListInstancePartitions": { + "methods": [ + "list_instance_partitions" + ] + }, + "ListInstances": { + "methods": [ + "list_instances" + ] + }, + "MoveInstance": { + "methods": [ + "move_instance" + ] + }, + "SetIamPolicy": { + "methods": [ + "set_iam_policy" + ] + }, + "TestIamPermissions": { + "methods": [ + "test_iam_permissions" + ] + }, + "UpdateInstance": { + "methods": [ + "update_instance" + ] + }, + "UpdateInstanceConfig": { + "methods": [ + "update_instance_config" + ] + }, + "UpdateInstancePartition": { + "methods": [ + "update_instance_partition" + ] + } + } + }, + "grpc-async": { + "libraryClient": "InstanceAdminAsyncClient", + "rpcs": { + "CreateInstance": { + "methods": [ + "create_instance" + ] + }, + "CreateInstanceConfig": { + "methods": [ + "create_instance_config" + ] + }, + "CreateInstancePartition": { + "methods": [ + "create_instance_partition" + ] + }, + "DeleteInstance": { + "methods": [ + "delete_instance" + ] + }, + "DeleteInstanceConfig": { + "methods": [ + 
"delete_instance_config" + ] + }, + "DeleteInstancePartition": { + "methods": [ + "delete_instance_partition" + ] + }, + "GetIamPolicy": { + "methods": [ + "get_iam_policy" + ] + }, + "GetInstance": { + "methods": [ + "get_instance" + ] + }, + "GetInstanceConfig": { + "methods": [ + "get_instance_config" + ] + }, + "GetInstancePartition": { + "methods": [ + "get_instance_partition" + ] + }, + "ListInstanceConfigOperations": { + "methods": [ + "list_instance_config_operations" + ] + }, + "ListInstanceConfigs": { + "methods": [ + "list_instance_configs" + ] + }, + "ListInstancePartitionOperations": { + "methods": [ + "list_instance_partition_operations" + ] + }, + "ListInstancePartitions": { + "methods": [ + "list_instance_partitions" + ] + }, + "ListInstances": { + "methods": [ + "list_instances" + ] + }, + "MoveInstance": { + "methods": [ + "move_instance" + ] + }, + "SetIamPolicy": { + "methods": [ + "set_iam_policy" + ] + }, + "TestIamPermissions": { + "methods": [ + "test_iam_permissions" + ] + }, + "UpdateInstance": { + "methods": [ + "update_instance" + ] + }, + "UpdateInstanceConfig": { + "methods": [ + "update_instance_config" + ] + }, + "UpdateInstancePartition": { + "methods": [ + "update_instance_partition" + ] + } + } + }, + "rest": { + "libraryClient": "InstanceAdminClient", + "rpcs": { + "CreateInstance": { + "methods": [ + "create_instance" + ] + }, + "CreateInstanceConfig": { + "methods": [ + "create_instance_config" + ] + }, + "CreateInstancePartition": { + "methods": [ + "create_instance_partition" + ] + }, + "DeleteInstance": { + "methods": [ + "delete_instance" + ] + }, + "DeleteInstanceConfig": { + "methods": [ + "delete_instance_config" + ] + }, + "DeleteInstancePartition": { + "methods": [ + "delete_instance_partition" + ] + }, + "GetIamPolicy": { + "methods": [ + "get_iam_policy" + ] + }, + "GetInstance": { + "methods": [ + "get_instance" + ] + }, + "GetInstanceConfig": { + "methods": [ + "get_instance_config" + ] + }, + "GetInstancePartition": { + "methods": [ + "get_instance_partition" + ] + }, + "ListInstanceConfigOperations": { + "methods": [ + "list_instance_config_operations" + ] + }, + "ListInstanceConfigs": { + "methods": [ + "list_instance_configs" + ] + }, + "ListInstancePartitionOperations": { + "methods": [ + "list_instance_partition_operations" + ] + }, + "ListInstancePartitions": { + "methods": [ + "list_instance_partitions" + ] + }, + "ListInstances": { + "methods": [ + "list_instances" + ] + }, + "MoveInstance": { + "methods": [ + "move_instance" + ] + }, + "SetIamPolicy": { + "methods": [ + "set_iam_policy" + ] + }, + "TestIamPermissions": { + "methods": [ + "test_iam_permissions" + ] + }, + "UpdateInstance": { + "methods": [ + "update_instance" + ] + }, + "UpdateInstanceConfig": { + "methods": [ + "update_instance_config" + ] + }, + "UpdateInstancePartition": { + "methods": [ + "update_instance_partition" + ] + } + } + } + } + } + } +} diff --git a/google/cloud/spanner_admin_instance_v1/gapic_version.py b/google/cloud/spanner_admin_instance_v1/gapic_version.py new file mode 100644 index 0000000000..fa3f4c040d --- /dev/null +++ b/google/cloud/spanner_admin_instance_v1/gapic_version.py @@ -0,0 +1,16 @@ +# -*- coding: utf-8 -*- +# Copyright 2022 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +__version__ = "3.58.0" # {x-release-please-version} diff --git a/google/cloud/spanner_admin_instance_v1/proto/spanner_instance_admin.proto b/google/cloud/spanner_admin_instance_v1/proto/spanner_instance_admin.proto deleted file mode 100644 index 6a068baca2..0000000000 --- a/google/cloud/spanner_admin_instance_v1/proto/spanner_instance_admin.proto +++ /dev/null @@ -1,603 +0,0 @@ -// Copyright 2020 Google LLC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -syntax = "proto3"; - -package google.spanner.admin.instance.v1; - -import "google/api/annotations.proto"; -import "google/api/client.proto"; -import "google/api/field_behavior.proto"; -import "google/api/resource.proto"; -import "google/iam/v1/iam_policy.proto"; -import "google/iam/v1/policy.proto"; -import "google/longrunning/operations.proto"; -import "google/protobuf/empty.proto"; -import "google/protobuf/field_mask.proto"; -import "google/protobuf/timestamp.proto"; - -option csharp_namespace = "Google.Cloud.Spanner.Admin.Instance.V1"; -option go_package = "google.golang.org/genproto/googleapis/spanner/admin/instance/v1;instance"; -option java_multiple_files = true; -option java_outer_classname = "SpannerInstanceAdminProto"; -option java_package = "com.google.spanner.admin.instance.v1"; -option php_namespace = "Google\\Cloud\\Spanner\\Admin\\Instance\\V1"; - -// Cloud Spanner Instance Admin API -// -// The Cloud Spanner Instance Admin API can be used to create, delete, -// modify and list instances. Instances are dedicated Cloud Spanner serving -// and storage resources to be used by Cloud Spanner databases. -// -// Each instance has a "configuration", which dictates where the -// serving resources for the Cloud Spanner instance are located (e.g., -// US-central, Europe). Configurations are created by Google based on -// resource availability. -// -// Cloud Spanner billing is based on the instances that exist and their -// sizes. After an instance exists, there are no additional -// per-database or per-operation charges for use of the instance -// (though there may be additional network bandwidth charges). -// Instances offer isolation: problems with databases in one instance -// will not affect other instances. However, within an instance -// databases can affect each other. For example, if one database in an -// instance receives a lot of requests and consumes most of the -// instance resources, fewer resources are available for other -// databases in that instance, and their performance may suffer. 
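The file-level comment above is the canonical description of the InstanceAdmin service; the RPCs declared in the service definition below surface in Python as methods on the admin clients. For orientation, a minimal sketch of the usual flow using the handwritten ``google.cloud.spanner`` client; the project ID, instance ID, and display name are placeholders, and error handling is omitted:

.. code-block:: python

    # A minimal sketch, assuming hypothetical identifiers "my-project"
    # and "my-instance".
    from google.cloud import spanner

    client = spanner.Client(project="my-project")

    # Pick an instance configuration (maps to ListInstanceConfigs below).
    config_name = next(iter(client.list_instance_configs())).name

    # CreateInstance returns a long-running operation; block until the
    # instance reaches state READY.
    instance = client.instance(
        "my-instance",
        configuration_name=config_name,
        display_name="My Instance",
        node_count=1,
    )
    operation = instance.create()
    operation.result(timeout=300)  # raises on failure
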
-service InstanceAdmin {
-  option (google.api.default_host) = "spanner.googleapis.com";
-  option (google.api.oauth_scopes) =
-      "https://www.googleapis.com/auth/cloud-platform,"
-      "https://www.googleapis.com/auth/spanner.admin";
-
-  // Lists the supported instance configurations for a given project.
-  rpc ListInstanceConfigs(ListInstanceConfigsRequest) returns (ListInstanceConfigsResponse) {
-    option (google.api.http) = {
-      get: "/v1/{parent=projects/*}/instanceConfigs"
-    };
-    option (google.api.method_signature) = "parent";
-  }
-
-  // Gets information about a particular instance configuration.
-  rpc GetInstanceConfig(GetInstanceConfigRequest) returns (InstanceConfig) {
-    option (google.api.http) = {
-      get: "/v1/{name=projects/*/instanceConfigs/*}"
-    };
-    option (google.api.method_signature) = "name";
-  }
-
-  // Lists all instances in the given project.
-  rpc ListInstances(ListInstancesRequest) returns (ListInstancesResponse) {
-    option (google.api.http) = {
-      get: "/v1/{parent=projects/*}/instances"
-    };
-    option (google.api.method_signature) = "parent";
-  }
-
-  // Gets information about a particular instance.
-  rpc GetInstance(GetInstanceRequest) returns (Instance) {
-    option (google.api.http) = {
-      get: "/v1/{name=projects/*/instances/*}"
-    };
-    option (google.api.method_signature) = "name";
-  }
-
-  // Creates an instance and begins preparing it to begin serving. The
-  // returned [long-running operation][google.longrunning.Operation]
-  // can be used to track the progress of preparing the new
-  // instance. The instance name is assigned by the caller. If the
-  // named instance already exists, `CreateInstance` returns
-  // `ALREADY_EXISTS`.
-  //
-  // Immediately upon completion of this request:
-  //
-  //   * The instance is readable via the API, with all requested attributes
-  //     but no allocated resources. Its state is `CREATING`.
-  //
-  // Until completion of the returned operation:
-  //
-  //   * Cancelling the operation renders the instance immediately unreadable
-  //     via the API.
-  //   * The instance can be deleted.
-  //   * All other attempts to modify the instance are rejected.
-  //
-  // Upon completion of the returned operation:
-  //
-  //   * Billing for all successfully-allocated resources begins (some types
-  //     may have lower than the requested levels).
-  //   * Databases can be created in the instance.
-  //   * The instance's allocated resource levels are readable via the API.
-  //   * The instance's state becomes `READY`.
-  //
-  // The returned [long-running operation][google.longrunning.Operation] will
-  // have a name of the format `<instance_name>/operations/<operation_id>` and
-  // can be used to track creation of the instance. The
-  // [metadata][google.longrunning.Operation.metadata] field type is
-  // [CreateInstanceMetadata][google.spanner.admin.instance.v1.CreateInstanceMetadata].
-  // The [response][google.longrunning.Operation.response] field type is
-  // [Instance][google.spanner.admin.instance.v1.Instance], if successful.
-  rpc CreateInstance(CreateInstanceRequest) returns (google.longrunning.Operation) {
-    option (google.api.http) = {
-      post: "/v1/{parent=projects/*}/instances"
-      body: "*"
-    };
-    option (google.api.method_signature) = "parent,instance_id,instance";
-    option (google.longrunning.operation_info) = {
-      response_type: "google.spanner.admin.instance.v1.Instance"
-      metadata_type: "google.spanner.admin.instance.v1.CreateInstanceMetadata"
-    };
-  }
-
-  // Updates an instance, and begins allocating or releasing resources
-  // as requested.
-  // The returned [long-running operation][google.longrunning.Operation]
-  // can be used to track the progress of updating the instance.
-  // If the named instance does not exist, returns `NOT_FOUND`.
-  //
-  // Immediately upon completion of this request:
-  //
-  //   * For resource types for which a decrease in the instance's allocation
-  //     has been requested, billing is based on the newly-requested level.
-  //
-  // Until completion of the returned operation:
-  //
-  //   * Cancelling the operation sets its metadata's
-  //     [cancel_time][google.spanner.admin.instance.v1.UpdateInstanceMetadata.cancel_time], and begins
-  //     restoring resources to their pre-request values. The operation
-  //     is guaranteed to succeed at undoing all resource changes,
-  //     after which point it terminates with a `CANCELLED` status.
-  //   * All other attempts to modify the instance are rejected.
-  //   * Reading the instance via the API continues to give the pre-request
-  //     resource levels.
-  //
-  // Upon completion of the returned operation:
-  //
-  //   * Billing begins for all successfully-allocated resources (some types
-  //     may have lower than the requested levels).
-  //   * All newly-reserved resources are available for serving the instance's
-  //     tables.
-  //   * The instance's new resource levels are readable via the API.
-  //
-  // The returned [long-running operation][google.longrunning.Operation] will
-  // have a name of the format `<instance_name>/operations/<operation_id>` and
-  // can be used to track the instance modification. The
-  // [metadata][google.longrunning.Operation.metadata] field type is
-  // [UpdateInstanceMetadata][google.spanner.admin.instance.v1.UpdateInstanceMetadata].
-  // The [response][google.longrunning.Operation.response] field type is
-  // [Instance][google.spanner.admin.instance.v1.Instance], if successful.
-  //
-  // Authorization requires `spanner.instances.update` permission on
-  // resource [name][google.spanner.admin.instance.v1.Instance.name].
-  rpc UpdateInstance(UpdateInstanceRequest) returns (google.longrunning.Operation) {
-    option (google.api.http) = {
-      patch: "/v1/{instance.name=projects/*/instances/*}"
-      body: "*"
-    };
-    option (google.api.method_signature) = "instance,field_mask";
-    option (google.longrunning.operation_info) = {
-      response_type: "google.spanner.admin.instance.v1.Instance"
-      metadata_type: "google.spanner.admin.instance.v1.UpdateInstanceMetadata"
-    };
-  }
-
-  // Deletes an instance.
-  //
-  // Immediately upon completion of the request:
-  //
-  //   * Billing ceases for all of the instance's reserved resources.
-  //
-  // Soon afterward:
-  //
-  //   * The instance and *all of its databases* immediately and
-  //     irrevocably disappear from the API. All data in the databases
-  //     is permanently deleted.
-  rpc DeleteInstance(DeleteInstanceRequest) returns (google.protobuf.Empty) {
-    option (google.api.http) = {
-      delete: "/v1/{name=projects/*/instances/*}"
-    };
-    option (google.api.method_signature) = "name";
-  }
-
-  // Sets the access control policy on an instance resource. Replaces any
-  // existing policy.
-  //
-  // Authorization requires `spanner.instances.setIamPolicy` on
-  // [resource][google.iam.v1.SetIamPolicyRequest.resource].
-  rpc SetIamPolicy(google.iam.v1.SetIamPolicyRequest) returns (google.iam.v1.Policy) {
-    option (google.api.http) = {
-      post: "/v1/{resource=projects/*/instances/*}:setIamPolicy"
-      body: "*"
-    };
-    option (google.api.method_signature) = "resource,policy";
-  }
-
-  // Gets the access control policy for an instance resource. Returns an empty
-  // policy if an instance exists but does not have a policy set.
-  //
-  // Authorization requires `spanner.instances.getIamPolicy` on
-  // [resource][google.iam.v1.GetIamPolicyRequest.resource].
-  rpc GetIamPolicy(google.iam.v1.GetIamPolicyRequest) returns (google.iam.v1.Policy) {
-    option (google.api.http) = {
-      post: "/v1/{resource=projects/*/instances/*}:getIamPolicy"
-      body: "*"
-    };
-    option (google.api.method_signature) = "resource";
-  }
-
-  // Returns permissions that the caller has on the specified instance resource.
-  //
-  // Attempting this RPC on a non-existent Cloud Spanner instance resource will
-  // result in a NOT_FOUND error if the user has `spanner.instances.list`
-  // permission on the containing Google Cloud Project. Otherwise returns an
-  // empty set of permissions.
-  rpc TestIamPermissions(google.iam.v1.TestIamPermissionsRequest) returns (google.iam.v1.TestIamPermissionsResponse) {
-    option (google.api.http) = {
-      post: "/v1/{resource=projects/*/instances/*}:testIamPermissions"
-      body: "*"
-    };
-    option (google.api.method_signature) = "resource,permissions";
-  }
-}
-
-message ReplicaInfo {
-  // Indicates the type of replica. See the [replica types
-  // documentation](https://cloud.google.com/spanner/docs/replication#replica_types)
-  // for more details.
-  enum ReplicaType {
-    // Not specified.
-    TYPE_UNSPECIFIED = 0;
-
-    // Read-write replicas support both reads and writes. These replicas:
-    //
-    //   * Maintain a full copy of your data.
-    //   * Serve reads.
-    //   * Can vote whether to commit a write.
-    //   * Participate in leadership election.
-    //   * Are eligible to become a leader.
-    READ_WRITE = 1;
-
-    // Read-only replicas only support reads (not writes). Read-only replicas:
-    //
-    //   * Maintain a full copy of your data.
-    //   * Serve reads.
-    //   * Do not participate in voting to commit writes.
-    //   * Are not eligible to become a leader.
-    READ_ONLY = 2;
-
-    // Witness replicas don't support reads but do participate in voting to
-    // commit writes. Witness replicas:
-    //
-    //   * Do not maintain a full copy of data.
-    //   * Do not serve reads.
-    //   * Vote whether to commit writes.
-    //   * Participate in leader election but are not eligible to become leader.
-    WITNESS = 3;
-  }
-
-  // The location of the serving resources, e.g. "us-central1".
-  string location = 1;
-
-  // The type of replica.
-  ReplicaType type = 2;
-
-  // If true, this location is designated as the default leader location where
-  // leader replicas are placed. See the [region types
-  // documentation](https://cloud.google.com/spanner/docs/instances#region_types)
-  // for more details.
-  bool default_leader_location = 3;
-}
-
-// A possible configuration for a Cloud Spanner instance. Configurations
-// define the geographic placement of nodes and their replication.
-message InstanceConfig {
-  option (google.api.resource) = {
-    type: "spanner.googleapis.com/InstanceConfig"
-    pattern: "projects/{project}/instanceConfigs/{instance_config}"
-  };
-
-  // A unique identifier for the instance configuration. Values
-  // are of the form
-  // `projects/<project>/instanceConfigs/[a-z][-a-z0-9]*`
-  string name = 1;
-
-  // The name of this instance configuration as it appears in UIs.
-  string display_name = 2;
-
-  // The geographic placement of nodes in this instance configuration and their
-  // replication properties.
-  repeated ReplicaInfo replicas = 3;
-}
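The three IAM RPCs above correspond one-to-one to the client methods whose deleted GAPIC implementations appear earlier in this diff. A short sketch against that pre-2.0 surface, mirroring the docstring examples; the resource name is a placeholder:

.. code-block:: python

    # A hedged sketch using the pre-2.0 client removed in this diff;
    # "my-project" and "my-instance" are hypothetical.
    from google.cloud import spanner_admin_instance_v1

    client = spanner_admin_instance_v1.InstanceAdminClient()
    resource = "projects/my-project/instances/my-instance"

    policy = client.get_iam_policy(resource)  # empty policy if none is set
    client.set_iam_policy(resource, policy)   # replaces any existing policy

    # Wildcards such as 'spanner.*' are not allowed in permission checks.
    response = client.test_iam_permissions(
        resource, ["spanner.instances.update", "spanner.instances.delete"]
    )
    print(response.permissions)  # the subset the caller actually holds
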
-
-// An isolated set of Cloud Spanner resources on which databases can be hosted.
-message Instance {
-  option (google.api.resource) = {
-    type: "spanner.googleapis.com/Instance"
-    pattern: "projects/{project}/instances/{instance}"
-  };
-
-  // Indicates the current state of the instance.
-  enum State {
-    // Not specified.
-    STATE_UNSPECIFIED = 0;
-
-    // The instance is still being created. Resources may not be
-    // available yet, and operations such as database creation may not
-    // work.
-    CREATING = 1;
-
-    // The instance is fully created and ready to do work such as
-    // creating databases.
-    READY = 2;
-  }
-
-  // Required. A unique identifier for the instance, which cannot be changed
-  // after the instance is created. Values are of the form
-  // `projects/<project>/instances/[a-z][-a-z0-9]*[a-z0-9]`. The final
-  // segment of the name must be between 2 and 64 characters in length.
-  string name = 1;
-
-  // Required. The name of the instance's configuration. Values are of the form
-  // `projects/<project>/instanceConfigs/<configuration>`. See
-  // also [InstanceConfig][google.spanner.admin.instance.v1.InstanceConfig] and
-  // [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs].
-  string config = 2 [(google.api.resource_reference) = {
-    type: "spanner.googleapis.com/InstanceConfig"
-  }];
-
-  // Required. The descriptive name for this instance as it appears in UIs.
-  // Must be unique per project and between 4 and 30 characters in length.
-  string display_name = 3;
-
-  // Required. The number of nodes allocated to this instance. This may be zero
-  // in API responses for instances that are not yet in state `READY`.
-  //
-  // See [the
-  // documentation](https://cloud.google.com/spanner/docs/instances#node_count)
-  // for more information about nodes.
-  int32 node_count = 5;
-
-  // Output only. The current instance state. For
-  // [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance], the state must be
-  // either omitted or set to `CREATING`. For
-  // [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance], the state must be
-  // either omitted or set to `READY`.
-  State state = 6;
-
-  // Cloud Labels are a flexible and lightweight mechanism for organizing cloud
-  // resources into groups that reflect a customer's organizational needs and
-  // deployment strategies. Cloud Labels can be used to filter collections of
-  // resources. They can be used to control how resource metrics are aggregated.
-  // And they can be used as arguments to policy management rules (e.g. route,
-  // firewall, load balancing, etc.).
-  //
-  //  * Label keys must be between 1 and 63 characters long and must conform to
-  //    the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
-  //  * Label values must be between 0 and 63 characters long and must conform
-  //    to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
-  //  * No more than 64 labels can be associated with a given resource.
-  //
-  // See https://goo.gl/xmQnxf for more information on and examples of labels.
-  //
-  // If you plan to use labels in your own code, please note that additional
-  // characters may be allowed in the future. And so you are advised to use an
-  // internal label representation, such as JSON, which doesn't rely upon
-  // specific characters being disallowed. For example, representing labels
-  // as the string: name + "_" + value would prove problematic if we were to
-  // allow "_" in a future release.
-  map<string, string> labels = 7;
-
-  // Deprecated. This field is not populated.
-  repeated string endpoint_uris = 8;
-}
-
-// The request for [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs].
-message ListInstanceConfigsRequest {
-  // Required. The name of the project for which a list of supported instance
-  // configurations is requested. Values are of the form
-  // `projects/<project>`.
-  string parent = 1 [
-    (google.api.field_behavior) = REQUIRED,
-    (google.api.resource_reference) = {
-      type: "cloudresourcemanager.googleapis.com/Project"
-    }
-  ];
-
-  // Number of instance configurations to be returned in the response. If 0 or
-  // less, defaults to the server's maximum allowed page size.
-  int32 page_size = 2;
-
-  // If non-empty, `page_token` should contain a
-  // [next_page_token][google.spanner.admin.instance.v1.ListInstanceConfigsResponse.next_page_token]
-  // from a previous [ListInstanceConfigsResponse][google.spanner.admin.instance.v1.ListInstanceConfigsResponse].
-  string page_token = 3;
-}
-
-// The response for [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs].
-message ListInstanceConfigsResponse {
-  // The list of requested instance configurations.
-  repeated InstanceConfig instance_configs = 1;
-
-  // `next_page_token` can be sent in a subsequent
-  // [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs] call to
-  // fetch more of the matching instance configurations.
-  string next_page_token = 2;
-}
-
-// The request for
-// [GetInstanceConfigRequest][google.spanner.admin.instance.v1.InstanceAdmin.GetInstanceConfig].
-message GetInstanceConfigRequest {
-  // Required. The name of the requested instance configuration. Values are of
-  // the form `projects/<project>/instanceConfigs/<config>`.
-  string name = 1 [
-    (google.api.field_behavior) = REQUIRED,
-    (google.api.resource_reference) = {
-      type: "spanner.googleapis.com/InstanceConfig"
-    }
-  ];
-}
-
-// The request for [GetInstance][google.spanner.admin.instance.v1.InstanceAdmin.GetInstance].
-message GetInstanceRequest {
-  // Required. The name of the requested instance. Values are of the form
-  // `projects/<project>/instances/<instance>`.
-  string name = 1 [
-    (google.api.field_behavior) = REQUIRED,
-    (google.api.resource_reference) = {
-      type: "spanner.googleapis.com/Instance"
-    }
-  ];
-
-  // If field_mask is present, specifies the subset of [Instance][google.spanner.admin.instance.v1.Instance] fields that
-  // should be returned.
-  // If absent, all [Instance][google.spanner.admin.instance.v1.Instance] fields are returned.
-  google.protobuf.FieldMask field_mask = 2;
-}
-
-// The request for [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance].
-message CreateInstanceRequest {
-  // Required. The name of the project in which to create the instance. Values
-  // are of the form `projects/<project>`.
-  string parent = 1 [
-    (google.api.field_behavior) = REQUIRED,
-    (google.api.resource_reference) = {
-      type: "cloudresourcemanager.googleapis.com/Project"
-    }
-  ];
-
-  // Required. The ID of the instance to create. Valid identifiers are of the
-  // form `[a-z][-a-z0-9]*[a-z0-9]` and must be between 2 and 64 characters in
-  // length.
-  string instance_id = 2 [(google.api.field_behavior) = REQUIRED];
-
-  // Required. The instance to create. The name may be omitted, but if
-  // specified must be `<parent>/instances/<instance_id>`.
-  Instance instance = 3 [(google.api.field_behavior) = REQUIRED];
-}
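The paired ``page_size`` / ``page_token`` / ``next_page_token`` fields in the list messages above define the paging contract: each response carries a token that the next request must echo back until the server returns none. In practice the Python client pages for you; a hedged sketch using the pre-2.0 surface removed in this diff, with a placeholder project ID:

.. code-block:: python

    # The iterator below lazily issues ListInstanceConfigs requests,
    # carrying next_page_token forward until the server omits it.
    # "my-project" is hypothetical.
    from google.cloud import spanner_admin_instance_v1

    client = spanner_admin_instance_v1.InstanceAdminClient()
    parent = client.project_path("my-project")

    for config in client.list_instance_configs(parent, page_size=100):
        print(config.name, config.display_name)
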
-
-// The request for [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances].
-message ListInstancesRequest {
-  // Required. The name of the project for which a list of instances is
-  // requested. Values are of the form `projects/<project>`.
-  string parent = 1 [
-    (google.api.field_behavior) = REQUIRED,
-    (google.api.resource_reference) = {
-      type: "cloudresourcemanager.googleapis.com/Project"
-    }
-  ];
-
-  // Number of instances to be returned in the response. If 0 or less, defaults
-  // to the server's maximum allowed page size.
-  int32 page_size = 2;
-
-  // If non-empty, `page_token` should contain a
-  // [next_page_token][google.spanner.admin.instance.v1.ListInstancesResponse.next_page_token] from a
-  // previous [ListInstancesResponse][google.spanner.admin.instance.v1.ListInstancesResponse].
-  string page_token = 3;
-
-  // An expression for filtering the results of the request. Filter rules are
-  // case insensitive. The fields eligible for filtering are:
-  //
-  //   * `name`
-  //   * `display_name`
-  //   * `labels.key` where key is the name of a label
-  //
-  // Some examples of using filters are:
-  //
-  //   * `name:*` --> The instance has a name.
-  //   * `name:Howl` --> The instance's name contains the string "howl".
-  //   * `name:HOWL` --> Equivalent to above.
-  //   * `NAME:howl` --> Equivalent to above.
-  //   * `labels.env:*` --> The instance has the label "env".
-  //   * `labels.env:dev` --> The instance has the label "env" and the value of
-  //     the label contains the string "dev".
-  //   * `name:howl labels.env:dev` --> The instance's name contains "howl" and
-  //     it has the label "env" with its value
-  //     containing "dev".
-  string filter = 4;
-}
-
-// The response for [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances].
-message ListInstancesResponse {
-  // The list of requested instances.
-  repeated Instance instances = 1;
-
-  // `next_page_token` can be sent in a subsequent
-  // [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances] call to fetch more
-  // of the matching instances.
-  string next_page_token = 2;
-}
-
-// The request for [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance].
-message UpdateInstanceRequest {
-  // Required. The instance to update, which must always include the instance
-  // name. Otherwise, only fields mentioned in [field_mask][google.spanner.admin.instance.v1.UpdateInstanceRequest.field_mask] need be included.
-  Instance instance = 1 [(google.api.field_behavior) = REQUIRED];
-
-  // Required. A mask specifying which fields in [Instance][google.spanner.admin.instance.v1.Instance] should be updated.
-  // The field mask must always be specified; this prevents any future fields in
-  // [Instance][google.spanner.admin.instance.v1.Instance] from being erased accidentally by clients that do not know
-  // about them.
-  google.protobuf.FieldMask field_mask = 2 [(google.api.field_behavior) = REQUIRED];
-}
-
-// The request for [DeleteInstance][google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstance].
-message DeleteInstanceRequest {
-  // Required. The name of the instance to be deleted. Values are of the form
-  // `projects/<project>/instances/<instance>`
-  string name = 1 [
-    (google.api.field_behavior) = REQUIRED,
-    (google.api.resource_reference) = {
-      type: "spanner.googleapis.com/Instance"
-    }
-  ];
-}
-
-// Metadata type for the operation returned by
-// [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance].
-message CreateInstanceMetadata {
-  // The instance being created.
- Instance instance = 1; - - // The time at which the - // [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance] request was - // received. - google.protobuf.Timestamp start_time = 2; - - // The time at which this operation was cancelled. If set, this operation is - // in the process of undoing itself (which is guaranteed to succeed) and - // cannot be cancelled again. - google.protobuf.Timestamp cancel_time = 3; - - // The time at which this operation failed or was completed successfully. - google.protobuf.Timestamp end_time = 4; -} - -// Metadata type for the operation returned by -// [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance]. -message UpdateInstanceMetadata { - // The desired end state of the update. - Instance instance = 1; - - // The time at which [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance] - // request was received. - google.protobuf.Timestamp start_time = 2; - - // The time at which this operation was cancelled. If set, this operation is - // in the process of undoing itself (which is guaranteed to succeed) and - // cannot be cancelled again. - google.protobuf.Timestamp cancel_time = 3; - - // The time at which this operation failed or was completed successfully. - google.protobuf.Timestamp end_time = 4; -} diff --git a/google/cloud/spanner_admin_instance_v1/proto/spanner_instance_admin_pb2.py b/google/cloud/spanner_admin_instance_v1/proto/spanner_instance_admin_pb2.py deleted file mode 100644 index f00bfbbe0a..0000000000 --- a/google/cloud/spanner_admin_instance_v1/proto/spanner_instance_admin_pb2.py +++ /dev/null @@ -1,1856 +0,0 @@ -# -*- coding: utf-8 -*- -# Generated by the protocol buffer compiler. DO NOT EDIT! -# source: google/cloud/spanner/admin/instance_v1/proto/spanner_instance_admin.proto - -import sys - -_b = sys.version_info[0] < 3 and (lambda x: x) or (lambda x: x.encode("latin1")) -from google.protobuf import descriptor as _descriptor -from google.protobuf import message as _message -from google.protobuf import reflection as _reflection -from google.protobuf import symbol_database as _symbol_database - -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - -from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2 -from google.api import client_pb2 as google_dot_api_dot_client__pb2 -from google.api import field_behavior_pb2 as google_dot_api_dot_field__behavior__pb2 -from google.api import resource_pb2 as google_dot_api_dot_resource__pb2 -from google.iam.v1 import iam_policy_pb2 as google_dot_iam_dot_v1_dot_iam__policy__pb2 -from google.iam.v1 import policy_pb2 as google_dot_iam_dot_v1_dot_policy__pb2 -from google.longrunning import ( - operations_pb2 as google_dot_longrunning_dot_operations__pb2, -) -from google.protobuf import empty_pb2 as google_dot_protobuf_dot_empty__pb2 -from google.protobuf import field_mask_pb2 as google_dot_protobuf_dot_field__mask__pb2 -from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2 - - -DESCRIPTOR = _descriptor.FileDescriptor( - name="google/cloud/spanner/admin/instance_v1/proto/spanner_instance_admin.proto", - package="google.spanner.admin.instance.v1", - syntax="proto3", - serialized_options=_b( - "\n$com.google.spanner.admin.instance.v1B\031SpannerInstanceAdminProtoP\001ZHgoogle.golang.org/genproto/googleapis/spanner/admin/instance/v1;instance\252\002&Google.Cloud.Spanner.Admin.Instance.V1\312\002&Google\\Cloud\\Spanner\\Admin\\Instance\\V1" - ), - 
serialized_pb=_b( - '\nIgoogle/cloud/spanner/admin/instance_v1/proto/spanner_instance_admin.proto\x12 google.spanner.admin.instance.v1\x1a\x1cgoogle/api/annotations.proto\x1a\x17google/api/client.proto\x1a\x1fgoogle/api/field_behavior.proto\x1a\x19google/api/resource.proto\x1a\x1egoogle/iam/v1/iam_policy.proto\x1a\x1agoogle/iam/v1/policy.proto\x1a#google/longrunning/operations.proto\x1a\x1bgoogle/protobuf/empty.proto\x1a google/protobuf/field_mask.proto\x1a\x1fgoogle/protobuf/timestamp.proto"\xda\x01\n\x0bReplicaInfo\x12\x10\n\x08location\x18\x01 \x01(\t\x12G\n\x04type\x18\x02 \x01(\x0e\x32\x39.google.spanner.admin.instance.v1.ReplicaInfo.ReplicaType\x12\x1f\n\x17\x64\x65\x66\x61ult_leader_location\x18\x03 \x01(\x08"O\n\x0bReplicaType\x12\x14\n\x10TYPE_UNSPECIFIED\x10\x00\x12\x0e\n\nREAD_WRITE\x10\x01\x12\r\n\tREAD_ONLY\x10\x02\x12\x0b\n\x07WITNESS\x10\x03"\xd7\x01\n\x0eInstanceConfig\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x14\n\x0c\x64isplay_name\x18\x02 \x01(\t\x12?\n\x08replicas\x18\x03 \x03(\x0b\x32-.google.spanner.admin.instance.v1.ReplicaInfo:`\xea\x41]\n%spanner.googleapis.com/InstanceConfig\x12\x34projects/{project}/instanceConfigs/{instance_config}"\xd5\x03\n\x08Instance\x12\x0c\n\x04name\x18\x01 \x01(\t\x12:\n\x06\x63onfig\x18\x02 \x01(\tB*\xfa\x41\'\n%spanner.googleapis.com/InstanceConfig\x12\x14\n\x0c\x64isplay_name\x18\x03 \x01(\t\x12\x12\n\nnode_count\x18\x05 \x01(\x05\x12?\n\x05state\x18\x06 \x01(\x0e\x32\x30.google.spanner.admin.instance.v1.Instance.State\x12\x46\n\x06labels\x18\x07 \x03(\x0b\x32\x36.google.spanner.admin.instance.v1.Instance.LabelsEntry\x12\x15\n\rendpoint_uris\x18\x08 \x03(\t\x1a-\n\x0bLabelsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01"7\n\x05State\x12\x15\n\x11STATE_UNSPECIFIED\x10\x00\x12\x0c\n\x08\x43REATING\x10\x01\x12\t\n\x05READY\x10\x02:M\xea\x41J\n\x1fspanner.googleapis.com/Instance\x12\'projects/{project}/instances/{instance}"\x88\x01\n\x1aListInstanceConfigsRequest\x12\x43\n\x06parent\x18\x01 \x01(\tB3\xe0\x41\x02\xfa\x41-\n+cloudresourcemanager.googleapis.com/Project\x12\x11\n\tpage_size\x18\x02 \x01(\x05\x12\x12\n\npage_token\x18\x03 \x01(\t"\x82\x01\n\x1bListInstanceConfigsResponse\x12J\n\x10instance_configs\x18\x01 \x03(\x0b\x32\x30.google.spanner.admin.instance.v1.InstanceConfig\x12\x17\n\x0fnext_page_token\x18\x02 \x01(\t"W\n\x18GetInstanceConfigRequest\x12;\n\x04name\x18\x01 \x01(\tB-\xe0\x41\x02\xfa\x41\'\n%spanner.googleapis.com/InstanceConfig"{\n\x12GetInstanceRequest\x12\x35\n\x04name\x18\x01 \x01(\tB\'\xe0\x41\x02\xfa\x41!\n\x1fspanner.googleapis.com/Instance\x12.\n\nfield_mask\x18\x02 \x01(\x0b\x32\x1a.google.protobuf.FieldMask"\xb9\x01\n\x15\x43reateInstanceRequest\x12\x43\n\x06parent\x18\x01 \x01(\tB3\xe0\x41\x02\xfa\x41-\n+cloudresourcemanager.googleapis.com/Project\x12\x18\n\x0binstance_id\x18\x02 \x01(\tB\x03\xe0\x41\x02\x12\x41\n\x08instance\x18\x03 \x01(\x0b\x32*.google.spanner.admin.instance.v1.InstanceB\x03\xe0\x41\x02"\x92\x01\n\x14ListInstancesRequest\x12\x43\n\x06parent\x18\x01 \x01(\tB3\xe0\x41\x02\xfa\x41-\n+cloudresourcemanager.googleapis.com/Project\x12\x11\n\tpage_size\x18\x02 \x01(\x05\x12\x12\n\npage_token\x18\x03 \x01(\t\x12\x0e\n\x06\x66ilter\x18\x04 \x01(\t"o\n\x15ListInstancesResponse\x12=\n\tinstances\x18\x01 \x03(\x0b\x32*.google.spanner.admin.instance.v1.Instance\x12\x17\n\x0fnext_page_token\x18\x02 \x01(\t"\x8f\x01\n\x15UpdateInstanceRequest\x12\x41\n\x08instance\x18\x01 
\x01(\x0b\x32*.google.spanner.admin.instance.v1.InstanceB\x03\xe0\x41\x02\x12\x33\n\nfield_mask\x18\x02 \x01(\x0b\x32\x1a.google.protobuf.FieldMaskB\x03\xe0\x41\x02"N\n\x15\x44\x65leteInstanceRequest\x12\x35\n\x04name\x18\x01 \x01(\tB\'\xe0\x41\x02\xfa\x41!\n\x1fspanner.googleapis.com/Instance"\xe5\x01\n\x16\x43reateInstanceMetadata\x12<\n\x08instance\x18\x01 \x01(\x0b\x32*.google.spanner.admin.instance.v1.Instance\x12.\n\nstart_time\x18\x02 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12/\n\x0b\x63\x61ncel_time\x18\x03 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12,\n\x08\x65nd_time\x18\x04 \x01(\x0b\x32\x1a.google.protobuf.Timestamp"\xe5\x01\n\x16UpdateInstanceMetadata\x12<\n\x08instance\x18\x01 \x01(\x0b\x32*.google.spanner.admin.instance.v1.Instance\x12.\n\nstart_time\x18\x02 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12/\n\x0b\x63\x61ncel_time\x18\x03 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12,\n\x08\x65nd_time\x18\x04 \x01(\x0b\x32\x1a.google.protobuf.Timestamp2\xbf\x10\n\rInstanceAdmin\x12\xcc\x01\n\x13ListInstanceConfigs\x12<.google.spanner.admin.instance.v1.ListInstanceConfigsRequest\x1a=.google.spanner.admin.instance.v1.ListInstanceConfigsResponse"8\x82\xd3\xe4\x93\x02)\x12\'/v1/{parent=projects/*}/instanceConfigs\xda\x41\x06parent\x12\xb9\x01\n\x11GetInstanceConfig\x12:.google.spanner.admin.instance.v1.GetInstanceConfigRequest\x1a\x30.google.spanner.admin.instance.v1.InstanceConfig"6\x82\xd3\xe4\x93\x02)\x12\'/v1/{name=projects/*/instanceConfigs/*}\xda\x41\x04name\x12\xb4\x01\n\rListInstances\x12\x36.google.spanner.admin.instance.v1.ListInstancesRequest\x1a\x37.google.spanner.admin.instance.v1.ListInstancesResponse"2\x82\xd3\xe4\x93\x02#\x12!/v1/{parent=projects/*}/instances\xda\x41\x06parent\x12\xa1\x01\n\x0bGetInstance\x12\x34.google.spanner.admin.instance.v1.GetInstanceRequest\x1a*.google.spanner.admin.instance.v1.Instance"0\x82\xd3\xe4\x93\x02#\x12!/v1/{name=projects/*/instances/*}\xda\x41\x04name\x12\x9c\x02\n\x0e\x43reateInstance\x12\x37.google.spanner.admin.instance.v1.CreateInstanceRequest\x1a\x1d.google.longrunning.Operation"\xb1\x01\x82\xd3\xe4\x93\x02&"!/v1/{parent=projects/*}/instances:\x01*\xda\x41\x1bparent,instance_id,instance\xca\x41\x64\n)google.spanner.admin.instance.v1.Instance\x12\x37google.spanner.admin.instance.v1.CreateInstanceMetadata\x12\x9d\x02\n\x0eUpdateInstance\x12\x37.google.spanner.admin.instance.v1.UpdateInstanceRequest\x1a\x1d.google.longrunning.Operation"\xb2\x01\x82\xd3\xe4\x93\x02/2*/v1/{instance.name=projects/*/instances/*}:\x01*\xda\x41\x13instance,field_mask\xca\x41\x64\n)google.spanner.admin.instance.v1.Instance\x12\x37google.spanner.admin.instance.v1.UpdateInstanceMetadata\x12\x93\x01\n\x0e\x44\x65leteInstance\x12\x37.google.spanner.admin.instance.v1.DeleteInstanceRequest\x1a\x16.google.protobuf.Empty"0\x82\xd3\xe4\x93\x02#*!/v1/{name=projects/*/instances/*}\xda\x41\x04name\x12\x9a\x01\n\x0cSetIamPolicy\x12".google.iam.v1.SetIamPolicyRequest\x1a\x15.google.iam.v1.Policy"O\x82\xd3\xe4\x93\x02\x37"2/v1/{resource=projects/*/instances/*}:setIamPolicy:\x01*\xda\x41\x0fresource,policy\x12\x93\x01\n\x0cGetIamPolicy\x12".google.iam.v1.GetIamPolicyRequest\x1a\x15.google.iam.v1.Policy"H\x82\xd3\xe4\x93\x02\x37"2/v1/{resource=projects/*/instances/*}:getIamPolicy:\x01*\xda\x41\x08resource\x12\xc5\x01\n\x12TestIamPermissions\x12(.google.iam.v1.TestIamPermissionsRequest\x1a).google.iam.v1.TestIamPermissionsResponse"Z\x82\xd3\xe4\x93\x02="8/v1/{resource=projects/*/instances/*}:testIamPermissions:\x01*\xda\x41\x14resource,permissions\x1ax\xca\x41\
x16spanner.googleapis.com\xd2\x41\\https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/spanner.adminB\xdf\x01\n$com.google.spanner.admin.instance.v1B\x19SpannerInstanceAdminProtoP\x01ZHgoogle.golang.org/genproto/googleapis/spanner/admin/instance/v1;instance\xaa\x02&Google.Cloud.Spanner.Admin.Instance.V1\xca\x02&Google\\Cloud\\Spanner\\Admin\\Instance\\V1b\x06proto3' - ), - dependencies=[ - google_dot_api_dot_annotations__pb2.DESCRIPTOR, - google_dot_api_dot_client__pb2.DESCRIPTOR, - google_dot_api_dot_field__behavior__pb2.DESCRIPTOR, - google_dot_api_dot_resource__pb2.DESCRIPTOR, - google_dot_iam_dot_v1_dot_iam__policy__pb2.DESCRIPTOR, - google_dot_iam_dot_v1_dot_policy__pb2.DESCRIPTOR, - google_dot_longrunning_dot_operations__pb2.DESCRIPTOR, - google_dot_protobuf_dot_empty__pb2.DESCRIPTOR, - google_dot_protobuf_dot_field__mask__pb2.DESCRIPTOR, - google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR, - ], -) - - -_REPLICAINFO_REPLICATYPE = _descriptor.EnumDescriptor( - name="ReplicaType", - full_name="google.spanner.admin.instance.v1.ReplicaInfo.ReplicaType", - filename=None, - file=DESCRIPTOR, - values=[ - _descriptor.EnumValueDescriptor( - name="TYPE_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - ), - _descriptor.EnumValueDescriptor( - name="READ_WRITE", index=1, number=1, serialized_options=None, type=None - ), - _descriptor.EnumValueDescriptor( - name="READ_ONLY", index=2, number=2, serialized_options=None, type=None - ), - _descriptor.EnumValueDescriptor( - name="WITNESS", index=3, number=3, serialized_options=None, type=None - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=559, - serialized_end=638, -) -_sym_db.RegisterEnumDescriptor(_REPLICAINFO_REPLICATYPE) - -_INSTANCE_STATE = _descriptor.EnumDescriptor( - name="State", - full_name="google.spanner.admin.instance.v1.Instance.State", - filename=None, - file=DESCRIPTOR, - values=[ - _descriptor.EnumValueDescriptor( - name="STATE_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - ), - _descriptor.EnumValueDescriptor( - name="CREATING", index=1, number=1, serialized_options=None, type=None - ), - _descriptor.EnumValueDescriptor( - name="READY", index=2, number=2, serialized_options=None, type=None - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=1194, - serialized_end=1249, -) -_sym_db.RegisterEnumDescriptor(_INSTANCE_STATE) - - -_REPLICAINFO = _descriptor.Descriptor( - name="ReplicaInfo", - full_name="google.spanner.admin.instance.v1.ReplicaInfo", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="location", - full_name="google.spanner.admin.instance.v1.ReplicaInfo.location", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="type", - full_name="google.spanner.admin.instance.v1.ReplicaInfo.type", - index=1, - number=2, - type=14, - cpp_type=8, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="default_leader_location", - 
full_name="google.spanner.admin.instance.v1.ReplicaInfo.default_leader_location", - index=2, - number=3, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[_REPLICAINFO_REPLICATYPE], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=420, - serialized_end=638, -) - - -_INSTANCECONFIG = _descriptor.Descriptor( - name="InstanceConfig", - full_name="google.spanner.admin.instance.v1.InstanceConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="name", - full_name="google.spanner.admin.instance.v1.InstanceConfig.name", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="display_name", - full_name="google.spanner.admin.instance.v1.InstanceConfig.display_name", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="replicas", - full_name="google.spanner.admin.instance.v1.InstanceConfig.replicas", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=_b( - "\352A]\n%spanner.googleapis.com/InstanceConfig\0224projects/{project}/instanceConfigs/{instance_config}" - ), - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=641, - serialized_end=856, -) - - -_INSTANCE_LABELSENTRY = _descriptor.Descriptor( - name="LabelsEntry", - full_name="google.spanner.admin.instance.v1.Instance.LabelsEntry", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="key", - full_name="google.spanner.admin.instance.v1.Instance.LabelsEntry.key", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="value", - full_name="google.spanner.admin.instance.v1.Instance.LabelsEntry.value", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=_b("8\001"), - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1147, - 
serialized_end=1192, -) - -_INSTANCE = _descriptor.Descriptor( - name="Instance", - full_name="google.spanner.admin.instance.v1.Instance", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="name", - full_name="google.spanner.admin.instance.v1.Instance.name", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="config", - full_name="google.spanner.admin.instance.v1.Instance.config", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b("\372A'\n%spanner.googleapis.com/InstanceConfig"), - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="display_name", - full_name="google.spanner.admin.instance.v1.Instance.display_name", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="node_count", - full_name="google.spanner.admin.instance.v1.Instance.node_count", - index=3, - number=5, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="state", - full_name="google.spanner.admin.instance.v1.Instance.state", - index=4, - number=6, - type=14, - cpp_type=8, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="labels", - full_name="google.spanner.admin.instance.v1.Instance.labels", - index=5, - number=7, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="endpoint_uris", - full_name="google.spanner.admin.instance.v1.Instance.endpoint_uris", - index=6, - number=8, - type=9, - cpp_type=9, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[_INSTANCE_LABELSENTRY], - enum_types=[_INSTANCE_STATE], - serialized_options=_b( - "\352AJ\n\037spanner.googleapis.com/Instance\022'projects/{project}/instances/{instance}" - ), - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=859, - serialized_end=1328, -) - - -_LISTINSTANCECONFIGSREQUEST = _descriptor.Descriptor( - name="ListInstanceConfigsRequest", - full_name="google.spanner.admin.instance.v1.ListInstanceConfigsRequest", - filename=None, - file=DESCRIPTOR, - 
containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="parent", - full_name="google.spanner.admin.instance.v1.ListInstanceConfigsRequest.parent", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b( - "\340A\002\372A-\n+cloudresourcemanager.googleapis.com/Project" - ), - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="page_size", - full_name="google.spanner.admin.instance.v1.ListInstanceConfigsRequest.page_size", - index=1, - number=2, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="page_token", - full_name="google.spanner.admin.instance.v1.ListInstanceConfigsRequest.page_token", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1331, - serialized_end=1467, -) - - -_LISTINSTANCECONFIGSRESPONSE = _descriptor.Descriptor( - name="ListInstanceConfigsResponse", - full_name="google.spanner.admin.instance.v1.ListInstanceConfigsResponse", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="instance_configs", - full_name="google.spanner.admin.instance.v1.ListInstanceConfigsResponse.instance_configs", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="next_page_token", - full_name="google.spanner.admin.instance.v1.ListInstanceConfigsResponse.next_page_token", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1470, - serialized_end=1600, -) - - -_GETINSTANCECONFIGREQUEST = _descriptor.Descriptor( - name="GetInstanceConfigRequest", - full_name="google.spanner.admin.instance.v1.GetInstanceConfigRequest", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="name", - full_name="google.spanner.admin.instance.v1.GetInstanceConfigRequest.name", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b( - "\340A\002\372A'\n%spanner.googleapis.com/InstanceConfig" - ), - 
file=DESCRIPTOR, - ) - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1602, - serialized_end=1689, -) - - -_GETINSTANCEREQUEST = _descriptor.Descriptor( - name="GetInstanceRequest", - full_name="google.spanner.admin.instance.v1.GetInstanceRequest", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="name", - full_name="google.spanner.admin.instance.v1.GetInstanceRequest.name", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b( - "\340A\002\372A!\n\037spanner.googleapis.com/Instance" - ), - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="field_mask", - full_name="google.spanner.admin.instance.v1.GetInstanceRequest.field_mask", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1691, - serialized_end=1814, -) - - -_CREATEINSTANCEREQUEST = _descriptor.Descriptor( - name="CreateInstanceRequest", - full_name="google.spanner.admin.instance.v1.CreateInstanceRequest", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="parent", - full_name="google.spanner.admin.instance.v1.CreateInstanceRequest.parent", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b( - "\340A\002\372A-\n+cloudresourcemanager.googleapis.com/Project" - ), - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="instance_id", - full_name="google.spanner.admin.instance.v1.CreateInstanceRequest.instance_id", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b("\340A\002"), - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="instance", - full_name="google.spanner.admin.instance.v1.CreateInstanceRequest.instance", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b("\340A\002"), - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1817, - serialized_end=2002, -) - - -_LISTINSTANCESREQUEST = _descriptor.Descriptor( - name="ListInstancesRequest", - full_name="google.spanner.admin.instance.v1.ListInstancesRequest", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - 
name="parent", - full_name="google.spanner.admin.instance.v1.ListInstancesRequest.parent", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b( - "\340A\002\372A-\n+cloudresourcemanager.googleapis.com/Project" - ), - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="page_size", - full_name="google.spanner.admin.instance.v1.ListInstancesRequest.page_size", - index=1, - number=2, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="page_token", - full_name="google.spanner.admin.instance.v1.ListInstancesRequest.page_token", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="filter", - full_name="google.spanner.admin.instance.v1.ListInstancesRequest.filter", - index=3, - number=4, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2005, - serialized_end=2151, -) - - -_LISTINSTANCESRESPONSE = _descriptor.Descriptor( - name="ListInstancesResponse", - full_name="google.spanner.admin.instance.v1.ListInstancesResponse", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="instances", - full_name="google.spanner.admin.instance.v1.ListInstancesResponse.instances", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="next_page_token", - full_name="google.spanner.admin.instance.v1.ListInstancesResponse.next_page_token", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2153, - serialized_end=2264, -) - - -_UPDATEINSTANCEREQUEST = _descriptor.Descriptor( - name="UpdateInstanceRequest", - full_name="google.spanner.admin.instance.v1.UpdateInstanceRequest", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="instance", - full_name="google.spanner.admin.instance.v1.UpdateInstanceRequest.instance", - index=0, - number=1, - type=11, - cpp_type=10, - 
label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b("\340A\002"), - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="field_mask", - full_name="google.spanner.admin.instance.v1.UpdateInstanceRequest.field_mask", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b("\340A\002"), - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2267, - serialized_end=2410, -) - - -_DELETEINSTANCEREQUEST = _descriptor.Descriptor( - name="DeleteInstanceRequest", - full_name="google.spanner.admin.instance.v1.DeleteInstanceRequest", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="name", - full_name="google.spanner.admin.instance.v1.DeleteInstanceRequest.name", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=_b( - "\340A\002\372A!\n\037spanner.googleapis.com/Instance" - ), - file=DESCRIPTOR, - ) - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2412, - serialized_end=2490, -) - - -_CREATEINSTANCEMETADATA = _descriptor.Descriptor( - name="CreateInstanceMetadata", - full_name="google.spanner.admin.instance.v1.CreateInstanceMetadata", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="instance", - full_name="google.spanner.admin.instance.v1.CreateInstanceMetadata.instance", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="start_time", - full_name="google.spanner.admin.instance.v1.CreateInstanceMetadata.start_time", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="cancel_time", - full_name="google.spanner.admin.instance.v1.CreateInstanceMetadata.cancel_time", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="end_time", - full_name="google.spanner.admin.instance.v1.CreateInstanceMetadata.end_time", - index=3, - number=4, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - 
serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2493, - serialized_end=2722, -) - - -_UPDATEINSTANCEMETADATA = _descriptor.Descriptor( - name="UpdateInstanceMetadata", - full_name="google.spanner.admin.instance.v1.UpdateInstanceMetadata", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="instance", - full_name="google.spanner.admin.instance.v1.UpdateInstanceMetadata.instance", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="start_time", - full_name="google.spanner.admin.instance.v1.UpdateInstanceMetadata.start_time", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="cancel_time", - full_name="google.spanner.admin.instance.v1.UpdateInstanceMetadata.cancel_time", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="end_time", - full_name="google.spanner.admin.instance.v1.UpdateInstanceMetadata.end_time", - index=3, - number=4, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2725, - serialized_end=2954, -) - -_REPLICAINFO.fields_by_name["type"].enum_type = _REPLICAINFO_REPLICATYPE -_REPLICAINFO_REPLICATYPE.containing_type = _REPLICAINFO -_INSTANCECONFIG.fields_by_name["replicas"].message_type = _REPLICAINFO -_INSTANCE_LABELSENTRY.containing_type = _INSTANCE -_INSTANCE.fields_by_name["state"].enum_type = _INSTANCE_STATE -_INSTANCE.fields_by_name["labels"].message_type = _INSTANCE_LABELSENTRY -_INSTANCE_STATE.containing_type = _INSTANCE -_LISTINSTANCECONFIGSRESPONSE.fields_by_name[ - "instance_configs" -].message_type = _INSTANCECONFIG -_GETINSTANCEREQUEST.fields_by_name[ - "field_mask" -].message_type = google_dot_protobuf_dot_field__mask__pb2._FIELDMASK -_CREATEINSTANCEREQUEST.fields_by_name["instance"].message_type = _INSTANCE -_LISTINSTANCESRESPONSE.fields_by_name["instances"].message_type = _INSTANCE -_UPDATEINSTANCEREQUEST.fields_by_name["instance"].message_type = _INSTANCE -_UPDATEINSTANCEREQUEST.fields_by_name[ - "field_mask" -].message_type = google_dot_protobuf_dot_field__mask__pb2._FIELDMASK -_CREATEINSTANCEMETADATA.fields_by_name["instance"].message_type = _INSTANCE -_CREATEINSTANCEMETADATA.fields_by_name[ - "start_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP 
-_CREATEINSTANCEMETADATA.fields_by_name[ - "cancel_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_CREATEINSTANCEMETADATA.fields_by_name[ - "end_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_UPDATEINSTANCEMETADATA.fields_by_name["instance"].message_type = _INSTANCE -_UPDATEINSTANCEMETADATA.fields_by_name[ - "start_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_UPDATEINSTANCEMETADATA.fields_by_name[ - "cancel_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_UPDATEINSTANCEMETADATA.fields_by_name[ - "end_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -DESCRIPTOR.message_types_by_name["ReplicaInfo"] = _REPLICAINFO -DESCRIPTOR.message_types_by_name["InstanceConfig"] = _INSTANCECONFIG -DESCRIPTOR.message_types_by_name["Instance"] = _INSTANCE -DESCRIPTOR.message_types_by_name[ - "ListInstanceConfigsRequest" -] = _LISTINSTANCECONFIGSREQUEST -DESCRIPTOR.message_types_by_name[ - "ListInstanceConfigsResponse" -] = _LISTINSTANCECONFIGSRESPONSE -DESCRIPTOR.message_types_by_name["GetInstanceConfigRequest"] = _GETINSTANCECONFIGREQUEST -DESCRIPTOR.message_types_by_name["GetInstanceRequest"] = _GETINSTANCEREQUEST -DESCRIPTOR.message_types_by_name["CreateInstanceRequest"] = _CREATEINSTANCEREQUEST -DESCRIPTOR.message_types_by_name["ListInstancesRequest"] = _LISTINSTANCESREQUEST -DESCRIPTOR.message_types_by_name["ListInstancesResponse"] = _LISTINSTANCESRESPONSE -DESCRIPTOR.message_types_by_name["UpdateInstanceRequest"] = _UPDATEINSTANCEREQUEST -DESCRIPTOR.message_types_by_name["DeleteInstanceRequest"] = _DELETEINSTANCEREQUEST -DESCRIPTOR.message_types_by_name["CreateInstanceMetadata"] = _CREATEINSTANCEMETADATA -DESCRIPTOR.message_types_by_name["UpdateInstanceMetadata"] = _UPDATEINSTANCEMETADATA -_sym_db.RegisterFileDescriptor(DESCRIPTOR) - -ReplicaInfo = _reflection.GeneratedProtocolMessageType( - "ReplicaInfo", - (_message.Message,), - dict( - DESCRIPTOR=_REPLICAINFO, - __module__="google.cloud.spanner.admin.instance_v1.proto.spanner_instance_admin_pb2", - __doc__="""Protocol buffer. - - Attributes: - location: - The location of the serving resources, e.g. "us-central1". - type: - The type of replica. - default_leader_location: - If true, this location is designated as the default leader - location where leader replicas are placed. See the `region - types documentation `__ for more details. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.instance.v1.ReplicaInfo) - ), -) -_sym_db.RegisterMessage(ReplicaInfo) - -InstanceConfig = _reflection.GeneratedProtocolMessageType( - "InstanceConfig", - (_message.Message,), - dict( - DESCRIPTOR=_INSTANCECONFIG, - __module__="google.cloud.spanner.admin.instance_v1.proto.spanner_instance_admin_pb2", - __doc__="""A possible configuration for a Cloud Spanner instance. - Configurations define the geographic placement of nodes and their - replication. - - - Attributes: - name: - A unique identifier for the instance configuration. Values are - of the form - ``projects//instanceConfigs/[a-z][-a-z0-9]*`` - display_name: - The name of this instance configuration as it appears in UIs. - replicas: - The geographic placement of nodes in this instance - configuration and their replication properties. 
- """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.instance.v1.InstanceConfig) - ), -) -_sym_db.RegisterMessage(InstanceConfig) - -Instance = _reflection.GeneratedProtocolMessageType( - "Instance", - (_message.Message,), - dict( - LabelsEntry=_reflection.GeneratedProtocolMessageType( - "LabelsEntry", - (_message.Message,), - dict( - DESCRIPTOR=_INSTANCE_LABELSENTRY, - __module__="google.cloud.spanner.admin.instance_v1.proto.spanner_instance_admin_pb2" - # @@protoc_insertion_point(class_scope:google.spanner.admin.instance.v1.Instance.LabelsEntry) - ), - ), - DESCRIPTOR=_INSTANCE, - __module__="google.cloud.spanner.admin.instance_v1.proto.spanner_instance_admin_pb2", - __doc__="""An isolated set of Cloud Spanner resources on which - databases can be hosted. - - - Attributes: - name: - Required. A unique identifier for the instance, which cannot - be changed after the instance is created. Values are of the - form ``projects//instances/[a-z][-a-z0-9]*[a-z0-9]``. - The final segment of the name must be between 2 and 64 - characters in length. - config: - Required. The name of the instance's configuration. Values are - of the form - ``projects//instanceConfigs/``. See - also [InstanceConfig][google.spanner.admin.instance.v1.Instanc - eConfig] and [ListInstanceConfigs][google.spanner.admin.instan - ce.v1.InstanceAdmin.ListInstanceConfigs]. - display_name: - Required. The descriptive name for this instance as it appears - in UIs. Must be unique per project and between 4 and 30 - characters in length. - node_count: - Required. The number of nodes allocated to this instance. This - may be zero in API responses for instances that are not yet in - state ``READY``. See `the documentation `__ for more - information about nodes. - state: - Output only. The current instance state. For [CreateInstance][ - google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance] - , the state must be either omitted or set to ``CREATING``. For - [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmi - n.UpdateInstance], the state must be either omitted or set to - ``READY``. - labels: - Cloud Labels are a flexible and lightweight mechanism for - organizing cloud resources into groups that reflect a - customer's organizational needs and deployment strategies. - Cloud Labels can be used to filter collections of resources. - They can be used to control how resource metrics are - aggregated. And they can be used as arguments to policy - management rules (e.g. route, firewall, load balancing, etc.). - - Label keys must be between 1 and 63 characters long and - must conform to the following regular expression: - ``[a-z]([-a-z0-9]*[a-z0-9])?``. - Label values must be - between 0 and 63 characters long and must conform to the - regular expression ``([a-z]([-a-z0-9]*[a-z0-9])?)?``. - No - more than 64 labels can be associated with a given resource. - See https://goo.gl/xmQnxf for more information on and examples - of labels. If you plan to use labels in your own code, please - note that additional characters may be allowed in the future. - And so you are advised to use an internal label - representation, such as JSON, which doesn't rely upon specific - characters being disallowed. For example, representing labels - as the string: name + "*" + value would prove problematic if - we were to allow "*" in a future release. - endpoint_uris: - Deprecated. This field is not populated. 
- """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.instance.v1.Instance) - ), -) -_sym_db.RegisterMessage(Instance) -_sym_db.RegisterMessage(Instance.LabelsEntry) - -ListInstanceConfigsRequest = _reflection.GeneratedProtocolMessageType( - "ListInstanceConfigsRequest", - (_message.Message,), - dict( - DESCRIPTOR=_LISTINSTANCECONFIGSREQUEST, - __module__="google.cloud.spanner.admin.instance_v1.proto.spanner_instance_admin_pb2", - __doc__="""The request for - [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs]. - - - Attributes: - parent: - Required. The name of the project for which a list of - supported instance configurations is requested. Values are of - the form ``projects/``. - page_size: - Number of instance configurations to be returned in the - response. If 0 or less, defaults to the server's maximum - allowed page size. - page_token: - If non-empty, ``page_token`` should contain a [next\_page\_tok - en][google.spanner.admin.instance.v1.ListInstanceConfigsRespon - se.next\_page\_token] from a previous [ListInstanceConfigsResp - onse][google.spanner.admin.instance.v1.ListInstanceConfigsResp - onse]. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.instance.v1.ListInstanceConfigsRequest) - ), -) -_sym_db.RegisterMessage(ListInstanceConfigsRequest) - -ListInstanceConfigsResponse = _reflection.GeneratedProtocolMessageType( - "ListInstanceConfigsResponse", - (_message.Message,), - dict( - DESCRIPTOR=_LISTINSTANCECONFIGSRESPONSE, - __module__="google.cloud.spanner.admin.instance_v1.proto.spanner_instance_admin_pb2", - __doc__="""The response for - [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs]. - - - Attributes: - instance_configs: - The list of requested instance configurations. - next_page_token: - \ ``next_page_token`` can be sent in a subsequent [ListInstanc - eConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListI - nstanceConfigs] call to fetch more of the matching instance - configurations. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.instance.v1.ListInstanceConfigsResponse) - ), -) -_sym_db.RegisterMessage(ListInstanceConfigsResponse) - -GetInstanceConfigRequest = _reflection.GeneratedProtocolMessageType( - "GetInstanceConfigRequest", - (_message.Message,), - dict( - DESCRIPTOR=_GETINSTANCECONFIGREQUEST, - __module__="google.cloud.spanner.admin.instance_v1.proto.spanner_instance_admin_pb2", - __doc__="""The request for - [GetInstanceConfigRequest][google.spanner.admin.instance.v1.InstanceAdmin.GetInstanceConfig]. - - - Attributes: - name: - Required. The name of the requested instance configuration. - Values are of the form - ``projects//instanceConfigs/``. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.instance.v1.GetInstanceConfigRequest) - ), -) -_sym_db.RegisterMessage(GetInstanceConfigRequest) - -GetInstanceRequest = _reflection.GeneratedProtocolMessageType( - "GetInstanceRequest", - (_message.Message,), - dict( - DESCRIPTOR=_GETINSTANCEREQUEST, - __module__="google.cloud.spanner.admin.instance_v1.proto.spanner_instance_admin_pb2", - __doc__="""The request for - [GetInstance][google.spanner.admin.instance.v1.InstanceAdmin.GetInstance]. - - - Attributes: - name: - Required. The name of the requested instance. Values are of - the form ``projects//instances/``. 
- field_mask: - If field\_mask is present, specifies the subset of - [Instance][google.spanner.admin.instance.v1.Instance] fields - that should be returned. If absent, all - [Instance][google.spanner.admin.instance.v1.Instance] fields - are returned. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.instance.v1.GetInstanceRequest) - ), -) -_sym_db.RegisterMessage(GetInstanceRequest) - -CreateInstanceRequest = _reflection.GeneratedProtocolMessageType( - "CreateInstanceRequest", - (_message.Message,), - dict( - DESCRIPTOR=_CREATEINSTANCEREQUEST, - __module__="google.cloud.spanner.admin.instance_v1.proto.spanner_instance_admin_pb2", - __doc__="""The request for - [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance]. - - - Attributes: - parent: - Required. The name of the project in which to create the - instance. Values are of the form ``projects/``. - instance_id: - Required. The ID of the instance to create. Valid identifiers - are of the form ``[a-z][-a-z0-9]*[a-z0-9]`` and must be - between 2 and 64 characters in length. - instance: - Required. The instance to create. The name may be omitted, but - if specified must be ``/instances/``. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.instance.v1.CreateInstanceRequest) - ), -) -_sym_db.RegisterMessage(CreateInstanceRequest) - -ListInstancesRequest = _reflection.GeneratedProtocolMessageType( - "ListInstancesRequest", - (_message.Message,), - dict( - DESCRIPTOR=_LISTINSTANCESREQUEST, - __module__="google.cloud.spanner.admin.instance_v1.proto.spanner_instance_admin_pb2", - __doc__="""The request for - [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances]. - - - Attributes: - parent: - Required. The name of the project for which a list of - instances is requested. Values are of the form - ``projects/``. - page_size: - Number of instances to be returned in the response. If 0 or - less, defaults to the server's maximum allowed page size. - page_token: - If non-empty, ``page_token`` should contain a [next\_page\_tok - en][google.spanner.admin.instance.v1.ListInstancesResponse.nex - t\_page\_token] from a previous [ListInstancesResponse][google - .spanner.admin.instance.v1.ListInstancesResponse]. - filter: - An expression for filtering the results of the request. Filter - rules are case insensitive. The fields eligible for filtering - are: - ``name`` - ``display_name`` - ``labels.key`` where - key is the name of a label Some examples of using filters - are: - ``name:*`` --> The instance has a name. - - ``name:Howl`` --> The instance's name contains the string - "howl". - ``name:HOWL`` --> Equivalent to above. - - ``NAME:howl`` --> Equivalent to above. - ``labels.env:*`` --> - The instance has the label "env". - ``labels.env:dev`` --> - The instance has the label "env" and the value of the label - contains the string "dev". - ``name:howl labels.env:dev`` --> - The instance's name contains "howl" and it has the label - "env" with its value containing "dev". 
- """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.instance.v1.ListInstancesRequest) - ), -) -_sym_db.RegisterMessage(ListInstancesRequest) - -ListInstancesResponse = _reflection.GeneratedProtocolMessageType( - "ListInstancesResponse", - (_message.Message,), - dict( - DESCRIPTOR=_LISTINSTANCESRESPONSE, - __module__="google.cloud.spanner.admin.instance_v1.proto.spanner_instance_admin_pb2", - __doc__="""The response for - [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances]. - - - Attributes: - instances: - The list of requested instances. - next_page_token: - \ ``next_page_token`` can be sent in a subsequent [ListInstanc - es][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanc - es] call to fetch more of the matching instances. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.instance.v1.ListInstancesResponse) - ), -) -_sym_db.RegisterMessage(ListInstancesResponse) - -UpdateInstanceRequest = _reflection.GeneratedProtocolMessageType( - "UpdateInstanceRequest", - (_message.Message,), - dict( - DESCRIPTOR=_UPDATEINSTANCEREQUEST, - __module__="google.cloud.spanner.admin.instance_v1.proto.spanner_instance_admin_pb2", - __doc__="""The request for - [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance]. - - - Attributes: - instance: - Required. The instance to update, which must always include - the instance name. Otherwise, only fields mentioned in [field\ - _mask][google.spanner.admin.instance.v1.UpdateInstanceRequest. - field\_mask] need be included. - field_mask: - Required. A mask specifying which fields in - [Instance][google.spanner.admin.instance.v1.Instance] should - be updated. The field mask must always be specified; this - prevents any future fields in - [Instance][google.spanner.admin.instance.v1.Instance] from - being erased accidentally by clients that do not know about - them. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.instance.v1.UpdateInstanceRequest) - ), -) -_sym_db.RegisterMessage(UpdateInstanceRequest) - -DeleteInstanceRequest = _reflection.GeneratedProtocolMessageType( - "DeleteInstanceRequest", - (_message.Message,), - dict( - DESCRIPTOR=_DELETEINSTANCEREQUEST, - __module__="google.cloud.spanner.admin.instance_v1.proto.spanner_instance_admin_pb2", - __doc__="""The request for - [DeleteInstance][google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstance]. - - - Attributes: - name: - Required. The name of the instance to be deleted. Values are - of the form ``projects//instances/`` - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.instance.v1.DeleteInstanceRequest) - ), -) -_sym_db.RegisterMessage(DeleteInstanceRequest) - -CreateInstanceMetadata = _reflection.GeneratedProtocolMessageType( - "CreateInstanceMetadata", - (_message.Message,), - dict( - DESCRIPTOR=_CREATEINSTANCEMETADATA, - __module__="google.cloud.spanner.admin.instance_v1.proto.spanner_instance_admin_pb2", - __doc__="""Metadata type for the operation returned by - [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance]. - - - Attributes: - instance: - The instance being created. - start_time: - The time at which the [CreateInstance][google.spanner.admin.in - stance.v1.InstanceAdmin.CreateInstance] request was received. - cancel_time: - The time at which this operation was cancelled. If set, this - operation is in the process of undoing itself (which is - guaranteed to succeed) and cannot be cancelled again. 
- end_time: - The time at which this operation failed or was completed - successfully. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.instance.v1.CreateInstanceMetadata) - ), -) -_sym_db.RegisterMessage(CreateInstanceMetadata) - -UpdateInstanceMetadata = _reflection.GeneratedProtocolMessageType( - "UpdateInstanceMetadata", - (_message.Message,), - dict( - DESCRIPTOR=_UPDATEINSTANCEMETADATA, - __module__="google.cloud.spanner.admin.instance_v1.proto.spanner_instance_admin_pb2", - __doc__="""Metadata type for the operation returned by - [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance]. - - - Attributes: - instance: - The desired end state of the update. - start_time: - The time at which [UpdateInstance][google.spanner.admin.instan - ce.v1.InstanceAdmin.UpdateInstance] request was received. - cancel_time: - The time at which this operation was cancelled. If set, this - operation is in the process of undoing itself (which is - guaranteed to succeed) and cannot be cancelled again. - end_time: - The time at which this operation failed or was completed - successfully. - """, - # @@protoc_insertion_point(class_scope:google.spanner.admin.instance.v1.UpdateInstanceMetadata) - ), -) -_sym_db.RegisterMessage(UpdateInstanceMetadata) - - -DESCRIPTOR._options = None -_INSTANCECONFIG._options = None -_INSTANCE_LABELSENTRY._options = None -_INSTANCE.fields_by_name["config"]._options = None -_INSTANCE._options = None -_LISTINSTANCECONFIGSREQUEST.fields_by_name["parent"]._options = None -_GETINSTANCECONFIGREQUEST.fields_by_name["name"]._options = None -_GETINSTANCEREQUEST.fields_by_name["name"]._options = None -_CREATEINSTANCEREQUEST.fields_by_name["parent"]._options = None -_CREATEINSTANCEREQUEST.fields_by_name["instance_id"]._options = None -_CREATEINSTANCEREQUEST.fields_by_name["instance"]._options = None -_LISTINSTANCESREQUEST.fields_by_name["parent"]._options = None -_UPDATEINSTANCEREQUEST.fields_by_name["instance"]._options = None -_UPDATEINSTANCEREQUEST.fields_by_name["field_mask"]._options = None -_DELETEINSTANCEREQUEST.fields_by_name["name"]._options = None - -_INSTANCEADMIN = _descriptor.ServiceDescriptor( - name="InstanceAdmin", - full_name="google.spanner.admin.instance.v1.InstanceAdmin", - file=DESCRIPTOR, - index=0, - serialized_options=_b( - "\312A\026spanner.googleapis.com\322A\\https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/spanner.admin" - ), - serialized_start=2957, - serialized_end=5068, - methods=[ - _descriptor.MethodDescriptor( - name="ListInstanceConfigs", - full_name="google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs", - index=0, - containing_service=None, - input_type=_LISTINSTANCECONFIGSREQUEST, - output_type=_LISTINSTANCECONFIGSRESPONSE, - serialized_options=_b( - "\202\323\344\223\002)\022'/v1/{parent=projects/*}/instanceConfigs\332A\006parent" - ), - ), - _descriptor.MethodDescriptor( - name="GetInstanceConfig", - full_name="google.spanner.admin.instance.v1.InstanceAdmin.GetInstanceConfig", - index=1, - containing_service=None, - input_type=_GETINSTANCECONFIGREQUEST, - output_type=_INSTANCECONFIG, - serialized_options=_b( - "\202\323\344\223\002)\022'/v1/{name=projects/*/instanceConfigs/*}\332A\004name" - ), - ), - _descriptor.MethodDescriptor( - name="ListInstances", - full_name="google.spanner.admin.instance.v1.InstanceAdmin.ListInstances", - index=2, - containing_service=None, - input_type=_LISTINSTANCESREQUEST, - output_type=_LISTINSTANCESRESPONSE, - serialized_options=_b( - 
"\202\323\344\223\002#\022!/v1/{parent=projects/*}/instances\332A\006parent" - ), - ), - _descriptor.MethodDescriptor( - name="GetInstance", - full_name="google.spanner.admin.instance.v1.InstanceAdmin.GetInstance", - index=3, - containing_service=None, - input_type=_GETINSTANCEREQUEST, - output_type=_INSTANCE, - serialized_options=_b( - "\202\323\344\223\002#\022!/v1/{name=projects/*/instances/*}\332A\004name" - ), - ), - _descriptor.MethodDescriptor( - name="CreateInstance", - full_name="google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance", - index=4, - containing_service=None, - input_type=_CREATEINSTANCEREQUEST, - output_type=google_dot_longrunning_dot_operations__pb2._OPERATION, - serialized_options=_b( - '\202\323\344\223\002&"!/v1/{parent=projects/*}/instances:\001*\332A\033parent,instance_id,instance\312Ad\n)google.spanner.admin.instance.v1.Instance\0227google.spanner.admin.instance.v1.CreateInstanceMetadata' - ), - ), - _descriptor.MethodDescriptor( - name="UpdateInstance", - full_name="google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance", - index=5, - containing_service=None, - input_type=_UPDATEINSTANCEREQUEST, - output_type=google_dot_longrunning_dot_operations__pb2._OPERATION, - serialized_options=_b( - "\202\323\344\223\002/2*/v1/{instance.name=projects/*/instances/*}:\001*\332A\023instance,field_mask\312Ad\n)google.spanner.admin.instance.v1.Instance\0227google.spanner.admin.instance.v1.UpdateInstanceMetadata" - ), - ), - _descriptor.MethodDescriptor( - name="DeleteInstance", - full_name="google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstance", - index=6, - containing_service=None, - input_type=_DELETEINSTANCEREQUEST, - output_type=google_dot_protobuf_dot_empty__pb2._EMPTY, - serialized_options=_b( - "\202\323\344\223\002#*!/v1/{name=projects/*/instances/*}\332A\004name" - ), - ), - _descriptor.MethodDescriptor( - name="SetIamPolicy", - full_name="google.spanner.admin.instance.v1.InstanceAdmin.SetIamPolicy", - index=7, - containing_service=None, - input_type=google_dot_iam_dot_v1_dot_iam__policy__pb2._SETIAMPOLICYREQUEST, - output_type=google_dot_iam_dot_v1_dot_policy__pb2._POLICY, - serialized_options=_b( - '\202\323\344\223\0027"2/v1/{resource=projects/*/instances/*}:setIamPolicy:\001*\332A\017resource,policy' - ), - ), - _descriptor.MethodDescriptor( - name="GetIamPolicy", - full_name="google.spanner.admin.instance.v1.InstanceAdmin.GetIamPolicy", - index=8, - containing_service=None, - input_type=google_dot_iam_dot_v1_dot_iam__policy__pb2._GETIAMPOLICYREQUEST, - output_type=google_dot_iam_dot_v1_dot_policy__pb2._POLICY, - serialized_options=_b( - '\202\323\344\223\0027"2/v1/{resource=projects/*/instances/*}:getIamPolicy:\001*\332A\010resource' - ), - ), - _descriptor.MethodDescriptor( - name="TestIamPermissions", - full_name="google.spanner.admin.instance.v1.InstanceAdmin.TestIamPermissions", - index=9, - containing_service=None, - input_type=google_dot_iam_dot_v1_dot_iam__policy__pb2._TESTIAMPERMISSIONSREQUEST, - output_type=google_dot_iam_dot_v1_dot_iam__policy__pb2._TESTIAMPERMISSIONSRESPONSE, - serialized_options=_b( - '\202\323\344\223\002="8/v1/{resource=projects/*/instances/*}:testIamPermissions:\001*\332A\024resource,permissions' - ), - ), - ], -) -_sym_db.RegisterServiceDescriptor(_INSTANCEADMIN) - -DESCRIPTOR.services_by_name["InstanceAdmin"] = _INSTANCEADMIN - -# @@protoc_insertion_point(module_scope) diff --git a/google/cloud/spanner_admin_instance_v1/proto/spanner_instance_admin_pb2_grpc.py 
diff --git a/google/cloud/spanner_admin_instance_v1/proto/spanner_instance_admin_pb2_grpc.py b/google/cloud/spanner_admin_instance_v1/proto/spanner_instance_admin_pb2_grpc.py
deleted file mode 100644
index b7276a9f92..0000000000
--- a/google/cloud/spanner_admin_instance_v1/proto/spanner_instance_admin_pb2_grpc.py
+++ /dev/null
@@ -1,343 +0,0 @@
-# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
-import grpc
-
-from google.cloud.spanner_admin_instance_v1.proto import (
-    spanner_instance_admin_pb2 as google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2,
-)
-from google.iam.v1 import iam_policy_pb2 as google_dot_iam_dot_v1_dot_iam__policy__pb2
-from google.iam.v1 import policy_pb2 as google_dot_iam_dot_v1_dot_policy__pb2
-from google.longrunning import (
-    operations_pb2 as google_dot_longrunning_dot_operations__pb2,
-)
-from google.protobuf import empty_pb2 as google_dot_protobuf_dot_empty__pb2
-
-
-class InstanceAdminStub(object):
-    """Cloud Spanner Instance Admin API
-
-    The Cloud Spanner Instance Admin API can be used to create, delete,
-    modify and list instances. Instances are dedicated Cloud Spanner serving
-    and storage resources to be used by Cloud Spanner databases.
-
-    Each instance has a "configuration", which dictates where the
-    serving resources for the Cloud Spanner instance are located (e.g.,
-    US-central, Europe). Configurations are created by Google based on
-    resource availability.
-
-    Cloud Spanner billing is based on the instances that exist and their
-    sizes. After an instance exists, there are no additional
-    per-database or per-operation charges for use of the instance
-    (though there may be additional network bandwidth charges).
-    Instances offer isolation: problems with databases in one instance
-    will not affect other instances. However, within an instance
-    databases can affect each other. For example, if one database in an
-    instance receives a lot of requests and consumes most of the
-    instance resources, fewer resources are available for other
-    databases in that instance, and their performance may suffer.
-    """
-
-    def __init__(self, channel):
-        """Constructor.
-
-        Args:
-            channel: A grpc.Channel.
-        """
-        self.ListInstanceConfigs = channel.unary_unary(
-            "/google.spanner.admin.instance.v1.InstanceAdmin/ListInstanceConfigs",
-            request_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.ListInstanceConfigsRequest.SerializeToString,
-            response_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.ListInstanceConfigsResponse.FromString,
-        )
-        self.GetInstanceConfig = channel.unary_unary(
-            "/google.spanner.admin.instance.v1.InstanceAdmin/GetInstanceConfig",
-            request_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.GetInstanceConfigRequest.SerializeToString,
-            response_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.InstanceConfig.FromString,
-        )
-        self.ListInstances = channel.unary_unary(
-            "/google.spanner.admin.instance.v1.InstanceAdmin/ListInstances",
-            request_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.ListInstancesRequest.SerializeToString,
-            response_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.ListInstancesResponse.FromString,
-        )
-        self.GetInstance = channel.unary_unary(
-            "/google.spanner.admin.instance.v1.InstanceAdmin/GetInstance",
-            request_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.GetInstanceRequest.SerializeToString,
-            response_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.Instance.FromString,
-        )
-        self.CreateInstance = channel.unary_unary(
-            "/google.spanner.admin.instance.v1.InstanceAdmin/CreateInstance",
-            request_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.CreateInstanceRequest.SerializeToString,
-            response_deserializer=google_dot_longrunning_dot_operations__pb2.Operation.FromString,
-        )
-        self.UpdateInstance = channel.unary_unary(
-            "/google.spanner.admin.instance.v1.InstanceAdmin/UpdateInstance",
-            request_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.UpdateInstanceRequest.SerializeToString,
-            response_deserializer=google_dot_longrunning_dot_operations__pb2.Operation.FromString,
-        )
-        self.DeleteInstance = channel.unary_unary(
-            "/google.spanner.admin.instance.v1.InstanceAdmin/DeleteInstance",
-            request_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.DeleteInstanceRequest.SerializeToString,
-            response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString,
-        )
-        self.SetIamPolicy = channel.unary_unary(
-            "/google.spanner.admin.instance.v1.InstanceAdmin/SetIamPolicy",
-            request_serializer=google_dot_iam_dot_v1_dot_iam__policy__pb2.SetIamPolicyRequest.SerializeToString,
-            response_deserializer=google_dot_iam_dot_v1_dot_policy__pb2.Policy.FromString,
-        )
-        self.GetIamPolicy = channel.unary_unary(
-            "/google.spanner.admin.instance.v1.InstanceAdmin/GetIamPolicy",
-            request_serializer=google_dot_iam_dot_v1_dot_iam__policy__pb2.GetIamPolicyRequest.SerializeToString,
-            response_deserializer=google_dot_iam_dot_v1_dot_policy__pb2.Policy.FromString,
-        )
-        self.TestIamPermissions = channel.unary_unary(
-            "/google.spanner.admin.instance.v1.InstanceAdmin/TestIamPermissions",
-            request_serializer=google_dot_iam_dot_v1_dot_iam__policy__pb2.TestIamPermissionsRequest.SerializeToString,
-            response_deserializer=google_dot_iam_dot_v1_dot_iam__policy__pb2.TestIamPermissionsResponse.FromString,
-        )
-
-
-class InstanceAdminServicer(object):
-    """Cloud Spanner Instance Admin API
-
-    The Cloud Spanner Instance Admin API can be used to create, delete,
-    modify and list instances. Instances are dedicated Cloud Spanner serving
-    and storage resources to be used by Cloud Spanner databases.
-
-    Each instance has a "configuration", which dictates where the
-    serving resources for the Cloud Spanner instance are located (e.g.,
-    US-central, Europe). Configurations are created by Google based on
-    resource availability.
-
-    Cloud Spanner billing is based on the instances that exist and their
-    sizes. After an instance exists, there are no additional
-    per-database or per-operation charges for use of the instance
-    (though there may be additional network bandwidth charges).
-    Instances offer isolation: problems with databases in one instance
-    will not affect other instances. However, within an instance
-    databases can affect each other. For example, if one database in an
-    instance receives a lot of requests and consumes most of the
-    instance resources, fewer resources are available for other
-    databases in that instance, and their performance may suffer.
-    """
-
-    def ListInstanceConfigs(self, request, context):
-        """Lists the supported instance configurations for a given project.
-        """
-        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
-        context.set_details("Method not implemented!")
-        raise NotImplementedError("Method not implemented!")
-
-    def GetInstanceConfig(self, request, context):
-        """Gets information about a particular instance configuration.
-        """
-        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
-        context.set_details("Method not implemented!")
-        raise NotImplementedError("Method not implemented!")
-
-    def ListInstances(self, request, context):
-        """Lists all instances in the given project.
-        """
-        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
-        context.set_details("Method not implemented!")
-        raise NotImplementedError("Method not implemented!")
-
-    def GetInstance(self, request, context):
-        """Gets information about a particular instance.
-        """
-        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
-        context.set_details("Method not implemented!")
-        raise NotImplementedError("Method not implemented!")
-
-    def CreateInstance(self, request, context):
-        """Creates an instance and begins preparing it to begin serving. The
-        returned [long-running operation][google.longrunning.Operation]
-        can be used to track the progress of preparing the new
-        instance. The instance name is assigned by the caller. If the
-        named instance already exists, `CreateInstance` returns
-        `ALREADY_EXISTS`.
-
-        Immediately upon completion of this request:
-
-        * The instance is readable via the API, with all requested attributes
-        but no allocated resources. Its state is `CREATING`.
-
-        Until completion of the returned operation:
-
-        * Cancelling the operation renders the instance immediately unreadable
-        via the API.
-        * The instance can be deleted.
-        * All other attempts to modify the instance are rejected.
-
-        Upon completion of the returned operation:
-
-        * Billing for all successfully-allocated resources begins (some types
-        may have lower than the requested levels).
-        * Databases can be created in the instance.
- * The instance's allocated resource levels are readable via the API. - * The instance's state becomes `READY`. - - The returned [long-running operation][google.longrunning.Operation] will - have a name of the format `/operations/` and - can be used to track creation of the instance. The - [metadata][google.longrunning.Operation.metadata] field type is - [CreateInstanceMetadata][google.spanner.admin.instance.v1.CreateInstanceMetadata]. - The [response][google.longrunning.Operation.response] field type is - [Instance][google.spanner.admin.instance.v1.Instance], if successful. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def UpdateInstance(self, request, context): - """Updates an instance, and begins allocating or releasing resources - as requested. The returned [long-running - operation][google.longrunning.Operation] can be used to track the - progress of updating the instance. If the named instance does not - exist, returns `NOT_FOUND`. - - Immediately upon completion of this request: - - * For resource types for which a decrease in the instance's allocation - has been requested, billing is based on the newly-requested level. - - Until completion of the returned operation: - - * Cancelling the operation sets its metadata's - [cancel_time][google.spanner.admin.instance.v1.UpdateInstanceMetadata.cancel_time], and begins - restoring resources to their pre-request values. The operation - is guaranteed to succeed at undoing all resource changes, - after which point it terminates with a `CANCELLED` status. - * All other attempts to modify the instance are rejected. - * Reading the instance via the API continues to give the pre-request - resource levels. - - Upon completion of the returned operation: - - * Billing begins for all successfully-allocated resources (some types - may have lower than the requested levels). - * All newly-reserved resources are available for serving the instance's - tables. - * The instance's new resource levels are readable via the API. - - The returned [long-running operation][google.longrunning.Operation] will - have a name of the format `/operations/` and - can be used to track the instance modification. The - [metadata][google.longrunning.Operation.metadata] field type is - [UpdateInstanceMetadata][google.spanner.admin.instance.v1.UpdateInstanceMetadata]. - The [response][google.longrunning.Operation.response] field type is - [Instance][google.spanner.admin.instance.v1.Instance], if successful. - - Authorization requires `spanner.instances.update` permission on - resource [name][google.spanner.admin.instance.v1.Instance.name]. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def DeleteInstance(self, request, context): - """Deletes an instance. - - Immediately upon completion of the request: - - * Billing ceases for all of the instance's reserved resources. - - Soon afterward: - - * The instance and *all of its databases* immediately and - irrevocably disappear from the API. All data in the databases - is permanently deleted. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def SetIamPolicy(self, request, context): - """Sets the access control policy on an instance resource. Replaces any - existing policy. 
- - Authorization requires `spanner.instances.setIamPolicy` on - [resource][google.iam.v1.SetIamPolicyRequest.resource]. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def GetIamPolicy(self, request, context): - """Gets the access control policy for an instance resource. Returns an empty - policy if an instance exists but does not have a policy set. - - Authorization requires `spanner.instances.getIamPolicy` on - [resource][google.iam.v1.GetIamPolicyRequest.resource]. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def TestIamPermissions(self, request, context): - """Returns permissions that the caller has on the specified instance resource. - - Attempting this RPC on a non-existent Cloud Spanner instance resource will - result in a NOT_FOUND error if the user has `spanner.instances.list` - permission on the containing Google Cloud Project. Otherwise returns an - empty set of permissions. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - -def add_InstanceAdminServicer_to_server(servicer, server): - rpc_method_handlers = { - "ListInstanceConfigs": grpc.unary_unary_rpc_method_handler( - servicer.ListInstanceConfigs, - request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.ListInstanceConfigsRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.ListInstanceConfigsResponse.SerializeToString, - ), - "GetInstanceConfig": grpc.unary_unary_rpc_method_handler( - servicer.GetInstanceConfig, - request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.GetInstanceConfigRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.InstanceConfig.SerializeToString, - ), - "ListInstances": grpc.unary_unary_rpc_method_handler( - servicer.ListInstances, - request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.ListInstancesRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.ListInstancesResponse.SerializeToString, - ), - "GetInstance": grpc.unary_unary_rpc_method_handler( - servicer.GetInstance, - request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.GetInstanceRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.Instance.SerializeToString, - ), - "CreateInstance": grpc.unary_unary_rpc_method_handler( - servicer.CreateInstance, - request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.CreateInstanceRequest.FromString, - response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString, - ), - "UpdateInstance": grpc.unary_unary_rpc_method_handler( - servicer.UpdateInstance, - 
request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.UpdateInstanceRequest.FromString, - response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString, - ), - "DeleteInstance": grpc.unary_unary_rpc_method_handler( - servicer.DeleteInstance, - request_deserializer=google_dot_cloud_dot_spanner_dot_admin_dot_instance__v1_dot_proto_dot_spanner__instance__admin__pb2.DeleteInstanceRequest.FromString, - response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString, - ), - "SetIamPolicy": grpc.unary_unary_rpc_method_handler( - servicer.SetIamPolicy, - request_deserializer=google_dot_iam_dot_v1_dot_iam__policy__pb2.SetIamPolicyRequest.FromString, - response_serializer=google_dot_iam_dot_v1_dot_policy__pb2.Policy.SerializeToString, - ), - "GetIamPolicy": grpc.unary_unary_rpc_method_handler( - servicer.GetIamPolicy, - request_deserializer=google_dot_iam_dot_v1_dot_iam__policy__pb2.GetIamPolicyRequest.FromString, - response_serializer=google_dot_iam_dot_v1_dot_policy__pb2.Policy.SerializeToString, - ), - "TestIamPermissions": grpc.unary_unary_rpc_method_handler( - servicer.TestIamPermissions, - request_deserializer=google_dot_iam_dot_v1_dot_iam__policy__pb2.TestIamPermissionsRequest.FromString, - response_serializer=google_dot_iam_dot_v1_dot_iam__policy__pb2.TestIamPermissionsResponse.SerializeToString, - ), - } - generic_handler = grpc.method_handlers_generic_handler( - "google.spanner.admin.instance.v1.InstanceAdmin", rpc_method_handlers - ) - server.add_generic_rpc_handlers((generic_handler,)) diff --git a/google/cloud/spanner_admin_instance_v1/py.typed b/google/cloud/spanner_admin_instance_v1/py.typed new file mode 100644 index 0000000000..915a8e55e3 --- /dev/null +++ b/google/cloud/spanner_admin_instance_v1/py.typed @@ -0,0 +1,2 @@ +# Marker file for PEP 561. +# The google-cloud-spanner-admin-instance package uses inline types. diff --git a/google/cloud/spanner_admin_instance_v1/services/__init__.py b/google/cloud/spanner_admin_instance_v1/services/__init__.py new file mode 100644 index 0000000000..cbf94b283c --- /dev/null +++ b/google/cloud/spanner_admin_instance_v1/services/__init__.py @@ -0,0 +1,15 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# diff --git a/google/cloud/spanner_admin_instance_v1/services/instance_admin/__init__.py b/google/cloud/spanner_admin_instance_v1/services/instance_admin/__init__.py new file mode 100644 index 0000000000..51df22ca2e --- /dev/null +++ b/google/cloud/spanner_admin_instance_v1/services/instance_admin/__init__.py @@ -0,0 +1,22 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+from .client import InstanceAdminClient
+from .async_client import InstanceAdminAsyncClient
+
+__all__ = (
+    "InstanceAdminClient",
+    "InstanceAdminAsyncClient",
+)
diff --git a/google/cloud/spanner_admin_instance_v1/services/instance_admin/async_client.py b/google/cloud/spanner_admin_instance_v1/services/instance_admin/async_client.py
new file mode 100644
index 0000000000..1e87fc5a63
--- /dev/null
+++ b/google/cloud/spanner_admin_instance_v1/services/instance_admin/async_client.py
@@ -0,0 +1,3688 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+import logging as std_logging
+from collections import OrderedDict
+import re
+from typing import (
+    Dict,
+    Callable,
+    Mapping,
+    MutableMapping,
+    MutableSequence,
+    Optional,
+    Sequence,
+    Tuple,
+    Type,
+    Union,
+)
+import uuid
+
+from google.cloud.spanner_admin_instance_v1 import gapic_version as package_version
+
+from google.api_core.client_options import ClientOptions
+from google.api_core import exceptions as core_exceptions
+from google.api_core import gapic_v1
+from google.api_core import retry_async as retries
+from google.auth import credentials as ga_credentials  # type: ignore
+from google.oauth2 import service_account  # type: ignore
+import google.protobuf
+
+
+try:
+    OptionalRetry = Union[retries.AsyncRetry, gapic_v1.method._MethodDefault, None]
+except AttributeError:  # pragma: NO COVER
+    OptionalRetry = Union[retries.AsyncRetry, object, None]  # type: ignore
+
+from google.api_core import operation  # type: ignore
+from google.api_core import operation_async  # type: ignore
+from google.cloud.spanner_admin_instance_v1.services.instance_admin import pagers
+from google.cloud.spanner_admin_instance_v1.types import spanner_instance_admin
+from google.iam.v1 import iam_policy_pb2  # type: ignore
+from google.iam.v1 import policy_pb2  # type: ignore
+from google.longrunning import operations_pb2  # type: ignore
+from google.protobuf import field_mask_pb2  # type: ignore
+from google.protobuf import timestamp_pb2  # type: ignore
+from .transports.base import InstanceAdminTransport, DEFAULT_CLIENT_INFO
+from .transports.grpc_asyncio import InstanceAdminGrpcAsyncIOTransport
+from .client import InstanceAdminClient
+
+try:
+    from google.api_core import client_logging  # type: ignore
+
+    CLIENT_LOGGING_SUPPORTED = True  # pragma: NO COVER
+except ImportError:  # pragma: NO COVER
+    CLIENT_LOGGING_SUPPORTED = False
+
+_LOGGER = std_logging.getLogger(__name__)
+
+
+class InstanceAdminAsyncClient:
+    """Cloud Spanner Instance
Admin API + + The Cloud Spanner Instance Admin API can be used to create, + delete, modify and list instances. Instances are dedicated Cloud + Spanner serving and storage resources to be used by Cloud + Spanner databases. + + Each instance has a "configuration", which dictates where the + serving resources for the Cloud Spanner instance are located + (e.g., US-central, Europe). Configurations are created by Google + based on resource availability. + + Cloud Spanner billing is based on the instances that exist and + their sizes. After an instance exists, there are no additional + per-database or per-operation charges for use of the instance + (though there may be additional network bandwidth charges). + Instances offer isolation: problems with databases in one + instance will not affect other instances. However, within an + instance databases can affect each other. For example, if one + database in an instance receives a lot of requests and consumes + most of the instance resources, fewer resources are available + for other databases in that instance, and their performance may + suffer. + """ + + _client: InstanceAdminClient + + # Copy defaults from the synchronous client for use here. + # Note: DEFAULT_ENDPOINT is deprecated. Use _DEFAULT_ENDPOINT_TEMPLATE instead. + DEFAULT_ENDPOINT = InstanceAdminClient.DEFAULT_ENDPOINT + DEFAULT_MTLS_ENDPOINT = InstanceAdminClient.DEFAULT_MTLS_ENDPOINT + _DEFAULT_ENDPOINT_TEMPLATE = InstanceAdminClient._DEFAULT_ENDPOINT_TEMPLATE + _DEFAULT_UNIVERSE = InstanceAdminClient._DEFAULT_UNIVERSE + + instance_path = staticmethod(InstanceAdminClient.instance_path) + parse_instance_path = staticmethod(InstanceAdminClient.parse_instance_path) + instance_config_path = staticmethod(InstanceAdminClient.instance_config_path) + parse_instance_config_path = staticmethod( + InstanceAdminClient.parse_instance_config_path + ) + instance_partition_path = staticmethod(InstanceAdminClient.instance_partition_path) + parse_instance_partition_path = staticmethod( + InstanceAdminClient.parse_instance_partition_path + ) + common_billing_account_path = staticmethod( + InstanceAdminClient.common_billing_account_path + ) + parse_common_billing_account_path = staticmethod( + InstanceAdminClient.parse_common_billing_account_path + ) + common_folder_path = staticmethod(InstanceAdminClient.common_folder_path) + parse_common_folder_path = staticmethod( + InstanceAdminClient.parse_common_folder_path + ) + common_organization_path = staticmethod( + InstanceAdminClient.common_organization_path + ) + parse_common_organization_path = staticmethod( + InstanceAdminClient.parse_common_organization_path + ) + common_project_path = staticmethod(InstanceAdminClient.common_project_path) + parse_common_project_path = staticmethod( + InstanceAdminClient.parse_common_project_path + ) + common_location_path = staticmethod(InstanceAdminClient.common_location_path) + parse_common_location_path = staticmethod( + InstanceAdminClient.parse_common_location_path + ) + + @classmethod + def from_service_account_info(cls, info: dict, *args, **kwargs): + """Creates an instance of this client using the provided credentials + info. + + Args: + info (dict): The service account private key info. + args: Additional arguments to pass to the constructor. + kwargs: Additional arguments to pass to the constructor. + + Returns: + InstanceAdminAsyncClient: The constructed client. 
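
A minimal usage sketch for the factory methods above; the key-file path is a placeholder, not something this change ships:

.. code-block:: python

    from google.cloud import spanner_admin_instance_v1

    # Placeholder path to a downloaded service account key.
    client = spanner_admin_instance_v1.InstanceAdminAsyncClient.from_service_account_file(
        "service-account.json"
    )
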
+ """ + return InstanceAdminClient.from_service_account_info.__func__(InstanceAdminAsyncClient, info, *args, **kwargs) # type: ignore + + @classmethod + def from_service_account_file(cls, filename: str, *args, **kwargs): + """Creates an instance of this client using the provided credentials + file. + + Args: + filename (str): The path to the service account private key json + file. + args: Additional arguments to pass to the constructor. + kwargs: Additional arguments to pass to the constructor. + + Returns: + InstanceAdminAsyncClient: The constructed client. + """ + return InstanceAdminClient.from_service_account_file.__func__(InstanceAdminAsyncClient, filename, *args, **kwargs) # type: ignore + + from_service_account_json = from_service_account_file + + @classmethod + def get_mtls_endpoint_and_cert_source( + cls, client_options: Optional[ClientOptions] = None + ): + """Return the API endpoint and client cert source for mutual TLS. + + The client cert source is determined in the following order: + (1) if `GOOGLE_API_USE_CLIENT_CERTIFICATE` environment variable is not "true", the + client cert source is None. + (2) if `client_options.client_cert_source` is provided, use the provided one; if the + default client cert source exists, use the default one; otherwise the client cert + source is None. + + The API endpoint is determined in the following order: + (1) if `client_options.api_endpoint` if provided, use the provided one. + (2) if `GOOGLE_API_USE_CLIENT_CERTIFICATE` environment variable is "always", use the + default mTLS endpoint; if the environment variable is "never", use the default API + endpoint; otherwise if client cert source exists, use the default mTLS endpoint, otherwise + use the default API endpoint. + + More details can be found at https://google.aip.dev/auth/4114. + + Args: + client_options (google.api_core.client_options.ClientOptions): Custom options for the + client. Only the `api_endpoint` and `client_cert_source` properties may be used + in this method. + + Returns: + Tuple[str, Callable[[], Tuple[bytes, bytes]]]: returns the API endpoint and the + client cert source to use. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If any errors happen. + """ + return InstanceAdminClient.get_mtls_endpoint_and_cert_source(client_options) # type: ignore + + @property + def transport(self) -> InstanceAdminTransport: + """Returns the transport used by the client instance. + + Returns: + InstanceAdminTransport: The transport used by the client instance. + """ + return self._client.transport + + @property + def api_endpoint(self): + """Return the API endpoint used by the client instance. + + Returns: + str: The API endpoint used by the client instance. + """ + return self._client._api_endpoint + + @property + def universe_domain(self) -> str: + """Return the universe domain used by the client instance. + + Returns: + str: The universe domain used + by the client instance. + """ + return self._client._universe_domain + + get_transport_class = InstanceAdminClient.get_transport_class + + def __init__( + self, + *, + credentials: Optional[ga_credentials.Credentials] = None, + transport: Optional[ + Union[str, InstanceAdminTransport, Callable[..., InstanceAdminTransport]] + ] = "grpc_asyncio", + client_options: Optional[ClientOptions] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiates the instance admin async client. 
+
+        Args:
+            credentials (Optional[google.auth.credentials.Credentials]): The
+                authorization credentials to attach to requests. These
+                credentials identify the application to the service; if none
+                are specified, the client will attempt to ascertain the
+                credentials from the environment.
+            transport (Optional[Union[str,InstanceAdminTransport,Callable[..., InstanceAdminTransport]]]):
+                The transport to use, or a Callable that constructs and returns a new transport to use.
+                If a Callable is given, it will be called with the same set of initialization
+                arguments as used in the InstanceAdminTransport constructor.
+                If set to None, a transport is chosen automatically.
+            client_options (Optional[Union[google.api_core.client_options.ClientOptions, dict]]):
+                Custom options for the client.
+
+                1. The ``api_endpoint`` property can be used to override the
+                default endpoint provided by the client when ``transport`` is
+                not explicitly provided. Only if this property is not set and
+                ``transport`` was not explicitly provided, the endpoint is
+                determined by the GOOGLE_API_USE_MTLS_ENDPOINT environment
+                variable, which can have one of the following values:
+                "always" (always use the default mTLS endpoint), "never" (always
+                use the default regular endpoint) and "auto" (auto-switch to the
+                default mTLS endpoint if a client certificate is present; this is
+                the default value).
+
+                2. If the GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable
+                is "true", then the ``client_cert_source`` property can be used
+                to provide a client certificate for mTLS transport. If
+                not provided, the default SSL client certificate will be used if
+                present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not
+                set, no client certificate will be used.
+
+                3. The ``universe_domain`` property can be used to override the
+                default "googleapis.com" universe. Note that the ``api_endpoint``
+                property still takes precedence, and ``universe_domain`` is
+                currently not supported for mTLS.
+
+            client_info (google.api_core.gapic_v1.client_info.ClientInfo):
+                The client info used to send a user-agent string along with
+                API requests. If ``None``, then default info will be used.
+                Generally, you only need to set this if you're developing
+                your own client library.
+
+        Raises:
+            google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport
+                creation failed for any reason.
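
A minimal sketch of the ``client_options`` behavior described above; the endpoint value is illustrative, not a recommendation from this change:

.. code-block:: python

    from google.api_core.client_options import ClientOptions
    from google.cloud import spanner_admin_instance_v1

    # An explicit api_endpoint overrides the GOOGLE_API_USE_MTLS_ENDPOINT logic.
    options = ClientOptions(api_endpoint="spanner.googleapis.com")
    client = spanner_admin_instance_v1.InstanceAdminAsyncClient(client_options=options)
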
+ """ + self._client = InstanceAdminClient( + credentials=credentials, + transport=transport, + client_options=client_options, + client_info=client_info, + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + std_logging.DEBUG + ): # pragma: NO COVER + _LOGGER.debug( + "Created client `google.spanner.admin.instance_v1.InstanceAdminAsyncClient`.", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "universeDomain": getattr( + self._client._transport._credentials, "universe_domain", "" + ), + "credentialsType": f"{type(self._client._transport._credentials).__module__}.{type(self._client._transport._credentials).__qualname__}", + "credentialsInfo": getattr( + self.transport._credentials, "get_cred_info", lambda: None + )(), + } + if hasattr(self._client._transport, "_credentials") + else { + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "credentialsType": None, + }, + ) + + async def list_instance_configs( + self, + request: Optional[ + Union[spanner_instance_admin.ListInstanceConfigsRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListInstanceConfigsAsyncPager: + r"""Lists the supported instance configurations for a + given project. + Returns both Google-managed configurations and + user-managed configurations. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + async def sample_list_instance_configs(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.ListInstanceConfigsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_instance_configs(request=request) + + # Handle the response + async for response in page_result: + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsRequest, dict]]): + The request object. The request for + [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs]. + parent (:class:`str`): + Required. The name of the project for which a list of + supported instance configurations is requested. Values + are of the form ``projects/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ + Returns: + google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstanceConfigsAsyncPager: + The response for + [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs]. + + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.ListInstanceConfigsRequest): + request = spanner_instance_admin.ListInstanceConfigsRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.list_instance_configs + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__aiter__` convenience method. + response = pagers.ListInstanceConfigsAsyncPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def get_instance_config( + self, + request: Optional[ + Union[spanner_instance_admin.GetInstanceConfigRequest, dict] + ] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_instance_admin.InstanceConfig: + r"""Gets information about a particular instance + configuration. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + async def sample_get_instance_config(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.GetInstanceConfigRequest( + name="name_value", + ) + + # Make the request + response = await client.get_instance_config(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_instance_v1.types.GetInstanceConfigRequest, dict]]): + The request object. The request for + [GetInstanceConfigRequest][google.spanner.admin.instance.v1.InstanceAdmin.GetInstanceConfig]. + name (:class:`str`): + Required. The name of the requested instance + configuration. Values are of the form + ``projects//instanceConfigs/``. + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_instance_v1.types.InstanceConfig: + A possible configuration for a Cloud + Spanner instance. Configurations define + the geographic placement of nodes and + their replication. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.GetInstanceConfigRequest): + request = spanner_instance_admin.GetInstanceConfigRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.get_instance_config + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. 
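
The ``retry`` and ``timeout`` arguments documented on these methods accept ``google.api_core`` policy objects. A minimal sketch, assuming an existing ``client`` inside an async function; the policy values and configuration name are illustrative:

.. code-block:: python

    from google.api_core import exceptions as core_exceptions
    from google.api_core import retry_async as retries

    # Retry UNAVAILABLE with exponential backoff, giving up after ~30 seconds.
    custom_retry = retries.AsyncRetry(
        predicate=retries.if_exception_type(core_exceptions.ServiceUnavailable),
        initial=0.25,
        maximum=8.0,
        multiplier=2.0,
        timeout=30.0,
    )

    config = await client.get_instance_config(
        name="projects/my-project/instanceConfigs/regional-us-central1",
        retry=custom_retry,
        timeout=30.0,
    )
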
+ return response + + async def create_instance_config( + self, + request: Optional[ + Union[spanner_instance_admin.CreateInstanceConfigRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + instance_config: Optional[spanner_instance_admin.InstanceConfig] = None, + instance_config_id: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation_async.AsyncOperation: + r"""Creates an instance configuration and begins preparing it to be + used. The returned long-running operation can be used to track + the progress of preparing the new instance configuration. The + instance configuration name is assigned by the caller. If the + named instance configuration already exists, + ``CreateInstanceConfig`` returns ``ALREADY_EXISTS``. + + Immediately after the request returns: + + - The instance configuration is readable via the API, with all + requested attributes. The instance configuration's + [reconciling][google.spanner.admin.instance.v1.InstanceConfig.reconciling] + field is set to true. Its state is ``CREATING``. + + While the operation is pending: + + - Cancelling the operation renders the instance configuration + immediately unreadable via the API. + - Except for deleting the creating resource, all other attempts + to modify the instance configuration are rejected. + + Upon completion of the returned operation: + + - Instances can be created using the instance configuration. + - The instance configuration's + [reconciling][google.spanner.admin.instance.v1.InstanceConfig.reconciling] + field becomes false. Its state becomes ``READY``. + + The returned long-running operation will have a name of the + format ``/operations/`` and + can be used to track creation of the instance configuration. The + metadata field type is + [CreateInstanceConfigMetadata][google.spanner.admin.instance.v1.CreateInstanceConfigMetadata]. + The response field type is + [InstanceConfig][google.spanner.admin.instance.v1.InstanceConfig], + if successful. + + Authorization requires ``spanner.instanceConfigs.create`` + permission on the resource + [parent][google.spanner.admin.instance.v1.CreateInstanceConfigRequest.parent]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + async def sample_create_instance_config(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.CreateInstanceConfigRequest( + parent="parent_value", + instance_config_id="instance_config_id_value", + ) + + # Make the request + operation = client.create_instance_config(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_instance_v1.types.CreateInstanceConfigRequest, dict]]): + The request object. The request for + [CreateInstanceConfig][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstanceConfig]. 
+ parent (:class:`str`): + Required. The name of the project in which to create the + instance configuration. Values are of the form + ``projects/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + instance_config (:class:`google.cloud.spanner_admin_instance_v1.types.InstanceConfig`): + Required. The ``InstanceConfig`` proto of the + configuration to create. ``instance_config.name`` must + be ``/instanceConfigs/``. + ``instance_config.base_config`` must be a Google-managed + configuration name, e.g. /instanceConfigs/us-east1, + /instanceConfigs/nam3. + + This corresponds to the ``instance_config`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + instance_config_id (:class:`str`): + Required. The ID of the instance configuration to + create. Valid identifiers are of the form + ``custom-[-a-z0-9]*[a-z0-9]`` and must be between 2 and + 64 characters in length. The ``custom-`` prefix is + required to avoid name conflicts with Google-managed + configurations. + + This corresponds to the ``instance_config_id`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation_async.AsyncOperation: + An object representing a long-running operation. + + The result type for the operation will be :class:`google.cloud.spanner_admin_instance_v1.types.InstanceConfig` A possible configuration for a Cloud Spanner instance. Configurations + define the geographic placement of nodes and their + replication. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent, instance_config, instance_config_id] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.CreateInstanceConfigRequest): + request = spanner_instance_admin.CreateInstanceConfigRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + if instance_config is not None: + request.instance_config = instance_config + if instance_config_id is not None: + request.instance_config_id = instance_config_id + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.create_instance_config + ] + + # Certain fields should be provided within the metadata header; + # add these here. 
+ metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation_async.from_gapic( + response, + self._client._transport.operations_client, + spanner_instance_admin.InstanceConfig, + metadata_type=spanner_instance_admin.CreateInstanceConfigMetadata, + ) + + # Done; return the response. + return response + + async def update_instance_config( + self, + request: Optional[ + Union[spanner_instance_admin.UpdateInstanceConfigRequest, dict] + ] = None, + *, + instance_config: Optional[spanner_instance_admin.InstanceConfig] = None, + update_mask: Optional[field_mask_pb2.FieldMask] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation_async.AsyncOperation: + r"""Updates an instance configuration. The returned long-running + operation can be used to track the progress of updating the + instance. If the named instance configuration does not exist, + returns ``NOT_FOUND``. + + Only user-managed configurations can be updated. + + Immediately after the request returns: + + - The instance configuration's + [reconciling][google.spanner.admin.instance.v1.InstanceConfig.reconciling] + field is set to true. + + While the operation is pending: + + - Cancelling the operation sets its metadata's + [cancel_time][google.spanner.admin.instance.v1.UpdateInstanceConfigMetadata.cancel_time]. + The operation is guaranteed to succeed at undoing all changes, + after which point it terminates with a ``CANCELLED`` status. + - All other attempts to modify the instance configuration are + rejected. + - Reading the instance configuration via the API continues to + give the pre-request values. + + Upon completion of the returned operation: + + - Creating instances using the instance configuration uses the + new values. + - The new values of the instance configuration are readable via + the API. + - The instance configuration's + [reconciling][google.spanner.admin.instance.v1.InstanceConfig.reconciling] + field becomes false. + + The returned long-running operation will have a name of the + format ``/operations/`` and + can be used to track the instance configuration modification. + The metadata field type is + [UpdateInstanceConfigMetadata][google.spanner.admin.instance.v1.UpdateInstanceConfigMetadata]. + The response field type is + [InstanceConfig][google.spanner.admin.instance.v1.InstanceConfig], + if successful. + + Authorization requires ``spanner.instanceConfigs.update`` + permission on the resource + [name][google.spanner.admin.instance.v1.InstanceConfig.name]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + async def sample_update_instance_config(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.UpdateInstanceConfigRequest( + ) + + # Make the request + operation = client.update_instance_config(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_instance_v1.types.UpdateInstanceConfigRequest, dict]]): + The request object. The request for + [UpdateInstanceConfig][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstanceConfig]. + instance_config (:class:`google.cloud.spanner_admin_instance_v1.types.InstanceConfig`): + Required. The user instance configuration to update, + which must always include the instance configuration + name. Otherwise, only fields mentioned in + [update_mask][google.spanner.admin.instance.v1.UpdateInstanceConfigRequest.update_mask] + need be included. To prevent conflicts of concurrent + updates, + [etag][google.spanner.admin.instance.v1.InstanceConfig.reconciling] + can be used. + + This corresponds to the ``instance_config`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + update_mask (:class:`google.protobuf.field_mask_pb2.FieldMask`): + Required. A mask specifying which fields in + [InstanceConfig][google.spanner.admin.instance.v1.InstanceConfig] + should be updated. The field mask must always be + specified; this prevents any future fields in + [InstanceConfig][google.spanner.admin.instance.v1.InstanceConfig] + from being erased accidentally by clients that do not + know about them. Only display_name and labels can be + updated. + + This corresponds to the ``update_mask`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation_async.AsyncOperation: + An object representing a long-running operation. + + The result type for the operation will be :class:`google.cloud.spanner_admin_instance_v1.types.InstanceConfig` A possible configuration for a Cloud Spanner instance. Configurations + define the geographic placement of nodes and their + replication. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [instance_config, update_mask] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." 
+ ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.UpdateInstanceConfigRequest): + request = spanner_instance_admin.UpdateInstanceConfigRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if instance_config is not None: + request.instance_config = instance_config + if update_mask is not None: + request.update_mask = update_mask + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.update_instance_config + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata( + (("instance_config.name", request.instance_config.name),) + ), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation_async.from_gapic( + response, + self._client._transport.operations_client, + spanner_instance_admin.InstanceConfig, + metadata_type=spanner_instance_admin.UpdateInstanceConfigMetadata, + ) + + # Done; return the response. + return response + + async def delete_instance_config( + self, + request: Optional[ + Union[spanner_instance_admin.DeleteInstanceConfigRequest, dict] + ] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Deletes the instance configuration. Deletion is only allowed + when no instances are using the configuration. If any instances + are using the configuration, returns ``FAILED_PRECONDITION``. + + Only user-managed configurations can be deleted. + + Authorization requires ``spanner.instanceConfigs.delete`` + permission on the resource + [name][google.spanner.admin.instance.v1.InstanceConfig.name]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + async def sample_delete_instance_config(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.DeleteInstanceConfigRequest( + name="name_value", + ) + + # Make the request + await client.delete_instance_config(request=request) + + Args: + request (Optional[Union[google.cloud.spanner_admin_instance_v1.types.DeleteInstanceConfigRequest, dict]]): + The request object. The request for + [DeleteInstanceConfig][google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstanceConfig]. + name (:class:`str`): + Required. The name of the instance configuration to be + deleted. 
Values are of the form + ``projects//instanceConfigs/`` + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.DeleteInstanceConfigRequest): + request = spanner_instance_admin.DeleteInstanceConfigRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.delete_instance_config + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + async def list_instance_config_operations( + self, + request: Optional[ + Union[spanner_instance_admin.ListInstanceConfigOperationsRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListInstanceConfigOperationsAsyncPager: + r"""Lists the user-managed instance configuration long-running + operations in the given project. An instance configuration + operation has a name of the form + ``projects//instanceConfigs//operations/``. + The long-running operation metadata field type + ``metadata.type_url`` describes the type of the metadata. + Operations returned include those that have + completed/failed/canceled within the last 7 days, and pending + operations. Operations returned are ordered by + ``operation.metadata.value.start_time`` in descending order + starting from the most recently started operation. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + async def sample_list_instance_config_operations(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.ListInstanceConfigOperationsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_instance_config_operations(request=request) + + # Handle the response + async for response in page_result: + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigOperationsRequest, dict]]): + The request object. The request for + [ListInstanceConfigOperations][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigOperations]. + parent (:class:`str`): + Required. The project of the instance configuration + operations. Values are of the form + ``projects/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstanceConfigOperationsAsyncPager: + The response for + [ListInstanceConfigOperations][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigOperations]. + + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance( + request, spanner_instance_admin.ListInstanceConfigOperationsRequest + ): + request = spanner_instance_admin.ListInstanceConfigOperationsRequest( + request + ) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.list_instance_config_operations + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. 
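
Besides the ``async for`` over items shown in the sample above, the returned pager also exposes page-level iteration. A sketch, assuming an existing ``client`` in an async context and an illustrative project name:

.. code-block:: python

    # Walk results page by page instead of item by item.
    pager = await client.list_instance_config_operations(parent="projects/my-project")
    async for page in pager.pages:
        for op in page.operations:
            print(op.name)
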
+ response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__aiter__` convenience method. + response = pagers.ListInstanceConfigOperationsAsyncPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def list_instances( + self, + request: Optional[ + Union[spanner_instance_admin.ListInstancesRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListInstancesAsyncPager: + r"""Lists all instances in the given project. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + async def sample_list_instances(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.ListInstancesRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_instances(request=request) + + # Handle the response + async for response in page_result: + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_instance_v1.types.ListInstancesRequest, dict]]): + The request object. The request for + [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances]. + parent (:class:`str`): + Required. The name of the project for which a list of + instances is requested. Values are of the form + ``projects/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstancesAsyncPager: + The response for + [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances]. + + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." 
+ ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.ListInstancesRequest): + request = spanner_instance_admin.ListInstancesRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.list_instances + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__aiter__` convenience method. + response = pagers.ListInstancesAsyncPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def list_instance_partitions( + self, + request: Optional[ + Union[spanner_instance_admin.ListInstancePartitionsRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListInstancePartitionsAsyncPager: + r"""Lists all instance partitions for the given instance. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + async def sample_list_instance_partitions(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.ListInstancePartitionsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_instance_partitions(request=request) + + # Handle the response + async for response in page_result: + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionsRequest, dict]]): + The request object. The request for + [ListInstancePartitions][google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitions]. + parent (:class:`str`): + Required. The instance whose instance partitions should + be listed. Values are of the form + ``projects//instances/``. Use + ``{instance} = '-'`` to list instance partitions for all + Instances in a project, e.g., + ``projects/myproject/instances/-``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. 
+ metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstancePartitionsAsyncPager: + The response for + [ListInstancePartitions][google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitions]. + + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance( + request, spanner_instance_admin.ListInstancePartitionsRequest + ): + request = spanner_instance_admin.ListInstancePartitionsRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.list_instance_partitions + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__aiter__` convenience method. + response = pagers.ListInstancePartitionsAsyncPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def get_instance( + self, + request: Optional[ + Union[spanner_instance_admin.GetInstanceRequest, dict] + ] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_instance_admin.Instance: + r"""Gets information about a particular instance. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + async def sample_get_instance(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.GetInstanceRequest( + name="name_value", + ) + + # Make the request + response = await client.get_instance(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_instance_v1.types.GetInstanceRequest, dict]]): + The request object. The request for + [GetInstance][google.spanner.admin.instance.v1.InstanceAdmin.GetInstance]. + name (:class:`str`): + Required. The name of the requested instance. Values are + of the form ``projects//instances/``. + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_instance_v1.types.Instance: + An isolated set of Cloud Spanner + resources on which databases can be + hosted. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.GetInstanceRequest): + request = spanner_instance_admin.GetInstanceRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.get_instance + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. 
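+ # Unary, non-paged call: the Instance message is handed back to the
+ # caller as-is, with no pager or operation-future wrapping.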
+ return response
+
+ async def create_instance(
+ self,
+ request: Optional[
+ Union[spanner_instance_admin.CreateInstanceRequest, dict]
+ ] = None,
+ *,
+ parent: Optional[str] = None,
+ instance_id: Optional[str] = None,
+ instance: Optional[spanner_instance_admin.Instance] = None,
+ retry: OptionalRetry = gapic_v1.method.DEFAULT,
+ timeout: Union[float, object] = gapic_v1.method.DEFAULT,
+ metadata: Sequence[Tuple[str, Union[str, bytes]]] = (),
+ ) -> operation_async.AsyncOperation:
+ r"""Creates an instance and begins preparing it to begin serving.
+ The returned long-running operation can be used to track the
+ progress of preparing the new instance. The instance name is
+ assigned by the caller. If the named instance already exists,
+ ``CreateInstance`` returns ``ALREADY_EXISTS``.
+
+ Immediately upon completion of this request:
+
+ - The instance is readable via the API, with all requested
+ attributes but no allocated resources. Its state is
+ ``CREATING``.
+
+ Until completion of the returned operation:
+
+ - Cancelling the operation renders the instance immediately
+ unreadable via the API.
+ - The instance can be deleted.
+ - All other attempts to modify the instance are rejected.
+
+ Upon completion of the returned operation:
+
+ - Billing for all successfully-allocated resources begins (some
+ types may have lower than the requested levels).
+ - Databases can be created in the instance.
+ - The instance's allocated resource levels are readable via the
+ API.
+ - The instance's state becomes ``READY``.
+
+ The returned long-running operation will have a name of the
+ format ``<instance_name>/operations/<operation_id>`` and can be
+ used to track creation of the instance. The metadata field type
+ is
+ [CreateInstanceMetadata][google.spanner.admin.instance.v1.CreateInstanceMetadata].
+ The response field type is
+ [Instance][google.spanner.admin.instance.v1.Instance], if
+ successful.
+
+ .. code-block:: python
+
+ # This snippet has been automatically generated and should be regarded as a
+ # code template only.
+ # It will require modifications to work:
+ # - It may require correct/in-range values for request initialization.
+ # - It may require specifying regional endpoints when creating the service
+ # client as shown in:
+ # https://googleapis.dev/python/google-api-core/latest/client_options.html
+ from google.cloud import spanner_admin_instance_v1
+
+ async def sample_create_instance():
+ # Create a client
+ client = spanner_admin_instance_v1.InstanceAdminAsyncClient()
+
+ # Initialize request argument(s)
+ instance = spanner_admin_instance_v1.Instance()
+ instance.name = "name_value"
+ instance.config = "config_value"
+ instance.display_name = "display_name_value"
+
+ request = spanner_admin_instance_v1.CreateInstanceRequest(
+ parent="parent_value",
+ instance_id="instance_id_value",
+ instance=instance,
+ )
+
+ # Make the request
+ operation = client.create_instance(request=request)
+
+ print("Waiting for operation to complete...")
+
+ response = (await operation).result()
+
+ # Handle the response
+ print(response)
+
+ Args:
+ request (Optional[Union[google.cloud.spanner_admin_instance_v1.types.CreateInstanceRequest, dict]]):
+ The request object. The request for
+ [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance].
+ parent (:class:`str`):
+ Required. The name of the project in which to create the
+ instance. Values are of the form ``projects/<project>``.
+ + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + instance_id (:class:`str`): + Required. The ID of the instance to create. Valid + identifiers are of the form ``[a-z][-a-z0-9]*[a-z0-9]`` + and must be between 2 and 64 characters in length. + + This corresponds to the ``instance_id`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + instance (:class:`google.cloud.spanner_admin_instance_v1.types.Instance`): + Required. The instance to create. The name may be + omitted, but if specified must be + ``/instances/``. + + This corresponds to the ``instance`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation_async.AsyncOperation: + An object representing a long-running operation. + + The result type for the operation will be + :class:`google.cloud.spanner_admin_instance_v1.types.Instance` + An isolated set of Cloud Spanner resources on which + databases can be hosted. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent, instance_id, instance] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.CreateInstanceRequest): + request = spanner_instance_admin.CreateInstanceRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + if instance_id is not None: + request.instance_id = instance_id + if instance is not None: + request.instance = instance + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.create_instance + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation_async.from_gapic( + response, + self._client._transport.operations_client, + spanner_instance_admin.Instance, + metadata_type=spanner_instance_admin.CreateInstanceMetadata, + ) + + # Done; return the response. 
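+ # The caller receives an AsyncOperation future; awaiting its result()
+ # yields the Instance once it reaches ``READY`` (illustrative, assuming
+ # the coroutine returning the operation has already been awaited):
+ #     instance = await operation.result()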
+ return response + + async def update_instance( + self, + request: Optional[ + Union[spanner_instance_admin.UpdateInstanceRequest, dict] + ] = None, + *, + instance: Optional[spanner_instance_admin.Instance] = None, + field_mask: Optional[field_mask_pb2.FieldMask] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation_async.AsyncOperation: + r"""Updates an instance, and begins allocating or releasing + resources as requested. The returned long-running operation can + be used to track the progress of updating the instance. If the + named instance does not exist, returns ``NOT_FOUND``. + + Immediately upon completion of this request: + + - For resource types for which a decrease in the instance's + allocation has been requested, billing is based on the + newly-requested level. + + Until completion of the returned operation: + + - Cancelling the operation sets its metadata's + [cancel_time][google.spanner.admin.instance.v1.UpdateInstanceMetadata.cancel_time], + and begins restoring resources to their pre-request values. + The operation is guaranteed to succeed at undoing all resource + changes, after which point it terminates with a ``CANCELLED`` + status. + - All other attempts to modify the instance are rejected. + - Reading the instance via the API continues to give the + pre-request resource levels. + + Upon completion of the returned operation: + + - Billing begins for all successfully-allocated resources (some + types may have lower than the requested levels). + - All newly-reserved resources are available for serving the + instance's tables. + - The instance's new resource levels are readable via the API. + + The returned long-running operation will have a name of the + format ``/operations/`` and can be + used to track the instance modification. The metadata field type + is + [UpdateInstanceMetadata][google.spanner.admin.instance.v1.UpdateInstanceMetadata]. + The response field type is + [Instance][google.spanner.admin.instance.v1.Instance], if + successful. + + Authorization requires ``spanner.instances.update`` permission + on the resource + [name][google.spanner.admin.instance.v1.Instance.name]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + async def sample_update_instance(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + instance = spanner_admin_instance_v1.Instance() + instance.name = "name_value" + instance.config = "config_value" + instance.display_name = "display_name_value" + + request = spanner_admin_instance_v1.UpdateInstanceRequest( + instance=instance, + ) + + # Make the request + operation = client.update_instance(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_instance_v1.types.UpdateInstanceRequest, dict]]): + The request object. 
The request for + [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance]. + instance (:class:`google.cloud.spanner_admin_instance_v1.types.Instance`): + Required. The instance to update, which must always + include the instance name. Otherwise, only fields + mentioned in + [field_mask][google.spanner.admin.instance.v1.UpdateInstanceRequest.field_mask] + need be included. + + This corresponds to the ``instance`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + field_mask (:class:`google.protobuf.field_mask_pb2.FieldMask`): + Required. A mask specifying which fields in + [Instance][google.spanner.admin.instance.v1.Instance] + should be updated. The field mask must always be + specified; this prevents any future fields in + [Instance][google.spanner.admin.instance.v1.Instance] + from being erased accidentally by clients that do not + know about them. + + This corresponds to the ``field_mask`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation_async.AsyncOperation: + An object representing a long-running operation. + + The result type for the operation will be + :class:`google.cloud.spanner_admin_instance_v1.types.Instance` + An isolated set of Cloud Spanner resources on which + databases can be hosted. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [instance, field_mask] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.UpdateInstanceRequest): + request = spanner_instance_admin.UpdateInstanceRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if instance is not None: + request.instance = instance + if field_mask is not None: + request.field_mask = field_mask + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.update_instance + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata( + (("instance.name", request.instance.name),) + ), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. 
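+ # from_gapic pairs the raw Operation with the transport's operations
+ # client for polling, and registers Instance and UpdateInstanceMetadata
+ # as the types used to unpack the response and metadata fields.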
+ response = operation_async.from_gapic( + response, + self._client._transport.operations_client, + spanner_instance_admin.Instance, + metadata_type=spanner_instance_admin.UpdateInstanceMetadata, + ) + + # Done; return the response. + return response + + async def delete_instance( + self, + request: Optional[ + Union[spanner_instance_admin.DeleteInstanceRequest, dict] + ] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Deletes an instance. + + Immediately upon completion of the request: + + - Billing ceases for all of the instance's reserved resources. + + Soon afterward: + + - The instance and *all of its databases* immediately and + irrevocably disappear from the API. All data in the databases + is permanently deleted. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + async def sample_delete_instance(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.DeleteInstanceRequest( + name="name_value", + ) + + # Make the request + await client.delete_instance(request=request) + + Args: + request (Optional[Union[google.cloud.spanner_admin_instance_v1.types.DeleteInstanceRequest, dict]]): + The request object. The request for + [DeleteInstance][google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstance]. + name (:class:`str`): + Required. The name of the instance to be deleted. Values + are of the form + ``projects//instances/`` + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.DeleteInstanceRequest): + request = spanner_instance_admin.DeleteInstanceRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. 
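+ # Because of the mutual-exclusion check above, `name` can only be set
+ # here when no full request object was supplied.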
+ if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.delete_instance + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + async def set_iam_policy( + self, + request: Optional[Union[iam_policy_pb2.SetIamPolicyRequest, dict]] = None, + *, + resource: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> policy_pb2.Policy: + r"""Sets the access control policy on an instance resource. Replaces + any existing policy. + + Authorization requires ``spanner.instances.setIamPolicy`` on + [resource][google.iam.v1.SetIamPolicyRequest.resource]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + from google.iam.v1 import iam_policy_pb2 # type: ignore + + async def sample_set_iam_policy(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = iam_policy_pb2.SetIamPolicyRequest( + resource="resource_value", + ) + + # Make the request + response = await client.set_iam_policy(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.iam.v1.iam_policy_pb2.SetIamPolicyRequest, dict]]): + The request object. Request message for ``SetIamPolicy`` method. + resource (:class:`str`): + REQUIRED: The resource for which the + policy is being specified. See the + operation documentation for the + appropriate value for this field. + + This corresponds to the ``resource`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.iam.v1.policy_pb2.Policy: + An Identity and Access Management (IAM) policy, which specifies access + controls for Google Cloud resources. + + A Policy is a collection of bindings. A binding binds + one or more members, or principals, to a single role. + Principals can be user accounts, service accounts, + Google groups, and domains (such as G Suite). A role + is a named list of permissions; each role can be an + IAM predefined role or a user-created custom role. 
+ + For some types of Google Cloud resources, a binding + can also specify a condition, which is a logical + expression that allows access to a resource only if + the expression evaluates to true. A condition can add + constraints based on attributes of the request, the + resource, or both. To learn which resources support + conditions in their IAM policies, see the [IAM + documentation](https://cloud.google.com/iam/help/conditions/resource-policies). + + **JSON example:** + + :literal:`` { "bindings": [ { "role": "roles/resourcemanager.organizationAdmin", "members": [ "user:mike@example.com", "group:admins@example.com", "domain:google.com", "serviceAccount:my-project-id@appspot.gserviceaccount.com" ] }, { "role": "roles/resourcemanager.organizationViewer", "members": [ "user:eve@example.com" ], "condition": { "title": "expirable access", "description": "Does not grant access after Sep 2020", "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')", } } ], "etag": "BwWWja0YfJA=", "version": 3 }`\ \` + + **YAML example:** + + :literal:`` bindings: - members: - user:mike@example.com - group:admins@example.com - domain:google.com - serviceAccount:my-project-id@appspot.gserviceaccount.com role: roles/resourcemanager.organizationAdmin - members: - user:eve@example.com role: roles/resourcemanager.organizationViewer condition: title: expirable access description: Does not grant access after Sep 2020 expression: request.time < timestamp('2020-10-01T00:00:00.000Z') etag: BwWWja0YfJA= version: 3`\ \` + + For a description of IAM and its features, see the + [IAM + documentation](https://cloud.google.com/iam/docs/). + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [resource] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = iam_policy_pb2.SetIamPolicyRequest(**request) + elif not request: + request = iam_policy_pb2.SetIamPolicyRequest(resource=resource) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.set_iam_policy + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("resource", request.resource),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def get_iam_policy( + self, + request: Optional[Union[iam_policy_pb2.GetIamPolicyRequest, dict]] = None, + *, + resource: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> policy_pb2.Policy: + r"""Gets the access control policy for an instance resource. 
Returns + an empty policy if an instance exists but does not have a policy + set. + + Authorization requires ``spanner.instances.getIamPolicy`` on + [resource][google.iam.v1.GetIamPolicyRequest.resource]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + from google.iam.v1 import iam_policy_pb2 # type: ignore + + async def sample_get_iam_policy(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = iam_policy_pb2.GetIamPolicyRequest( + resource="resource_value", + ) + + # Make the request + response = await client.get_iam_policy(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.iam.v1.iam_policy_pb2.GetIamPolicyRequest, dict]]): + The request object. Request message for ``GetIamPolicy`` method. + resource (:class:`str`): + REQUIRED: The resource for which the + policy is being requested. See the + operation documentation for the + appropriate value for this field. + + This corresponds to the ``resource`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.iam.v1.policy_pb2.Policy: + An Identity and Access Management (IAM) policy, which specifies access + controls for Google Cloud resources. + + A Policy is a collection of bindings. A binding binds + one or more members, or principals, to a single role. + Principals can be user accounts, service accounts, + Google groups, and domains (such as G Suite). A role + is a named list of permissions; each role can be an + IAM predefined role or a user-created custom role. + + For some types of Google Cloud resources, a binding + can also specify a condition, which is a logical + expression that allows access to a resource only if + the expression evaluates to true. A condition can add + constraints based on attributes of the request, the + resource, or both. To learn which resources support + conditions in their IAM policies, see the [IAM + documentation](https://cloud.google.com/iam/help/conditions/resource-policies). 
+ + **JSON example:** + + :literal:`` { "bindings": [ { "role": "roles/resourcemanager.organizationAdmin", "members": [ "user:mike@example.com", "group:admins@example.com", "domain:google.com", "serviceAccount:my-project-id@appspot.gserviceaccount.com" ] }, { "role": "roles/resourcemanager.organizationViewer", "members": [ "user:eve@example.com" ], "condition": { "title": "expirable access", "description": "Does not grant access after Sep 2020", "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')", } } ], "etag": "BwWWja0YfJA=", "version": 3 }`\ \` + + **YAML example:** + + :literal:`` bindings: - members: - user:mike@example.com - group:admins@example.com - domain:google.com - serviceAccount:my-project-id@appspot.gserviceaccount.com role: roles/resourcemanager.organizationAdmin - members: - user:eve@example.com role: roles/resourcemanager.organizationViewer condition: title: expirable access description: Does not grant access after Sep 2020 expression: request.time < timestamp('2020-10-01T00:00:00.000Z') etag: BwWWja0YfJA= version: 3`\ \` + + For a description of IAM and its features, see the + [IAM + documentation](https://cloud.google.com/iam/docs/). + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [resource] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = iam_policy_pb2.GetIamPolicyRequest(**request) + elif not request: + request = iam_policy_pb2.GetIamPolicyRequest(resource=resource) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.get_iam_policy + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("resource", request.resource),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def test_iam_permissions( + self, + request: Optional[Union[iam_policy_pb2.TestIamPermissionsRequest, dict]] = None, + *, + resource: Optional[str] = None, + permissions: Optional[MutableSequence[str]] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> iam_policy_pb2.TestIamPermissionsResponse: + r"""Returns permissions that the caller has on the specified + instance resource. + + Attempting this RPC on a non-existent Cloud Spanner instance + resource will result in a NOT_FOUND error if the user has + ``spanner.instances.list`` permission on the containing Google + Cloud Project. Otherwise returns an empty set of permissions. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. 
+ # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + from google.iam.v1 import iam_policy_pb2 # type: ignore + + async def sample_test_iam_permissions(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = iam_policy_pb2.TestIamPermissionsRequest( + resource="resource_value", + permissions=['permissions_value1', 'permissions_value2'], + ) + + # Make the request + response = await client.test_iam_permissions(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.iam.v1.iam_policy_pb2.TestIamPermissionsRequest, dict]]): + The request object. Request message for ``TestIamPermissions`` method. + resource (:class:`str`): + REQUIRED: The resource for which the + policy detail is being requested. See + the operation documentation for the + appropriate value for this field. + + This corresponds to the ``resource`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + permissions (:class:`MutableSequence[str]`): + The set of permissions to check for the ``resource``. + Permissions with wildcards (such as '*' or 'storage.*') + are not allowed. For more information see `IAM + Overview `__. + + This corresponds to the ``permissions`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.iam.v1.iam_policy_pb2.TestIamPermissionsResponse: + Response message for TestIamPermissions method. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [resource, permissions] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = iam_policy_pb2.TestIamPermissionsRequest(**request) + elif not request: + request = iam_policy_pb2.TestIamPermissionsRequest( + resource=resource, permissions=permissions + ) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.test_iam_permissions + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("resource", request.resource),)), + ) + + # Validate the universe domain. 
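+ # Raises before any network call if the client's configured universe
+ # domain conflicts with the one implied by the credentials.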
+ self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def get_instance_partition( + self, + request: Optional[ + Union[spanner_instance_admin.GetInstancePartitionRequest, dict] + ] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_instance_admin.InstancePartition: + r"""Gets information about a particular instance + partition. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + async def sample_get_instance_partition(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.GetInstancePartitionRequest( + name="name_value", + ) + + # Make the request + response = await client.get_instance_partition(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_instance_v1.types.GetInstancePartitionRequest, dict]]): + The request object. The request for + [GetInstancePartition][google.spanner.admin.instance.v1.InstanceAdmin.GetInstancePartition]. + name (:class:`str`): + Required. The name of the requested instance partition. + Values are of the form + ``projects/{project}/instances/{instance}/instancePartitions/{instance_partition}``. + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_instance_v1.types.InstancePartition: + An isolated set of Cloud Spanner + resources that databases can define + placements on. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. 
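+ # A plain dict is accepted in place of GetInstancePartitionRequest and
+ # is coerced into the proto-plus request type here.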
+ if not isinstance(request, spanner_instance_admin.GetInstancePartitionRequest): + request = spanner_instance_admin.GetInstancePartitionRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.get_instance_partition + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def create_instance_partition( + self, + request: Optional[ + Union[spanner_instance_admin.CreateInstancePartitionRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + instance_partition: Optional[spanner_instance_admin.InstancePartition] = None, + instance_partition_id: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation_async.AsyncOperation: + r"""Creates an instance partition and begins preparing it to be + used. The returned long-running operation can be used to track + the progress of preparing the new instance partition. The + instance partition name is assigned by the caller. If the named + instance partition already exists, ``CreateInstancePartition`` + returns ``ALREADY_EXISTS``. + + Immediately upon completion of this request: + + - The instance partition is readable via the API, with all + requested attributes but no allocated resources. Its state is + ``CREATING``. + + Until completion of the returned operation: + + - Cancelling the operation renders the instance partition + immediately unreadable via the API. + - The instance partition can be deleted. + - All other attempts to modify the instance partition are + rejected. + + Upon completion of the returned operation: + + - Billing for all successfully-allocated resources begins (some + types may have lower than the requested levels). + - Databases can start using this instance partition. + - The instance partition's allocated resource levels are + readable via the API. + - The instance partition's state becomes ``READY``. + + The returned long-running operation will have a name of the + format ``/operations/`` + and can be used to track creation of the instance partition. The + metadata field type is + [CreateInstancePartitionMetadata][google.spanner.admin.instance.v1.CreateInstancePartitionMetadata]. + The response field type is + [InstancePartition][google.spanner.admin.instance.v1.InstancePartition], + if successful. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + async def sample_create_instance_partition(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + instance_partition = spanner_admin_instance_v1.InstancePartition() + instance_partition.node_count = 1070 + instance_partition.name = "name_value" + instance_partition.config = "config_value" + instance_partition.display_name = "display_name_value" + + request = spanner_admin_instance_v1.CreateInstancePartitionRequest( + parent="parent_value", + instance_partition_id="instance_partition_id_value", + instance_partition=instance_partition, + ) + + # Make the request + operation = client.create_instance_partition(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_instance_v1.types.CreateInstancePartitionRequest, dict]]): + The request object. The request for + [CreateInstancePartition][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstancePartition]. + parent (:class:`str`): + Required. The name of the instance in which to create + the instance partition. Values are of the form + ``projects//instances/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + instance_partition (:class:`google.cloud.spanner_admin_instance_v1.types.InstancePartition`): + Required. The instance partition to create. The + instance_partition.name may be omitted, but if specified + must be + ``/instancePartitions/``. + + This corresponds to the ``instance_partition`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + instance_partition_id (:class:`str`): + Required. The ID of the instance partition to create. + Valid identifiers are of the form + ``[a-z][-a-z0-9]*[a-z0-9]`` and must be between 2 and 64 + characters in length. + + This corresponds to the ``instance_partition_id`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation_async.AsyncOperation: + An object representing a long-running operation. + + The result type for the operation will be :class:`google.cloud.spanner_admin_instance_v1.types.InstancePartition` An isolated set of Cloud Spanner resources that databases can define + placements on. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. 
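+ # `request` and the flattened fields (`parent`, `instance_partition`,
+ # `instance_partition_id`) are mutually exclusive ways to build the call.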
+ flattened_params = [parent, instance_partition, instance_partition_id] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance( + request, spanner_instance_admin.CreateInstancePartitionRequest + ): + request = spanner_instance_admin.CreateInstancePartitionRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + if instance_partition is not None: + request.instance_partition = instance_partition + if instance_partition_id is not None: + request.instance_partition_id = instance_partition_id + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.create_instance_partition + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation_async.from_gapic( + response, + self._client._transport.operations_client, + spanner_instance_admin.InstancePartition, + metadata_type=spanner_instance_admin.CreateInstancePartitionMetadata, + ) + + # Done; return the response. + return response + + async def delete_instance_partition( + self, + request: Optional[ + Union[spanner_instance_admin.DeleteInstancePartitionRequest, dict] + ] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Deletes an existing instance partition. Requires that the + instance partition is not used by any database or backup and is + not the default instance partition of an instance. + + Authorization requires ``spanner.instancePartitions.delete`` + permission on the resource + [name][google.spanner.admin.instance.v1.InstancePartition.name]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + async def sample_delete_instance_partition(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.DeleteInstancePartitionRequest( + name="name_value", + ) + + # Make the request + await client.delete_instance_partition(request=request) + + Args: + request (Optional[Union[google.cloud.spanner_admin_instance_v1.types.DeleteInstancePartitionRequest, dict]]): + The request object. The request for + [DeleteInstancePartition][google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstancePartition]. + name (:class:`str`): + Required. The name of the instance partition to be + deleted. Values are of the form + ``projects/{project}/instances/{instance}/instancePartitions/{instance_partition}`` + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance( + request, spanner_instance_admin.DeleteInstancePartitionRequest + ): + request = spanner_instance_admin.DeleteInstancePartitionRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.delete_instance_partition + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. 
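+        # Note: ``DeleteInstancePartition`` returns ``google.protobuf.Empty``,
+        # so there is no response to unwrap or return to the caller.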
+        await rpc(
+            request,
+            retry=retry,
+            timeout=timeout,
+            metadata=metadata,
+        )
+
+    async def update_instance_partition(
+        self,
+        request: Optional[
+            Union[spanner_instance_admin.UpdateInstancePartitionRequest, dict]
+        ] = None,
+        *,
+        instance_partition: Optional[spanner_instance_admin.InstancePartition] = None,
+        field_mask: Optional[field_mask_pb2.FieldMask] = None,
+        retry: OptionalRetry = gapic_v1.method.DEFAULT,
+        timeout: Union[float, object] = gapic_v1.method.DEFAULT,
+        metadata: Sequence[Tuple[str, Union[str, bytes]]] = (),
+    ) -> operation_async.AsyncOperation:
+        r"""Updates an instance partition, and begins allocating or
+        releasing resources as requested. The returned long-running
+        operation can be used to track the progress of updating the
+        instance partition. If the named instance partition does not
+        exist, returns ``NOT_FOUND``.
+
+        Immediately upon completion of this request:
+
+        -  For resource types for which a decrease in the instance
+           partition's allocation has been requested, billing is based on
+           the newly-requested level.
+
+        Until completion of the returned operation:
+
+        -  Cancelling the operation sets its metadata's
+           [cancel_time][google.spanner.admin.instance.v1.UpdateInstancePartitionMetadata.cancel_time],
+           and begins restoring resources to their pre-request values.
+           The operation is guaranteed to succeed at undoing all resource
+           changes, after which point it terminates with a ``CANCELLED``
+           status.
+        -  All other attempts to modify the instance partition are
+           rejected.
+        -  Reading the instance partition via the API continues to give
+           the pre-request resource levels.
+
+        Upon completion of the returned operation:
+
+        -  Billing begins for all successfully-allocated resources (some
+           types may have lower than the requested levels).
+        -  All newly-reserved resources are available for serving the
+           instance partition's tables.
+        -  The instance partition's new resource levels are readable via
+           the API.
+
+        The returned long-running operation will have a name of the
+        format ``<instance_partition_name>/operations/<operation_id>``
+        and can be used to track the instance partition modification.
+        The metadata field type is
+        [UpdateInstancePartitionMetadata][google.spanner.admin.instance.v1.UpdateInstancePartitionMetadata].
+        The response field type is
+        [InstancePartition][google.spanner.admin.instance.v1.InstancePartition],
+        if successful.
+
+        Authorization requires ``spanner.instancePartitions.update``
+        permission on the resource
+        [name][google.spanner.admin.instance.v1.InstancePartition.name].
+
+        .. code-block:: python
+
+            # This snippet has been automatically generated and should be regarded as a
+            # code template only.
+            # It will require modifications to work:
+            # - It may require correct/in-range values for request initialization.
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + async def sample_update_instance_partition(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + instance_partition = spanner_admin_instance_v1.InstancePartition() + instance_partition.node_count = 1070 + instance_partition.name = "name_value" + instance_partition.config = "config_value" + instance_partition.display_name = "display_name_value" + + request = spanner_admin_instance_v1.UpdateInstancePartitionRequest( + instance_partition=instance_partition, + ) + + # Make the request + operation = client.update_instance_partition(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_instance_v1.types.UpdateInstancePartitionRequest, dict]]): + The request object. The request for + [UpdateInstancePartition][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstancePartition]. + instance_partition (:class:`google.cloud.spanner_admin_instance_v1.types.InstancePartition`): + Required. The instance partition to update, which must + always include the instance partition name. Otherwise, + only fields mentioned in + [field_mask][google.spanner.admin.instance.v1.UpdateInstancePartitionRequest.field_mask] + need be included. + + This corresponds to the ``instance_partition`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + field_mask (:class:`google.protobuf.field_mask_pb2.FieldMask`): + Required. A mask specifying which fields in + [InstancePartition][google.spanner.admin.instance.v1.InstancePartition] + should be updated. The field mask must always be + specified; this prevents any future fields in + [InstancePartition][google.spanner.admin.instance.v1.InstancePartition] + from being erased accidentally by clients that do not + know about them. + + This corresponds to the ``field_mask`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation_async.AsyncOperation: + An object representing a long-running operation. + + The result type for the operation will be :class:`google.cloud.spanner_admin_instance_v1.types.InstancePartition` An isolated set of Cloud Spanner resources that databases can define + placements on. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. 
+        flattened_params = [instance_partition, field_mask]
+        has_flattened_params = (
+            len([param for param in flattened_params if param is not None]) > 0
+        )
+        if request is not None and has_flattened_params:
+            raise ValueError(
+                "If the `request` argument is set, then none of "
+                "the individual field arguments should be set."
+            )
+
+        # - Use the request object if provided (there's no risk of modifying the input as
+        #   there are no flattened fields), or create one.
+        if not isinstance(
+            request, spanner_instance_admin.UpdateInstancePartitionRequest
+        ):
+            request = spanner_instance_admin.UpdateInstancePartitionRequest(request)
+
+        # If we have keyword arguments corresponding to fields on the
+        # request, apply these.
+        if instance_partition is not None:
+            request.instance_partition = instance_partition
+        if field_mask is not None:
+            request.field_mask = field_mask
+
+        # Wrap the RPC method; this adds retry and timeout information,
+        # and friendly error handling.
+        rpc = self._client._transport._wrapped_methods[
+            self._client._transport.update_instance_partition
+        ]
+
+        # Certain fields should be provided within the metadata header;
+        # add these here.
+        metadata = tuple(metadata) + (
+            gapic_v1.routing_header.to_grpc_metadata(
+                (("instance_partition.name", request.instance_partition.name),)
+            ),
+        )
+
+        # Validate the universe domain.
+        self._client._validate_universe_domain()
+
+        # Send the request.
+        response = await rpc(
+            request,
+            retry=retry,
+            timeout=timeout,
+            metadata=metadata,
+        )
+
+        # Wrap the response in an operation future.
+        response = operation_async.from_gapic(
+            response,
+            self._client._transport.operations_client,
+            spanner_instance_admin.InstancePartition,
+            metadata_type=spanner_instance_admin.UpdateInstancePartitionMetadata,
+        )
+
+        # Done; return the response.
+        return response
+
+    async def list_instance_partition_operations(
+        self,
+        request: Optional[
+            Union[spanner_instance_admin.ListInstancePartitionOperationsRequest, dict]
+        ] = None,
+        *,
+        parent: Optional[str] = None,
+        retry: OptionalRetry = gapic_v1.method.DEFAULT,
+        timeout: Union[float, object] = gapic_v1.method.DEFAULT,
+        metadata: Sequence[Tuple[str, Union[str, bytes]]] = (),
+    ) -> pagers.ListInstancePartitionOperationsAsyncPager:
+        r"""Lists instance partition long-running operations in the given
+        instance. An instance partition operation has a name of the form
+        ``projects/<project>/instances/<instance>/instancePartitions/<instance_partition>/operations/<operation>``.
+        The long-running operation metadata field type
+        ``metadata.type_url`` describes the type of the metadata.
+        Operations returned include those that have
+        completed/failed/canceled within the last 7 days, and pending
+        operations. Operations returned are ordered by
+        ``operation.metadata.value.start_time`` in descending order
+        starting from the most recently started operation.
+
+        Authorization requires
+        ``spanner.instancePartitionOperations.list`` permission on the
+        resource
+        [parent][google.spanner.admin.instance.v1.ListInstancePartitionOperationsRequest.parent].
+
+        .. code-block:: python
+
+            # This snippet has been automatically generated and should be regarded as a
+            # code template only.
+            # It will require modifications to work:
+            # - It may require correct/in-range values for request initialization.
+            # - It may require specifying regional endpoints when creating the service
+            #   client as shown in:
+            #   https://googleapis.dev/python/google-api-core/latest/client_options.html
+            from google.cloud import spanner_admin_instance_v1
+
+            async def sample_list_instance_partition_operations():
+                # Create a client
+                client = spanner_admin_instance_v1.InstanceAdminAsyncClient()
+
+                # Initialize request argument(s)
+                request = spanner_admin_instance_v1.ListInstancePartitionOperationsRequest(
+                    parent="parent_value",
+                )
+
+                # Make the request
+                page_result = client.list_instance_partition_operations(request=request)
+
+                # Handle the response
+                async for response in page_result:
+                    print(response)
+
+        Args:
+            request (Optional[Union[google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionOperationsRequest, dict]]):
+                The request object. The request for
+                [ListInstancePartitionOperations][google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitionOperations].
+            parent (:class:`str`):
+                Required. The parent instance of the instance partition
+                operations. Values are of the form
+                ``projects/<project>/instances/<instance>``.
+
+                This corresponds to the ``parent`` field
+                on the ``request`` instance; if ``request`` is provided, this
+                should not be set.
+            retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any,
+                should be retried.
+            timeout (float): The timeout for this request.
+            metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be
+                sent along with the request as metadata. Normally, each value must be of type `str`,
+                but for metadata keys ending with the suffix `-bin`, the corresponding values must
+                be of type `bytes`.
+
+        Returns:
+            google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstancePartitionOperationsAsyncPager:
+                The response for
+                [ListInstancePartitionOperations][google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitionOperations].
+
+                Iterating over this object will yield results and
+                resolve additional pages automatically.
+
+        """
+        # Create or coerce a protobuf request object.
+        # - Quick check: If we got a request object, we should *not* have
+        #   gotten any keyword arguments that map to the request.
+        flattened_params = [parent]
+        has_flattened_params = (
+            len([param for param in flattened_params if param is not None]) > 0
+        )
+        if request is not None and has_flattened_params:
+            raise ValueError(
+                "If the `request` argument is set, then none of "
+                "the individual field arguments should be set."
+            )
+
+        # - Use the request object if provided (there's no risk of modifying the input as
+        #   there are no flattened fields), or create one.
+        if not isinstance(
+            request, spanner_instance_admin.ListInstancePartitionOperationsRequest
+        ):
+            request = spanner_instance_admin.ListInstancePartitionOperationsRequest(
+                request
+            )
+
+        # If we have keyword arguments corresponding to fields on the
+        # request, apply these.
+        if parent is not None:
+            request.parent = parent
+
+        # Wrap the RPC method; this adds retry and timeout information,
+        # and friendly error handling.
+        rpc = self._client._transport._wrapped_methods[
+            self._client._transport.list_instance_partition_operations
+        ]
+
+        # Certain fields should be provided within the metadata header;
+        # add these here.
+        metadata = tuple(metadata) + (
+            gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
+        )
+
+        # Validate the universe domain.
+        self._client._validate_universe_domain()
+
+        # Send the request.
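+        # Note: this call returns only the first page of results; the pager
+        # wrapper added below fetches any subsequent pages lazily, on iteration.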
+        response = await rpc(
+            request,
+            retry=retry,
+            timeout=timeout,
+            metadata=metadata,
+        )
+
+        # This method is paged; wrap the response in a pager, which provides
+        # an `__aiter__` convenience method.
+        response = pagers.ListInstancePartitionOperationsAsyncPager(
+            method=rpc,
+            request=request,
+            response=response,
+            retry=retry,
+            timeout=timeout,
+            metadata=metadata,
+        )
+
+        # Done; return the response.
+        return response
+
+    async def move_instance(
+        self,
+        request: Optional[
+            Union[spanner_instance_admin.MoveInstanceRequest, dict]
+        ] = None,
+        *,
+        retry: OptionalRetry = gapic_v1.method.DEFAULT,
+        timeout: Union[float, object] = gapic_v1.method.DEFAULT,
+        metadata: Sequence[Tuple[str, Union[str, bytes]]] = (),
+    ) -> operation_async.AsyncOperation:
+        r"""Moves an instance to the target instance configuration. You can
+        use the returned long-running operation to track the progress of
+        moving the instance.
+
+        ``MoveInstance`` returns ``FAILED_PRECONDITION`` if the instance
+        meets any of the following criteria:
+
+        -  Is undergoing a move to a different instance configuration
+        -  Has backups
+        -  Has an ongoing update
+        -  Contains any CMEK-enabled databases
+        -  Is a free trial instance
+
+        While the operation is pending:
+
+        -  All other attempts to modify the instance, including changes
+           to its compute capacity, are rejected.
+
+        -  The following database and backup admin operations are
+           rejected:
+
+           -  ``DatabaseAdmin.CreateDatabase``
+           -  ``DatabaseAdmin.UpdateDatabaseDdl`` (disabled if
+              default_leader is specified in the request.)
+           -  ``DatabaseAdmin.RestoreDatabase``
+           -  ``DatabaseAdmin.CreateBackup``
+           -  ``DatabaseAdmin.CopyBackup``
+
+        -  Both the source and target instance configurations are subject
+           to hourly compute and storage charges.
+
+        -  The instance might experience higher read-write latencies and
+           a higher transaction abort rate. However, moving an instance
+           doesn't cause any downtime.
+
+        The returned long-running operation has a name of the format
+        ``<instance_name>/operations/<operation_id>`` and can be used to
+        track the move instance operation. The metadata field type is
+        [MoveInstanceMetadata][google.spanner.admin.instance.v1.MoveInstanceMetadata].
+        The response field type is
+        [Instance][google.spanner.admin.instance.v1.Instance], if
+        successful. Cancelling the operation sets its metadata's
+        [cancel_time][google.spanner.admin.instance.v1.MoveInstanceMetadata.cancel_time].
+        Cancellation is not immediate because it involves moving any
+        data previously moved to the target instance configuration back
+        to the original instance configuration. You can use this
+        operation to track the progress of the cancellation. Upon
+        successful completion of the cancellation, the operation
+        terminates with ``CANCELLED`` status.
+
+        If not cancelled, upon completion of the returned operation:
+
+        -  The instance successfully moves to the target instance
+           configuration.
+        -  You are billed for compute and storage in target instance
+           configuration.
+
+        Authorization requires the ``spanner.instances.update``
+        permission on the resource
+        [instance][google.spanner.admin.instance.v1.Instance].
+
+        For more details, see `Move an
+        instance <https://cloud.google.com/spanner/docs/move-instance>`__.
+
+        .. code-block:: python
+
+            # This snippet has been automatically generated and should be regarded as a
+            # code template only.
+            # It will require modifications to work:
+            # - It may require correct/in-range values for request initialization.
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + async def sample_move_instance(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.MoveInstanceRequest( + name="name_value", + target_config="target_config_value", + ) + + # Make the request + operation = client.move_instance(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_admin_instance_v1.types.MoveInstanceRequest, dict]]): + The request object. The request for + [MoveInstance][google.spanner.admin.instance.v1.InstanceAdmin.MoveInstance]. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation_async.AsyncOperation: + An object representing a long-running operation. + + The result type for the operation will be :class:`google.cloud.spanner_admin_instance_v1.types.MoveInstanceResponse` The response for + [MoveInstance][google.spanner.admin.instance.v1.InstanceAdmin.MoveInstance]. + + """ + # Create or coerce a protobuf request object. + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.MoveInstanceRequest): + request = spanner_instance_admin.MoveInstanceRequest(request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.move_instance + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation_async.from_gapic( + response, + self._client._transport.operations_client, + spanner_instance_admin.MoveInstanceResponse, + metadata_type=spanner_instance_admin.MoveInstanceMetadata, + ) + + # Done; return the response. + return response + + async def list_operations( + self, + request: Optional[operations_pb2.ListOperationsRequest] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.ListOperationsResponse: + r"""Lists operations that match the specified filter in the request. + + Args: + request (:class:`~.operations_pb2.ListOperationsRequest`): + The request object. Request message for + `ListOperations` method. 
+ retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + Returns: + ~.operations_pb2.ListOperationsResponse: + Response message for ``ListOperations`` method. + """ + # Create or coerce a protobuf request object. + # The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = operations_pb2.ListOperationsRequest(**request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self.transport._wrapped_methods[self._client._transport.list_operations] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def get_operation( + self, + request: Optional[operations_pb2.GetOperationRequest] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.Operation: + r"""Gets the latest state of a long-running operation. + + Args: + request (:class:`~.operations_pb2.GetOperationRequest`): + The request object. Request message for + `GetOperation` method. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + Returns: + ~.operations_pb2.Operation: + An ``Operation`` object. + """ + # Create or coerce a protobuf request object. + # The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = operations_pb2.GetOperationRequest(**request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self.transport._wrapped_methods[self._client._transport.get_operation] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. 
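+        # (The response is the raw ``operations_pb2.Operation`` message, not an
+        # operation future; callers poll or await its completion themselves.)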
+ return response + + async def delete_operation( + self, + request: Optional[operations_pb2.DeleteOperationRequest] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Deletes a long-running operation. + + This method indicates that the client is no longer interested + in the operation result. It does not cancel the operation. + If the server doesn't support this method, it returns + `google.rpc.Code.UNIMPLEMENTED`. + + Args: + request (:class:`~.operations_pb2.DeleteOperationRequest`): + The request object. Request message for + `DeleteOperation` method. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + Returns: + None + """ + # Create or coerce a protobuf request object. + # The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = operations_pb2.DeleteOperationRequest(**request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self.transport._wrapped_methods[self._client._transport.delete_operation] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + async def cancel_operation( + self, + request: Optional[operations_pb2.CancelOperationRequest] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Starts asynchronous cancellation on a long-running operation. + + The server makes a best effort to cancel the operation, but success + is not guaranteed. If the server doesn't support this method, it returns + `google.rpc.Code.UNIMPLEMENTED`. + + Args: + request (:class:`~.operations_pb2.CancelOperationRequest`): + The request object. Request message for + `CancelOperation` method. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + Returns: + None + """ + # Create or coerce a protobuf request object. + # The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = operations_pb2.CancelOperationRequest(**request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. 
+ rpc = self.transport._wrapped_methods[self._client._transport.cancel_operation] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + async def __aenter__(self) -> "InstanceAdminAsyncClient": + return self + + async def __aexit__(self, exc_type, exc, tb): + await self.transport.close() + + +DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=package_version.__version__ +) + +if hasattr(DEFAULT_CLIENT_INFO, "protobuf_runtime_version"): # pragma: NO COVER + DEFAULT_CLIENT_INFO.protobuf_runtime_version = google.protobuf.__version__ + + +__all__ = ("InstanceAdminAsyncClient",) diff --git a/google/cloud/spanner_admin_instance_v1/services/instance_admin/client.py b/google/cloud/spanner_admin_instance_v1/services/instance_admin/client.py new file mode 100644 index 0000000000..c0fe398c3a --- /dev/null +++ b/google/cloud/spanner_admin_instance_v1/services/instance_admin/client.py @@ -0,0 +1,4111 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
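+#
+# A minimal usage sketch (hypothetical project name, assuming Application
+# Default Credentials are configured). The synchronous InstanceAdminClient
+# defined in this module mirrors the InstanceAdminAsyncClient above:
+#
+#     from google.cloud import spanner_admin_instance_v1
+#
+#     client = spanner_admin_instance_v1.InstanceAdminClient()
+#     for config in client.list_instance_configs(parent="projects/my-project"):
+#         print(config.name)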
+# +from collections import OrderedDict +from http import HTTPStatus +import json +import logging as std_logging +import os +import re +from typing import ( + Dict, + Callable, + Mapping, + MutableMapping, + MutableSequence, + Optional, + Sequence, + Tuple, + Type, + Union, + cast, +) +import uuid +import warnings + +from google.cloud.spanner_admin_instance_v1 import gapic_version as package_version + +from google.api_core import client_options as client_options_lib +from google.api_core import exceptions as core_exceptions +from google.api_core import gapic_v1 +from google.api_core import retry as retries +from google.auth import credentials as ga_credentials # type: ignore +from google.auth.transport import mtls # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore +from google.auth.exceptions import MutualTLSChannelError # type: ignore +from google.oauth2 import service_account # type: ignore +import google.protobuf + +try: + OptionalRetry = Union[retries.Retry, gapic_v1.method._MethodDefault, None] +except AttributeError: # pragma: NO COVER + OptionalRetry = Union[retries.Retry, object, None] # type: ignore + +try: + from google.api_core import client_logging # type: ignore + + CLIENT_LOGGING_SUPPORTED = True # pragma: NO COVER +except ImportError: # pragma: NO COVER + CLIENT_LOGGING_SUPPORTED = False + +_LOGGER = std_logging.getLogger(__name__) + +from google.api_core import operation # type: ignore +from google.api_core import operation_async # type: ignore +from google.cloud.spanner_admin_instance_v1.services.instance_admin import pagers +from google.cloud.spanner_admin_instance_v1.types import spanner_instance_admin +from google.iam.v1 import iam_policy_pb2 # type: ignore +from google.iam.v1 import policy_pb2 # type: ignore +from google.longrunning import operations_pb2 # type: ignore +from google.longrunning import operations_pb2 # type: ignore +from google.protobuf import field_mask_pb2 # type: ignore +from google.protobuf import timestamp_pb2 # type: ignore +from .transports.base import InstanceAdminTransport, DEFAULT_CLIENT_INFO +from .transports.grpc import InstanceAdminGrpcTransport +from .transports.grpc_asyncio import InstanceAdminGrpcAsyncIOTransport +from .transports.rest import InstanceAdminRestTransport + + +class InstanceAdminClientMeta(type): + """Metaclass for the InstanceAdmin client. + + This provides class-level methods for building and retrieving + support objects (e.g. transport) without polluting the client instance + objects. + """ + + _transport_registry = OrderedDict() # type: Dict[str, Type[InstanceAdminTransport]] + _transport_registry["grpc"] = InstanceAdminGrpcTransport + _transport_registry["grpc_asyncio"] = InstanceAdminGrpcAsyncIOTransport + _transport_registry["rest"] = InstanceAdminRestTransport + + def get_transport_class( + cls, + label: Optional[str] = None, + ) -> Type[InstanceAdminTransport]: + """Returns an appropriate transport class. + + Args: + label: The name of the desired transport. If none is + provided, then the first transport in the registry is used. + + Returns: + The transport class to use. + """ + # If a specific transport is requested, return that one. + if label: + return cls._transport_registry[label] + + # No transport is requested; return the default (that is, the first one + # in the dictionary). 
+        return next(iter(cls._transport_registry.values()))
+
+
+class InstanceAdminClient(metaclass=InstanceAdminClientMeta):
+    """Cloud Spanner Instance Admin API
+
+    The Cloud Spanner Instance Admin API can be used to create,
+    delete, modify and list instances. Instances are dedicated Cloud
+    Spanner serving and storage resources to be used by Cloud
+    Spanner databases.
+
+    Each instance has a "configuration", which dictates where the
+    serving resources for the Cloud Spanner instance are located
+    (e.g., US-central, Europe). Configurations are created by Google
+    based on resource availability.
+
+    Cloud Spanner billing is based on the instances that exist and
+    their sizes. After an instance exists, there are no additional
+    per-database or per-operation charges for use of the instance
+    (though there may be additional network bandwidth charges).
+    Instances offer isolation: problems with databases in one
+    instance will not affect other instances. However, within an
+    instance databases can affect each other. For example, if one
+    database in an instance receives a lot of requests and consumes
+    most of the instance resources, fewer resources are available
+    for other databases in that instance, and their performance may
+    suffer.
+    """
+
+    @staticmethod
+    def _get_default_mtls_endpoint(api_endpoint):
+        """Converts api endpoint to mTLS endpoint.
+
+        Convert "*.sandbox.googleapis.com" and "*.googleapis.com" to
+        "*.mtls.sandbox.googleapis.com" and "*.mtls.googleapis.com" respectively.
+        Args:
+            api_endpoint (Optional[str]): the api endpoint to convert.
+        Returns:
+            str: converted mTLS api endpoint.
+        """
+        if not api_endpoint:
+            return api_endpoint
+
+        mtls_endpoint_re = re.compile(
+            r"(?P<name>[^.]+)(?P<mtls>\.mtls)?(?P<sandbox>\.sandbox)?(?P<googledomain>\.googleapis\.com)?"
+        )
+
+        m = mtls_endpoint_re.match(api_endpoint)
+        name, mtls, sandbox, googledomain = m.groups()
+        if mtls or not googledomain:
+            return api_endpoint
+
+        if sandbox:
+            return api_endpoint.replace(
+                "sandbox.googleapis.com", "mtls.sandbox.googleapis.com"
+            )
+
+        return api_endpoint.replace(".googleapis.com", ".mtls.googleapis.com")
+
+    # Note: DEFAULT_ENDPOINT is deprecated. Use _DEFAULT_ENDPOINT_TEMPLATE instead.
+    DEFAULT_ENDPOINT = "spanner.googleapis.com"
+    DEFAULT_MTLS_ENDPOINT = _get_default_mtls_endpoint.__func__(  # type: ignore
+        DEFAULT_ENDPOINT
+    )
+
+    _DEFAULT_ENDPOINT_TEMPLATE = "spanner.{UNIVERSE_DOMAIN}"
+    _DEFAULT_UNIVERSE = "googleapis.com"
+
+    @classmethod
+    def from_service_account_info(cls, info: dict, *args, **kwargs):
+        """Creates an instance of this client using the provided credentials
+        info.
+
+        Args:
+            info (dict): The service account private key info.
+            args: Additional arguments to pass to the constructor.
+            kwargs: Additional arguments to pass to the constructor.
+
+        Returns:
+            InstanceAdminClient: The constructed client.
+        """
+        credentials = service_account.Credentials.from_service_account_info(info)
+        kwargs["credentials"] = credentials
+        return cls(*args, **kwargs)
+
+    @classmethod
+    def from_service_account_file(cls, filename: str, *args, **kwargs):
+        """Creates an instance of this client using the provided credentials
+        file.
+
+        Args:
+            filename (str): The path to the service account private key json
+                file.
+            args: Additional arguments to pass to the constructor.
+            kwargs: Additional arguments to pass to the constructor.
+
+        Returns:
+            InstanceAdminClient: The constructed client.
+        """
+        credentials = service_account.Credentials.from_service_account_file(filename)
+        kwargs["credentials"] = credentials
+        return cls(*args, **kwargs)
+
+    from_service_account_json = from_service_account_file
+
+    @property
+    def transport(self) -> InstanceAdminTransport:
+        """Returns the transport used by the client instance.
+
+        Returns:
+            InstanceAdminTransport: The transport used by the client
+                instance.
+        """
+        return self._transport
+
+    @staticmethod
+    def instance_path(
+        project: str,
+        instance: str,
+    ) -> str:
+        """Returns a fully-qualified instance string."""
+        return "projects/{project}/instances/{instance}".format(
+            project=project,
+            instance=instance,
+        )
+
+    @staticmethod
+    def parse_instance_path(path: str) -> Dict[str, str]:
+        """Parses an instance path into its component segments."""
+        m = re.match(r"^projects/(?P<project>.+?)/instances/(?P<instance>.+?)$", path)
+        return m.groupdict() if m else {}
+
+    @staticmethod
+    def instance_config_path(
+        project: str,
+        instance_config: str,
+    ) -> str:
+        """Returns a fully-qualified instance_config string."""
+        return "projects/{project}/instanceConfigs/{instance_config}".format(
+            project=project,
+            instance_config=instance_config,
+        )
+
+    @staticmethod
+    def parse_instance_config_path(path: str) -> Dict[str, str]:
+        """Parses an instance_config path into its component segments."""
+        m = re.match(
+            r"^projects/(?P<project>.+?)/instanceConfigs/(?P<instance_config>.+?)$",
+            path,
+        )
+        return m.groupdict() if m else {}
+
+    @staticmethod
+    def instance_partition_path(
+        project: str,
+        instance: str,
+        instance_partition: str,
+    ) -> str:
+        """Returns a fully-qualified instance_partition string."""
+        return "projects/{project}/instances/{instance}/instancePartitions/{instance_partition}".format(
+            project=project,
+            instance=instance,
+            instance_partition=instance_partition,
+        )
+
+    @staticmethod
+    def parse_instance_partition_path(path: str) -> Dict[str, str]:
+        """Parses an instance_partition path into its component segments."""
+        m = re.match(
+            r"^projects/(?P<project>.+?)/instances/(?P<instance>.+?)/instancePartitions/(?P<instance_partition>.+?)$",
+            path,
+        )
+        return m.groupdict() if m else {}
+
+    @staticmethod
+    def common_billing_account_path(
+        billing_account: str,
+    ) -> str:
+        """Returns a fully-qualified billing_account string."""
+        return "billingAccounts/{billing_account}".format(
+            billing_account=billing_account,
+        )
+
+    @staticmethod
+    def parse_common_billing_account_path(path: str) -> Dict[str, str]:
+        """Parse a billing_account path into its component segments."""
+        m = re.match(r"^billingAccounts/(?P<billing_account>.+?)$", path)
+        return m.groupdict() if m else {}
+
+    @staticmethod
+    def common_folder_path(
+        folder: str,
+    ) -> str:
+        """Returns a fully-qualified folder string."""
+        return "folders/{folder}".format(
+            folder=folder,
+        )
+
+    @staticmethod
+    def parse_common_folder_path(path: str) -> Dict[str, str]:
+        """Parse a folder path into its component segments."""
+        m = re.match(r"^folders/(?P<folder>.+?)$", path)
+        return m.groupdict() if m else {}
+
+    @staticmethod
+    def common_organization_path(
+        organization: str,
+    ) -> str:
+        """Returns a fully-qualified organization string."""
+        return "organizations/{organization}".format(
+            organization=organization,
+        )
+
+    @staticmethod
+    def parse_common_organization_path(path: str) -> Dict[str, str]:
+        """Parse an organization path into its component segments."""
+        m = re.match(r"^organizations/(?P<organization>.+?)$", path)
+        return m.groupdict() if m else {}
+
+    @staticmethod
+    def common_project_path(
+        project: str,
+    ) -> str:
+        """Returns a fully-qualified project string."""
+        return "projects/{project}".format(
+            project=project,
+        )
+
+    @staticmethod
+    def parse_common_project_path(path: str) -> Dict[str, str]:
+        """Parse a project path into its component segments."""
+        m = re.match(r"^projects/(?P<project>.+?)$", path)
+        return m.groupdict() if m else {}
+
+    @staticmethod
+    def common_location_path(
+        project: str,
+        location: str,
+    ) -> str:
+        """Returns a fully-qualified location string."""
+        return "projects/{project}/locations/{location}".format(
+            project=project,
+            location=location,
+        )
+
+    @staticmethod
+    def parse_common_location_path(path: str) -> Dict[str, str]:
+        """Parse a location path into its component segments."""
+        m = re.match(r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)$", path)
+        return m.groupdict() if m else {}
+
+    @classmethod
+    def get_mtls_endpoint_and_cert_source(
+        cls, client_options: Optional[client_options_lib.ClientOptions] = None
+    ):
+        """Deprecated. Return the API endpoint and client cert source for mutual TLS.
+
+        The client cert source is determined in the following order:
+        (1) if `GOOGLE_API_USE_CLIENT_CERTIFICATE` environment variable is not "true", the
+        client cert source is None.
+        (2) if `client_options.client_cert_source` is provided, use the provided one; if the
+        default client cert source exists, use the default one; otherwise the client cert
+        source is None.
+
+        The API endpoint is determined in the following order:
+        (1) if `client_options.api_endpoint` is provided, use the provided one.
+        (2) if the `GOOGLE_API_USE_MTLS_ENDPOINT` environment variable is "always", use the
+        default mTLS endpoint; if the environment variable is "never", use the default API
+        endpoint; otherwise if client cert source exists, use the default mTLS endpoint, otherwise
+        use the default API endpoint.
+
+        More details can be found at https://google.aip.dev/auth/4114.
+
+        Args:
+            client_options (google.api_core.client_options.ClientOptions): Custom options for the
+                client. Only the `api_endpoint` and `client_cert_source` properties may be used
+                in this method.
+
+        Returns:
+            Tuple[str, Callable[[], Tuple[bytes, bytes]]]: returns the API endpoint and the
+                client cert source to use.
+
+        Raises:
+            google.auth.exceptions.MutualTLSChannelError: If any errors happen.
+        """
+
+        warnings.warn(
+            "get_mtls_endpoint_and_cert_source is deprecated. Use the api_endpoint property instead.",
+            DeprecationWarning,
+        )
+        if client_options is None:
+            client_options = client_options_lib.ClientOptions()
+        use_client_cert = os.getenv("GOOGLE_API_USE_CLIENT_CERTIFICATE", "false")
+        use_mtls_endpoint = os.getenv("GOOGLE_API_USE_MTLS_ENDPOINT", "auto")
+        if use_client_cert not in ("true", "false"):
+            raise ValueError(
+                "Environment variable `GOOGLE_API_USE_CLIENT_CERTIFICATE` must be either `true` or `false`"
+            )
+        if use_mtls_endpoint not in ("auto", "never", "always"):
+            raise MutualTLSChannelError(
+                "Environment variable `GOOGLE_API_USE_MTLS_ENDPOINT` must be `never`, `auto` or `always`"
+            )
+
+        # Figure out the client cert source to use.
+        client_cert_source = None
+        if use_client_cert == "true":
+            if client_options.client_cert_source:
+                client_cert_source = client_options.client_cert_source
+            elif mtls.has_default_client_cert_source():
+                client_cert_source = mtls.default_client_cert_source()
+
+        # Figure out which api endpoint to use.
+ if client_options.api_endpoint is not None: + api_endpoint = client_options.api_endpoint + elif use_mtls_endpoint == "always" or ( + use_mtls_endpoint == "auto" and client_cert_source + ): + api_endpoint = cls.DEFAULT_MTLS_ENDPOINT + else: + api_endpoint = cls.DEFAULT_ENDPOINT + + return api_endpoint, client_cert_source + + @staticmethod + def _read_environment_variables(): + """Returns the environment variables used by the client. + + Returns: + Tuple[bool, str, str]: returns the GOOGLE_API_USE_CLIENT_CERTIFICATE, + GOOGLE_API_USE_MTLS_ENDPOINT, and GOOGLE_CLOUD_UNIVERSE_DOMAIN environment variables. + + Raises: + ValueError: If GOOGLE_API_USE_CLIENT_CERTIFICATE is not + any of ["true", "false"]. + google.auth.exceptions.MutualTLSChannelError: If GOOGLE_API_USE_MTLS_ENDPOINT + is not any of ["auto", "never", "always"]. + """ + use_client_cert = os.getenv( + "GOOGLE_API_USE_CLIENT_CERTIFICATE", "false" + ).lower() + use_mtls_endpoint = os.getenv("GOOGLE_API_USE_MTLS_ENDPOINT", "auto").lower() + universe_domain_env = os.getenv("GOOGLE_CLOUD_UNIVERSE_DOMAIN") + if use_client_cert not in ("true", "false"): + raise ValueError( + "Environment variable `GOOGLE_API_USE_CLIENT_CERTIFICATE` must be either `true` or `false`" + ) + if use_mtls_endpoint not in ("auto", "never", "always"): + raise MutualTLSChannelError( + "Environment variable `GOOGLE_API_USE_MTLS_ENDPOINT` must be `never`, `auto` or `always`" + ) + return use_client_cert == "true", use_mtls_endpoint, universe_domain_env + + @staticmethod + def _get_client_cert_source(provided_cert_source, use_cert_flag): + """Return the client cert source to be used by the client. + + Args: + provided_cert_source (bytes): The client certificate source provided. + use_cert_flag (bool): A flag indicating whether to use the client certificate. + + Returns: + bytes or None: The client cert source to be used by the client. + """ + client_cert_source = None + if use_cert_flag: + if provided_cert_source: + client_cert_source = provided_cert_source + elif mtls.has_default_client_cert_source(): + client_cert_source = mtls.default_client_cert_source() + return client_cert_source + + @staticmethod + def _get_api_endpoint( + api_override, client_cert_source, universe_domain, use_mtls_endpoint + ): + """Return the API endpoint used by the client. + + Args: + api_override (str): The API endpoint override. If specified, this is always + the return value of this function and the other arguments are not used. + client_cert_source (bytes): The client certificate source used by the client. + universe_domain (str): The universe domain used by the client. + use_mtls_endpoint (str): How to use the mTLS endpoint, which depends also on the other parameters. + Possible values are "always", "auto", or "never". + + Returns: + str: The API endpoint to be used by the client. + """ + if api_override is not None: + api_endpoint = api_override + elif use_mtls_endpoint == "always" or ( + use_mtls_endpoint == "auto" and client_cert_source + ): + _default_universe = InstanceAdminClient._DEFAULT_UNIVERSE + if universe_domain != _default_universe: + raise MutualTLSChannelError( + f"mTLS is not supported in any universe other than {_default_universe}." 
+ ) + api_endpoint = InstanceAdminClient.DEFAULT_MTLS_ENDPOINT + else: + api_endpoint = InstanceAdminClient._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=universe_domain + ) + return api_endpoint + + @staticmethod + def _get_universe_domain( + client_universe_domain: Optional[str], universe_domain_env: Optional[str] + ) -> str: + """Return the universe domain used by the client. + + Args: + client_universe_domain (Optional[str]): The universe domain configured via the client options. + universe_domain_env (Optional[str]): The universe domain configured via the "GOOGLE_CLOUD_UNIVERSE_DOMAIN" environment variable. + + Returns: + str: The universe domain to be used by the client. + + Raises: + ValueError: If the universe domain is an empty string. + """ + universe_domain = InstanceAdminClient._DEFAULT_UNIVERSE + if client_universe_domain is not None: + universe_domain = client_universe_domain + elif universe_domain_env is not None: + universe_domain = universe_domain_env + if len(universe_domain.strip()) == 0: + raise ValueError("Universe Domain cannot be an empty string.") + return universe_domain + + def _validate_universe_domain(self): + """Validates client's and credentials' universe domains are consistent. + + Returns: + bool: True iff the configured universe domain is valid. + + Raises: + ValueError: If the configured universe domain is not valid. + """ + + # NOTE (b/349488459): universe validation is disabled until further notice. + return True + + def _add_cred_info_for_auth_errors( + self, error: core_exceptions.GoogleAPICallError + ) -> None: + """Adds credential info string to error details for 401/403/404 errors. + + Args: + error (google.api_core.exceptions.GoogleAPICallError): The error to add the cred info. + """ + if error.code not in [ + HTTPStatus.UNAUTHORIZED, + HTTPStatus.FORBIDDEN, + HTTPStatus.NOT_FOUND, + ]: + return + + cred = self._transport._credentials + + # get_cred_info is only available in google-auth>=2.35.0 + if not hasattr(cred, "get_cred_info"): + return + + # ignore the type check since pypy test fails when get_cred_info + # is not available + cred_info = cred.get_cred_info() # type: ignore + if cred_info and hasattr(error._details, "append"): + error._details.append(json.dumps(cred_info)) + + @property + def api_endpoint(self): + """Return the API endpoint used by the client instance. + + Returns: + str: The API endpoint used by the client instance. + """ + return self._api_endpoint + + @property + def universe_domain(self) -> str: + """Return the universe domain used by the client instance. + + Returns: + str: The universe domain used by the client instance. + """ + return self._universe_domain + + def __init__( + self, + *, + credentials: Optional[ga_credentials.Credentials] = None, + transport: Optional[ + Union[str, InstanceAdminTransport, Callable[..., InstanceAdminTransport]] + ] = None, + client_options: Optional[Union[client_options_lib.ClientOptions, dict]] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiates the instance admin client. + + Args: + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. 
+ transport (Optional[Union[str,InstanceAdminTransport,Callable[..., InstanceAdminTransport]]]): + The transport to use, or a Callable that constructs and returns a new transport. + If a Callable is given, it will be called with the same set of initialization + arguments as used in the InstanceAdminTransport constructor. + If set to None, a transport is chosen automatically. + client_options (Optional[Union[google.api_core.client_options.ClientOptions, dict]]): + Custom options for the client. + + 1. The ``api_endpoint`` property can be used to override the + default endpoint provided by the client when ``transport`` is + not explicitly provided. Only if this property is not set and + ``transport`` was not explicitly provided, the endpoint is + determined by the GOOGLE_API_USE_MTLS_ENDPOINT environment + variable, which have one of the following values: + "always" (always use the default mTLS endpoint), "never" (always + use the default regular endpoint) and "auto" (auto-switch to the + default mTLS endpoint if client certificate is present; this is + the default value). + + 2. If the GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable + is "true", then the ``client_cert_source`` property can be used + to provide a client certificate for mTLS transport. If + not provided, the default SSL client certificate will be used if + present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not + set, no client certificate will be used. + + 3. The ``universe_domain`` property can be used to override the + default "googleapis.com" universe. Note that the ``api_endpoint`` + property still takes precedence; and ``universe_domain`` is + currently not supported for mTLS. + + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport + creation failed for any reason. + """ + self._client_options = client_options + if isinstance(self._client_options, dict): + self._client_options = client_options_lib.from_dict(self._client_options) + if self._client_options is None: + self._client_options = client_options_lib.ClientOptions() + self._client_options = cast( + client_options_lib.ClientOptions, self._client_options + ) + + universe_domain_opt = getattr(self._client_options, "universe_domain", None) + + ( + self._use_client_cert, + self._use_mtls_endpoint, + self._universe_domain_env, + ) = InstanceAdminClient._read_environment_variables() + self._client_cert_source = InstanceAdminClient._get_client_cert_source( + self._client_options.client_cert_source, self._use_client_cert + ) + self._universe_domain = InstanceAdminClient._get_universe_domain( + universe_domain_opt, self._universe_domain_env + ) + self._api_endpoint = None # updated below, depending on `transport` + + # Initialize the universe domain validation. + self._is_universe_domain_valid = False + + if CLIENT_LOGGING_SUPPORTED: # pragma: NO COVER + # Setup logging. + client_logging.initialize_logging() + + api_key_value = getattr(self._client_options, "api_key", None) + if api_key_value and credentials: + raise ValueError( + "client_options.api_key and credentials are mutually exclusive" + ) + + # Save or instantiate the transport. 
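+        # ``transport`` may be None, a registry label (e.g. "grpc" or "rest"), an
+        # InstanceAdminTransport instance, or a callable returning one; each case
+        # is handled below.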
+ # Ordinarily, we provide the transport, but allowing a custom transport
+ # instance provides an extensibility point for unusual situations.
+ transport_provided = isinstance(transport, InstanceAdminTransport)
+ if transport_provided:
+ # transport is an InstanceAdminTransport instance.
+ if credentials or self._client_options.credentials_file or api_key_value:
+ raise ValueError(
+ "When providing a transport instance, "
+ "provide its credentials directly."
+ )
+ if self._client_options.scopes:
+ raise ValueError(
+ "When providing a transport instance, provide its scopes "
+ "directly."
+ )
+ self._transport = cast(InstanceAdminTransport, transport)
+ self._api_endpoint = self._transport.host
+
+ self._api_endpoint = (
+ self._api_endpoint
+ or InstanceAdminClient._get_api_endpoint(
+ self._client_options.api_endpoint,
+ self._client_cert_source,
+ self._universe_domain,
+ self._use_mtls_endpoint,
+ )
+ )
+
+ if not transport_provided:
+ import google.auth._default # type: ignore
+
+ if api_key_value and hasattr(
+ google.auth._default, "get_api_key_credentials"
+ ):
+ credentials = google.auth._default.get_api_key_credentials(
+ api_key_value
+ )
+
+ transport_init: Union[
+ Type[InstanceAdminTransport], Callable[..., InstanceAdminTransport]
+ ] = (
+ InstanceAdminClient.get_transport_class(transport)
+ if isinstance(transport, str) or transport is None
+ else cast(Callable[..., InstanceAdminTransport], transport)
+ )
+ # initialize with the provided callable or the passed-in class
+ self._transport = transport_init(
+ credentials=credentials,
+ credentials_file=self._client_options.credentials_file,
+ host=self._api_endpoint,
+ scopes=self._client_options.scopes,
+ client_cert_source_for_mtls=self._client_cert_source,
+ quota_project_id=self._client_options.quota_project_id,
+ client_info=client_info,
+ always_use_jwt_access=True,
+ api_audience=self._client_options.api_audience,
+ )
+
+ if "async" not in str(self._transport):
+ if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor(
+ std_logging.DEBUG
+ ): # pragma: NO COVER
+ _LOGGER.debug(
+ "Created client `google.spanner.admin.instance_v1.InstanceAdminClient`.",
+ extra={
+ "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin",
+ "universeDomain": getattr(
+ self._transport._credentials, "universe_domain", ""
+ ),
+ "credentialsType": f"{type(self._transport._credentials).__module__}.{type(self._transport._credentials).__qualname__}",
+ "credentialsInfo": getattr(
+ self.transport._credentials, "get_cred_info", lambda: None
+ )(),
+ }
+ if hasattr(self._transport, "_credentials")
+ else {
+ "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin",
+ "credentialsType": None,
+ },
+ )
+
+ def list_instance_configs(
+ self,
+ request: Optional[
+ Union[spanner_instance_admin.ListInstanceConfigsRequest, dict]
+ ] = None,
+ *,
+ parent: Optional[str] = None,
+ retry: OptionalRetry = gapic_v1.method.DEFAULT,
+ timeout: Union[float, object] = gapic_v1.method.DEFAULT,
+ metadata: Sequence[Tuple[str, Union[str, bytes]]] = (),
+ ) -> pagers.ListInstanceConfigsPager:
+ r"""Lists the supported instance configurations for a
+ given project.
+ Returns both Google-managed configurations and
+ user-managed configurations.
+
+ .. code-block:: python
+
+ # This snippet has been automatically generated and should be regarded as a
+ # code template only.
+ # It will require modifications to work:
+ # - It may require correct/in-range values for request initialization.
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + def sample_list_instance_configs(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.ListInstanceConfigsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_instance_configs(request=request) + + # Handle the response + for response in page_result: + print(response) + + Args: + request (Union[google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsRequest, dict]): + The request object. The request for + [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs]. + parent (str): + Required. The name of the project for which a list of + supported instance configurations is requested. Values + are of the form ``projects/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstanceConfigsPager: + The response for + [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs]. + + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.ListInstanceConfigsRequest): + request = spanner_instance_admin.ListInstanceConfigsRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.list_instance_configs] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__iter__` convenience method. 
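+ # A minimal iteration sketch (``project_id`` is a placeholder):
+ # consuming the pager below lazily fetches any further pages over the wire:
+ #
+ #     for config in client.list_instance_configs(
+ #         parent=f"projects/{project_id}"
+ #     ):
+ #         print(config.name)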
+ response = pagers.ListInstanceConfigsPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def get_instance_config( + self, + request: Optional[ + Union[spanner_instance_admin.GetInstanceConfigRequest, dict] + ] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_instance_admin.InstanceConfig: + r"""Gets information about a particular instance + configuration. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + def sample_get_instance_config(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.GetInstanceConfigRequest( + name="name_value", + ) + + # Make the request + response = client.get_instance_config(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_instance_v1.types.GetInstanceConfigRequest, dict]): + The request object. The request for + [GetInstanceConfigRequest][google.spanner.admin.instance.v1.InstanceAdmin.GetInstanceConfig]. + name (str): + Required. The name of the requested instance + configuration. Values are of the form + ``projects//instanceConfigs/``. + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_instance_v1.types.InstanceConfig: + A possible configuration for a Cloud + Spanner instance. Configurations define + the geographic placement of nodes and + their replication. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.GetInstanceConfigRequest): + request = spanner_instance_admin.GetInstanceConfigRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. 
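+ # Because of the dict-to-proto coercion above, a plain dict also works
+ # as the request; a sketch with a placeholder configuration name:
+ #
+ #     config = client.get_instance_config(
+ #         request={"name": "projects/my-project/instanceConfigs/regional-us-central1"}
+ #     )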
+ if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.get_instance_config] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def create_instance_config( + self, + request: Optional[ + Union[spanner_instance_admin.CreateInstanceConfigRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + instance_config: Optional[spanner_instance_admin.InstanceConfig] = None, + instance_config_id: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation.Operation: + r"""Creates an instance configuration and begins preparing it to be + used. The returned long-running operation can be used to track + the progress of preparing the new instance configuration. The + instance configuration name is assigned by the caller. If the + named instance configuration already exists, + ``CreateInstanceConfig`` returns ``ALREADY_EXISTS``. + + Immediately after the request returns: + + - The instance configuration is readable via the API, with all + requested attributes. The instance configuration's + [reconciling][google.spanner.admin.instance.v1.InstanceConfig.reconciling] + field is set to true. Its state is ``CREATING``. + + While the operation is pending: + + - Cancelling the operation renders the instance configuration + immediately unreadable via the API. + - Except for deleting the creating resource, all other attempts + to modify the instance configuration are rejected. + + Upon completion of the returned operation: + + - Instances can be created using the instance configuration. + - The instance configuration's + [reconciling][google.spanner.admin.instance.v1.InstanceConfig.reconciling] + field becomes false. Its state becomes ``READY``. + + The returned long-running operation will have a name of the + format ``/operations/`` and + can be used to track creation of the instance configuration. The + metadata field type is + [CreateInstanceConfigMetadata][google.spanner.admin.instance.v1.CreateInstanceConfigMetadata]. + The response field type is + [InstanceConfig][google.spanner.admin.instance.v1.InstanceConfig], + if successful. + + Authorization requires ``spanner.instanceConfigs.create`` + permission on the resource + [parent][google.spanner.admin.instance.v1.CreateInstanceConfigRequest.parent]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + def sample_create_instance_config(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.CreateInstanceConfigRequest( + parent="parent_value", + instance_config_id="instance_config_id_value", + ) + + # Make the request + operation = client.create_instance_config(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_instance_v1.types.CreateInstanceConfigRequest, dict]): + The request object. The request for + [CreateInstanceConfig][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstanceConfig]. + parent (str): + Required. The name of the project in which to create the + instance configuration. Values are of the form + ``projects/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + instance_config (google.cloud.spanner_admin_instance_v1.types.InstanceConfig): + Required. The ``InstanceConfig`` proto of the + configuration to create. ``instance_config.name`` must + be ``/instanceConfigs/``. + ``instance_config.base_config`` must be a Google-managed + configuration name, e.g. /instanceConfigs/us-east1, + /instanceConfigs/nam3. + + This corresponds to the ``instance_config`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + instance_config_id (str): + Required. The ID of the instance configuration to + create. Valid identifiers are of the form + ``custom-[-a-z0-9]*[a-z0-9]`` and must be between 2 and + 64 characters in length. The ``custom-`` prefix is + required to avoid name conflicts with Google-managed + configurations. + + This corresponds to the ``instance_config_id`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation.Operation: + An object representing a long-running operation. + + The result type for the operation will be :class:`google.cloud.spanner_admin_instance_v1.types.InstanceConfig` A possible configuration for a Cloud Spanner instance. Configurations + define the geographic placement of nodes and their + replication. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent, instance_config, instance_config_id] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." 
+ ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.CreateInstanceConfigRequest): + request = spanner_instance_admin.CreateInstanceConfigRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + if instance_config is not None: + request.instance_config = instance_config + if instance_config_id is not None: + request.instance_config_id = instance_config_id + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.create_instance_config] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation.from_gapic( + response, + self._transport.operations_client, + spanner_instance_admin.InstanceConfig, + metadata_type=spanner_instance_admin.CreateInstanceConfigMetadata, + ) + + # Done; return the response. + return response + + def update_instance_config( + self, + request: Optional[ + Union[spanner_instance_admin.UpdateInstanceConfigRequest, dict] + ] = None, + *, + instance_config: Optional[spanner_instance_admin.InstanceConfig] = None, + update_mask: Optional[field_mask_pb2.FieldMask] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation.Operation: + r"""Updates an instance configuration. The returned long-running + operation can be used to track the progress of updating the + instance. If the named instance configuration does not exist, + returns ``NOT_FOUND``. + + Only user-managed configurations can be updated. + + Immediately after the request returns: + + - The instance configuration's + [reconciling][google.spanner.admin.instance.v1.InstanceConfig.reconciling] + field is set to true. + + While the operation is pending: + + - Cancelling the operation sets its metadata's + [cancel_time][google.spanner.admin.instance.v1.UpdateInstanceConfigMetadata.cancel_time]. + The operation is guaranteed to succeed at undoing all changes, + after which point it terminates with a ``CANCELLED`` status. + - All other attempts to modify the instance configuration are + rejected. + - Reading the instance configuration via the API continues to + give the pre-request values. + + Upon completion of the returned operation: + + - Creating instances using the instance configuration uses the + new values. + - The new values of the instance configuration are readable via + the API. + - The instance configuration's + [reconciling][google.spanner.admin.instance.v1.InstanceConfig.reconciling] + field becomes false. + + The returned long-running operation will have a name of the + format ``/operations/`` and + can be used to track the instance configuration modification. + The metadata field type is + [UpdateInstanceConfigMetadata][google.spanner.admin.instance.v1.UpdateInstanceConfigMetadata]. 
+ The response field type is + [InstanceConfig][google.spanner.admin.instance.v1.InstanceConfig], + if successful. + + Authorization requires ``spanner.instanceConfigs.update`` + permission on the resource + [name][google.spanner.admin.instance.v1.InstanceConfig.name]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + def sample_update_instance_config(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.UpdateInstanceConfigRequest( + ) + + # Make the request + operation = client.update_instance_config(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_instance_v1.types.UpdateInstanceConfigRequest, dict]): + The request object. The request for + [UpdateInstanceConfig][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstanceConfig]. + instance_config (google.cloud.spanner_admin_instance_v1.types.InstanceConfig): + Required. The user instance configuration to update, + which must always include the instance configuration + name. Otherwise, only fields mentioned in + [update_mask][google.spanner.admin.instance.v1.UpdateInstanceConfigRequest.update_mask] + need be included. To prevent conflicts of concurrent + updates, + [etag][google.spanner.admin.instance.v1.InstanceConfig.reconciling] + can be used. + + This corresponds to the ``instance_config`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + update_mask (google.protobuf.field_mask_pb2.FieldMask): + Required. A mask specifying which fields in + [InstanceConfig][google.spanner.admin.instance.v1.InstanceConfig] + should be updated. The field mask must always be + specified; this prevents any future fields in + [InstanceConfig][google.spanner.admin.instance.v1.InstanceConfig] + from being erased accidentally by clients that do not + know about them. Only display_name and labels can be + updated. + + This corresponds to the ``update_mask`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation.Operation: + An object representing a long-running operation. + + The result type for the operation will be :class:`google.cloud.spanner_admin_instance_v1.types.InstanceConfig` A possible configuration for a Cloud Spanner instance. Configurations + define the geographic placement of nodes and their + replication. + + """ + # Create or coerce a protobuf request object. 
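+ # A call sketch (``my_config`` is a placeholder InstanceConfig): since
+ # only display_name and labels are mutable, the mask names just those paths:
+ #
+ #     from google.protobuf import field_mask_pb2
+ #
+ #     operation = client.update_instance_config(
+ #         instance_config=my_config,
+ #         update_mask=field_mask_pb2.FieldMask(paths=["display_name"]),
+ #     )
+ #     operation.result()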
+ # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [instance_config, update_mask] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.UpdateInstanceConfigRequest): + request = spanner_instance_admin.UpdateInstanceConfigRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if instance_config is not None: + request.instance_config = instance_config + if update_mask is not None: + request.update_mask = update_mask + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.update_instance_config] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata( + (("instance_config.name", request.instance_config.name),) + ), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation.from_gapic( + response, + self._transport.operations_client, + spanner_instance_admin.InstanceConfig, + metadata_type=spanner_instance_admin.UpdateInstanceConfigMetadata, + ) + + # Done; return the response. + return response + + def delete_instance_config( + self, + request: Optional[ + Union[spanner_instance_admin.DeleteInstanceConfigRequest, dict] + ] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Deletes the instance configuration. Deletion is only allowed + when no instances are using the configuration. If any instances + are using the configuration, returns ``FAILED_PRECONDITION``. + + Only user-managed configurations can be deleted. + + Authorization requires ``spanner.instanceConfigs.delete`` + permission on the resource + [name][google.spanner.admin.instance.v1.InstanceConfig.name]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + def sample_delete_instance_config(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.DeleteInstanceConfigRequest( + name="name_value", + ) + + # Make the request + client.delete_instance_config(request=request) + + Args: + request (Union[google.cloud.spanner_admin_instance_v1.types.DeleteInstanceConfigRequest, dict]): + The request object. The request for + [DeleteInstanceConfig][google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstanceConfig]. + name (str): + Required. The name of the instance configuration to be + deleted. Values are of the form + ``projects//instanceConfigs/`` + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.DeleteInstanceConfigRequest): + request = spanner_instance_admin.DeleteInstanceConfigRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.delete_instance_config] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + def list_instance_config_operations( + self, + request: Optional[ + Union[spanner_instance_admin.ListInstanceConfigOperationsRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListInstanceConfigOperationsPager: + r"""Lists the user-managed instance configuration long-running + operations in the given project. 
An instance configuration + operation has a name of the form + ``projects//instanceConfigs//operations/``. + The long-running operation metadata field type + ``metadata.type_url`` describes the type of the metadata. + Operations returned include those that have + completed/failed/canceled within the last 7 days, and pending + operations. Operations returned are ordered by + ``operation.metadata.value.start_time`` in descending order + starting from the most recently started operation. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + def sample_list_instance_config_operations(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.ListInstanceConfigOperationsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_instance_config_operations(request=request) + + # Handle the response + for response in page_result: + print(response) + + Args: + request (Union[google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigOperationsRequest, dict]): + The request object. The request for + [ListInstanceConfigOperations][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigOperations]. + parent (str): + Required. The project of the instance configuration + operations. Values are of the form + ``projects/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstanceConfigOperationsPager: + The response for + [ListInstanceConfigOperations][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigOperations]. + + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. 
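+ # Narrowing sketch, assuming the request message's ``filter`` field
+ # (placeholder project name): restrict the listing to create operations:
+ #
+ #     request = spanner_instance_admin.ListInstanceConfigOperationsRequest(
+ #         parent="projects/my-project",
+ #         filter="(metadata.@type=type.googleapis.com/google.spanner.admin.instance.v1.CreateInstanceConfigMetadata)",
+ #     )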
+ if not isinstance( + request, spanner_instance_admin.ListInstanceConfigOperationsRequest + ): + request = spanner_instance_admin.ListInstanceConfigOperationsRequest( + request + ) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[ + self._transport.list_instance_config_operations + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__iter__` convenience method. + response = pagers.ListInstanceConfigOperationsPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def list_instances( + self, + request: Optional[ + Union[spanner_instance_admin.ListInstancesRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListInstancesPager: + r"""Lists all instances in the given project. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + def sample_list_instances(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.ListInstancesRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_instances(request=request) + + # Handle the response + for response in page_result: + print(response) + + Args: + request (Union[google.cloud.spanner_admin_instance_v1.types.ListInstancesRequest, dict]): + The request object. The request for + [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances]. + parent (str): + Required. The name of the project for which a list of + instances is requested. Values are of the form + ``projects/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ + Returns: + google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstancesPager: + The response for + [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances]. + + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.ListInstancesRequest): + request = spanner_instance_admin.ListInstancesRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.list_instances] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__iter__` convenience method. + response = pagers.ListInstancesPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def list_instance_partitions( + self, + request: Optional[ + Union[spanner_instance_admin.ListInstancePartitionsRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListInstancePartitionsPager: + r"""Lists all instance partitions for the given instance. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + def sample_list_instance_partitions(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.ListInstancePartitionsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_instance_partitions(request=request) + + # Handle the response + for response in page_result: + print(response) + + Args: + request (Union[google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionsRequest, dict]): + The request object. The request for + [ListInstancePartitions][google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitions]. + parent (str): + Required. The instance whose instance partitions should + be listed. Values are of the form + ``projects//instances/``. Use + ``{instance} = '-'`` to list instance partitions for all + Instances in a project, e.g., + ``projects/myproject/instances/-``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstancePartitionsPager: + The response for + [ListInstancePartitions][google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitions]. + + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance( + request, spanner_instance_admin.ListInstancePartitionsRequest + ): + request = spanner_instance_admin.ListInstancePartitionsRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.list_instance_partitions] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. 
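+ # Sketch of the docstring's '-' wildcard (placeholder project name),
+ # listing partitions across every instance in the project:
+ #
+ #     for partition in client.list_instance_partitions(
+ #         parent="projects/my-project/instances/-"
+ #     ):
+ #         print(partition.name)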
+ response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__iter__` convenience method. + response = pagers.ListInstancePartitionsPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def get_instance( + self, + request: Optional[ + Union[spanner_instance_admin.GetInstanceRequest, dict] + ] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_instance_admin.Instance: + r"""Gets information about a particular instance. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + def sample_get_instance(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.GetInstanceRequest( + name="name_value", + ) + + # Make the request + response = client.get_instance(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_instance_v1.types.GetInstanceRequest, dict]): + The request object. The request for + [GetInstance][google.spanner.admin.instance.v1.InstanceAdmin.GetInstance]. + name (str): + Required. The name of the requested instance. Values are + of the form ``projects//instances/``. + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_instance_v1.types.Instance: + An isolated set of Cloud Spanner + resources on which databases can be + hosted. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.GetInstanceRequest): + request = spanner_instance_admin.GetInstanceRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. 
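+ # Sketch, assuming the request message exposes a ``field_mask`` field
+ # (placeholder names): fetch only a subset of the Instance's fields:
+ #
+ #     from google.protobuf import field_mask_pb2
+ #
+ #     instance = client.get_instance(
+ #         request=spanner_instance_admin.GetInstanceRequest(
+ #             name="projects/my-project/instances/my-instance",
+ #             field_mask=field_mask_pb2.FieldMask(paths=["name", "state"]),
+ #         )
+ #     )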
+ if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.get_instance] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def create_instance( + self, + request: Optional[ + Union[spanner_instance_admin.CreateInstanceRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + instance_id: Optional[str] = None, + instance: Optional[spanner_instance_admin.Instance] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation.Operation: + r"""Creates an instance and begins preparing it to begin serving. + The returned long-running operation can be used to track the + progress of preparing the new instance. The instance name is + assigned by the caller. If the named instance already exists, + ``CreateInstance`` returns ``ALREADY_EXISTS``. + + Immediately upon completion of this request: + + - The instance is readable via the API, with all requested + attributes but no allocated resources. Its state is + ``CREATING``. + + Until completion of the returned operation: + + - Cancelling the operation renders the instance immediately + unreadable via the API. + - The instance can be deleted. + - All other attempts to modify the instance are rejected. + + Upon completion of the returned operation: + + - Billing for all successfully-allocated resources begins (some + types may have lower than the requested levels). + - Databases can be created in the instance. + - The instance's allocated resource levels are readable via the + API. + - The instance's state becomes ``READY``. + + The returned long-running operation will have a name of the + format ``/operations/`` and can be + used to track creation of the instance. The metadata field type + is + [CreateInstanceMetadata][google.spanner.admin.instance.v1.CreateInstanceMetadata]. + The response field type is + [Instance][google.spanner.admin.instance.v1.Instance], if + successful. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + def sample_create_instance(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + instance = spanner_admin_instance_v1.Instance() + instance.name = "name_value" + instance.config = "config_value" + instance.display_name = "display_name_value" + + request = spanner_admin_instance_v1.CreateInstanceRequest( + parent="parent_value", + instance_id="instance_id_value", + instance=instance, + ) + + # Make the request + operation = client.create_instance(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_instance_v1.types.CreateInstanceRequest, dict]): + The request object. The request for + [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance]. + parent (str): + Required. The name of the project in which to create the + instance. Values are of the form ``projects/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + instance_id (str): + Required. The ID of the instance to create. Valid + identifiers are of the form ``[a-z][-a-z0-9]*[a-z0-9]`` + and must be between 2 and 64 characters in length. + + This corresponds to the ``instance_id`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + instance (google.cloud.spanner_admin_instance_v1.types.Instance): + Required. The instance to create. The name may be + omitted, but if specified must be + ``/instances/``. + + This corresponds to the ``instance`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation.Operation: + An object representing a long-running operation. + + The result type for the operation will be + :class:`google.cloud.spanner_admin_instance_v1.types.Instance` + An isolated set of Cloud Spanner resources on which + databases can be hosted. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent, instance_id, instance] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. 
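+ # Usage sketch (placeholders throughout): the returned long-running
+ # operation blocks in result() until the instance reaches READY, and a
+ # timeout bounds the wait:
+ #
+ #     operation = client.create_instance(
+ #         parent="projects/my-project",
+ #         instance_id="my-instance",
+ #         instance=instance,
+ #     )
+ #     instance = operation.result(timeout=300)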
+ if not isinstance(request, spanner_instance_admin.CreateInstanceRequest):
+ request = spanner_instance_admin.CreateInstanceRequest(request)
+ # If we have keyword arguments corresponding to fields on the
+ # request, apply these.
+ if parent is not None:
+ request.parent = parent
+ if instance_id is not None:
+ request.instance_id = instance_id
+ if instance is not None:
+ request.instance = instance
+
+ # Wrap the RPC method; this adds retry and timeout information,
+ # and friendly error handling.
+ rpc = self._transport._wrapped_methods[self._transport.create_instance]
+
+ # Certain fields should be provided within the metadata header;
+ # add these here.
+ metadata = tuple(metadata) + (
+ gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)),
+ )
+
+ # Validate the universe domain.
+ self._validate_universe_domain()
+
+ # Send the request.
+ response = rpc(
+ request,
+ retry=retry,
+ timeout=timeout,
+ metadata=metadata,
+ )
+
+ # Wrap the response in an operation future.
+ response = operation.from_gapic(
+ response,
+ self._transport.operations_client,
+ spanner_instance_admin.Instance,
+ metadata_type=spanner_instance_admin.CreateInstanceMetadata,
+ )
+
+ # Done; return the response.
+ return response
+
+ def update_instance(
+ self,
+ request: Optional[
+ Union[spanner_instance_admin.UpdateInstanceRequest, dict]
+ ] = None,
+ *,
+ instance: Optional[spanner_instance_admin.Instance] = None,
+ field_mask: Optional[field_mask_pb2.FieldMask] = None,
+ retry: OptionalRetry = gapic_v1.method.DEFAULT,
+ timeout: Union[float, object] = gapic_v1.method.DEFAULT,
+ metadata: Sequence[Tuple[str, Union[str, bytes]]] = (),
+ ) -> operation.Operation:
+ r"""Updates an instance, and begins allocating or releasing
+ resources as requested. The returned long-running operation can
+ be used to track the progress of updating the instance. If the
+ named instance does not exist, returns ``NOT_FOUND``.
+
+ Immediately upon completion of this request:
+
+ - For resource types for which a decrease in the instance's
+ allocation has been requested, billing is based on the
+ newly-requested level.
+
+ Until completion of the returned operation:
+
+ - Cancelling the operation sets its metadata's
+ [cancel_time][google.spanner.admin.instance.v1.UpdateInstanceMetadata.cancel_time],
+ and begins restoring resources to their pre-request values.
+ The operation is guaranteed to succeed at undoing all resource
+ changes, after which point it terminates with a ``CANCELLED``
+ status.
+ - All other attempts to modify the instance are rejected.
+ - Reading the instance via the API continues to give the
+ pre-request resource levels.
+
+ Upon completion of the returned operation:
+
+ - Billing begins for all successfully-allocated resources (some
+ types may have lower than the requested levels).
+ - All newly-reserved resources are available for serving the
+ instance's tables.
+ - The instance's new resource levels are readable via the API.
+
+ The returned long-running operation will have a name of the
+ format ``/operations/`` and can be
+ used to track the instance modification. The metadata field type
+ is
+ [UpdateInstanceMetadata][google.spanner.admin.instance.v1.UpdateInstanceMetadata].
+ The response field type is
+ [Instance][google.spanner.admin.instance.v1.Instance], if
+ successful.
+
+ Authorization requires ``spanner.instances.update`` permission
+ on the resource
+ [name][google.spanner.admin.instance.v1.Instance.name].
+
+ .. code-block:: python
+
+ # This snippet has been automatically generated and should be regarded as a
+ # code template only.
+ # It will require modifications to work:
+ # - It may require correct/in-range values for request initialization.
+ # - It may require specifying regional endpoints when creating the service
+ # client as shown in:
+ # https://googleapis.dev/python/google-api-core/latest/client_options.html
+ from google.cloud import spanner_admin_instance_v1
+
+ def sample_update_instance():
+ # Create a client
+ client = spanner_admin_instance_v1.InstanceAdminClient()
+
+ # Initialize request argument(s)
+ instance = spanner_admin_instance_v1.Instance()
+ instance.name = "name_value"
+ instance.config = "config_value"
+ instance.display_name = "display_name_value"
+
+ request = spanner_admin_instance_v1.UpdateInstanceRequest(
+ instance=instance,
+ )
+
+ # Make the request
+ operation = client.update_instance(request=request)
+
+ print("Waiting for operation to complete...")
+
+ response = operation.result()
+
+ # Handle the response
+ print(response)
+
+ Args:
+ request (Union[google.cloud.spanner_admin_instance_v1.types.UpdateInstanceRequest, dict]):
+ The request object. The request for
+ [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance].
+ instance (google.cloud.spanner_admin_instance_v1.types.Instance):
+ Required. The instance to update, which must always
+ include the instance name. Otherwise, only fields
+ mentioned in
+ [field_mask][google.spanner.admin.instance.v1.UpdateInstanceRequest.field_mask]
+ need be included.
+
+ This corresponds to the ``instance`` field
+ on the ``request`` instance; if ``request`` is provided, this
+ should not be set.
+ field_mask (google.protobuf.field_mask_pb2.FieldMask):
+ Required. A mask specifying which fields in
+ [Instance][google.spanner.admin.instance.v1.Instance]
+ should be updated. The field mask must always be
+ specified; this prevents any future fields in
+ [Instance][google.spanner.admin.instance.v1.Instance]
+ from being erased accidentally by clients that do not
+ know about them.
+
+ This corresponds to the ``field_mask`` field
+ on the ``request`` instance; if ``request`` is provided, this
+ should not be set.
+ retry (google.api_core.retry.Retry): Designation of what errors, if any,
+ should be retried.
+ timeout (float): The timeout for this request.
+ metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be
+ sent along with the request as metadata. Normally, each value must be of type `str`,
+ but for metadata keys ending with the suffix `-bin`, the corresponding values must
+ be of type `bytes`.
+
+ Returns:
+ google.api_core.operation.Operation:
+ An object representing a long-running operation.
+
+ The result type for the operation will be
+ :class:`google.cloud.spanner_admin_instance_v1.types.Instance`
+ An isolated set of Cloud Spanner resources on which
+ databases can be hosted.
+
+ """
+ # Create or coerce a protobuf request object.
+ # - Quick check: If we got a request object, we should *not* have
+ # gotten any keyword arguments that map to the request.
+ flattened_params = [instance, field_mask]
+ has_flattened_params = (
+ len([param for param in flattened_params if param is not None]) > 0
+ )
+ if request is not None and has_flattened_params:
+ raise ValueError(
+ "If the `request` argument is set, then none of "
+ "the individual field arguments should be set."
+ ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.UpdateInstanceRequest): + request = spanner_instance_admin.UpdateInstanceRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if instance is not None: + request.instance = instance + if field_mask is not None: + request.field_mask = field_mask + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.update_instance] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata( + (("instance.name", request.instance.name),) + ), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation.from_gapic( + response, + self._transport.operations_client, + spanner_instance_admin.Instance, + metadata_type=spanner_instance_admin.UpdateInstanceMetadata, + ) + + # Done; return the response. + return response + + def delete_instance( + self, + request: Optional[ + Union[spanner_instance_admin.DeleteInstanceRequest, dict] + ] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Deletes an instance. + + Immediately upon completion of the request: + + - Billing ceases for all of the instance's reserved resources. + + Soon afterward: + + - The instance and *all of its databases* immediately and + irrevocably disappear from the API. All data in the databases + is permanently deleted. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + def sample_delete_instance(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.DeleteInstanceRequest( + name="name_value", + ) + + # Make the request + client.delete_instance(request=request) + + Args: + request (Union[google.cloud.spanner_admin_instance_v1.types.DeleteInstanceRequest, dict]): + The request object. The request for + [DeleteInstance][google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstance]. + name (str): + Required. The name of the instance to be deleted. Values + are of the form + ``projects//instances/`` + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. 
Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.DeleteInstanceRequest): + request = spanner_instance_admin.DeleteInstanceRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.delete_instance] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + def set_iam_policy( + self, + request: Optional[Union[iam_policy_pb2.SetIamPolicyRequest, dict]] = None, + *, + resource: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> policy_pb2.Policy: + r"""Sets the access control policy on an instance resource. Replaces + any existing policy. + + Authorization requires ``spanner.instances.setIamPolicy`` on + [resource][google.iam.v1.SetIamPolicyRequest.resource]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + from google.iam.v1 import iam_policy_pb2 # type: ignore + + def sample_set_iam_policy(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = iam_policy_pb2.SetIamPolicyRequest( + resource="resource_value", + ) + + # Make the request + response = client.set_iam_policy(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.iam.v1.iam_policy_pb2.SetIamPolicyRequest, dict]): + The request object. Request message for ``SetIamPolicy`` method. + resource (str): + REQUIRED: The resource for which the + policy is being specified. See the + operation documentation for the + appropriate value for this field. + + This corresponds to the ``resource`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. 
+ retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.iam.v1.policy_pb2.Policy: + An Identity and Access Management (IAM) policy, which specifies access + controls for Google Cloud resources. + + A Policy is a collection of bindings. A binding binds + one or more members, or principals, to a single role. + Principals can be user accounts, service accounts, + Google groups, and domains (such as G Suite). A role + is a named list of permissions; each role can be an + IAM predefined role or a user-created custom role. + + For some types of Google Cloud resources, a binding + can also specify a condition, which is a logical + expression that allows access to a resource only if + the expression evaluates to true. A condition can add + constraints based on attributes of the request, the + resource, or both. To learn which resources support + conditions in their IAM policies, see the [IAM + documentation](https://cloud.google.com/iam/help/conditions/resource-policies). + + **JSON example:** + + :literal:`` { "bindings": [ { "role": "roles/resourcemanager.organizationAdmin", "members": [ "user:mike@example.com", "group:admins@example.com", "domain:google.com", "serviceAccount:my-project-id@appspot.gserviceaccount.com" ] }, { "role": "roles/resourcemanager.organizationViewer", "members": [ "user:eve@example.com" ], "condition": { "title": "expirable access", "description": "Does not grant access after Sep 2020", "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')", } } ], "etag": "BwWWja0YfJA=", "version": 3 }`\ \` + + **YAML example:** + + :literal:`` bindings: - members: - user:mike@example.com - group:admins@example.com - domain:google.com - serviceAccount:my-project-id@appspot.gserviceaccount.com role: roles/resourcemanager.organizationAdmin - members: - user:eve@example.com role: roles/resourcemanager.organizationViewer condition: title: expirable access description: Does not grant access after Sep 2020 expression: request.time < timestamp('2020-10-01T00:00:00.000Z') etag: BwWWja0YfJA= version: 3`\ \` + + For a description of IAM and its features, see the + [IAM + documentation](https://cloud.google.com/iam/docs/). + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [resource] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + if isinstance(request, dict): + # - The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + request = iam_policy_pb2.SetIamPolicyRequest(**request) + elif not request: + # Null request, just make one. + request = iam_policy_pb2.SetIamPolicyRequest() + if resource is not None: + request.resource = resource + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. 
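+ # When `retry` and `timeout` are left as gapic_v1.method.DEFAULT, the
+ # wrapped method fills in the default retry policy and timeout
+ # configured for this RPC.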
+ rpc = self._transport._wrapped_methods[self._transport.set_iam_policy] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("resource", request.resource),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def get_iam_policy( + self, + request: Optional[Union[iam_policy_pb2.GetIamPolicyRequest, dict]] = None, + *, + resource: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> policy_pb2.Policy: + r"""Gets the access control policy for an instance resource. Returns + an empty policy if an instance exists but does not have a policy + set. + + Authorization requires ``spanner.instances.getIamPolicy`` on + [resource][google.iam.v1.GetIamPolicyRequest.resource]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + from google.iam.v1 import iam_policy_pb2 # type: ignore + + def sample_get_iam_policy(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = iam_policy_pb2.GetIamPolicyRequest( + resource="resource_value", + ) + + # Make the request + response = client.get_iam_policy(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.iam.v1.iam_policy_pb2.GetIamPolicyRequest, dict]): + The request object. Request message for ``GetIamPolicy`` method. + resource (str): + REQUIRED: The resource for which the + policy is being requested. See the + operation documentation for the + appropriate value for this field. + + This corresponds to the ``resource`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.iam.v1.policy_pb2.Policy: + An Identity and Access Management (IAM) policy, which specifies access + controls for Google Cloud resources. + + A Policy is a collection of bindings. A binding binds + one or more members, or principals, to a single role. + Principals can be user accounts, service accounts, + Google groups, and domains (such as G Suite). A role + is a named list of permissions; each role can be an + IAM predefined role or a user-created custom role. 
+ + For some types of Google Cloud resources, a binding + can also specify a condition, which is a logical + expression that allows access to a resource only if + the expression evaluates to true. A condition can add + constraints based on attributes of the request, the + resource, or both. To learn which resources support + conditions in their IAM policies, see the [IAM + documentation](https://cloud.google.com/iam/help/conditions/resource-policies). + + **JSON example:** + + :literal:`` { "bindings": [ { "role": "roles/resourcemanager.organizationAdmin", "members": [ "user:mike@example.com", "group:admins@example.com", "domain:google.com", "serviceAccount:my-project-id@appspot.gserviceaccount.com" ] }, { "role": "roles/resourcemanager.organizationViewer", "members": [ "user:eve@example.com" ], "condition": { "title": "expirable access", "description": "Does not grant access after Sep 2020", "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')", } } ], "etag": "BwWWja0YfJA=", "version": 3 }`\ \` + + **YAML example:** + + :literal:`` bindings: - members: - user:mike@example.com - group:admins@example.com - domain:google.com - serviceAccount:my-project-id@appspot.gserviceaccount.com role: roles/resourcemanager.organizationAdmin - members: - user:eve@example.com role: roles/resourcemanager.organizationViewer condition: title: expirable access description: Does not grant access after Sep 2020 expression: request.time < timestamp('2020-10-01T00:00:00.000Z') etag: BwWWja0YfJA= version: 3`\ \` + + For a description of IAM and its features, see the + [IAM + documentation](https://cloud.google.com/iam/docs/). + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [resource] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + if isinstance(request, dict): + # - The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + request = iam_policy_pb2.GetIamPolicyRequest(**request) + elif not request: + # Null request, just make one. + request = iam_policy_pb2.GetIamPolicyRequest() + if resource is not None: + request.resource = resource + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.get_iam_policy] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("resource", request.resource),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. 
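+ # Note: per the docstring above, the returned policy may be empty when
+ # the instance exists but has no policy set.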
+ return response + + def test_iam_permissions( + self, + request: Optional[Union[iam_policy_pb2.TestIamPermissionsRequest, dict]] = None, + *, + resource: Optional[str] = None, + permissions: Optional[MutableSequence[str]] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> iam_policy_pb2.TestIamPermissionsResponse: + r"""Returns permissions that the caller has on the specified + instance resource. + + Attempting this RPC on a non-existent Cloud Spanner instance + resource will result in a NOT_FOUND error if the user has + ``spanner.instances.list`` permission on the containing Google + Cloud Project. Otherwise returns an empty set of permissions. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + from google.iam.v1 import iam_policy_pb2 # type: ignore + + def sample_test_iam_permissions(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = iam_policy_pb2.TestIamPermissionsRequest( + resource="resource_value", + permissions=['permissions_value1', 'permissions_value2'], + ) + + # Make the request + response = client.test_iam_permissions(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.iam.v1.iam_policy_pb2.TestIamPermissionsRequest, dict]): + The request object. Request message for ``TestIamPermissions`` method. + resource (str): + REQUIRED: The resource for which the + policy detail is being requested. See + the operation documentation for the + appropriate value for this field. + + This corresponds to the ``resource`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + permissions (MutableSequence[str]): + The set of permissions to check for the ``resource``. + Permissions with wildcards (such as '*' or 'storage.*') + are not allowed. For more information see `IAM + Overview `__. + + This corresponds to the ``permissions`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.iam.v1.iam_policy_pb2.TestIamPermissionsResponse: + Response message for TestIamPermissions method. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. 
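+ # For example (illustrative values), the flattened form
+ #   client.test_iam_permissions(
+ #       resource="projects/my-project/instances/my-instance",
+ #       permissions=["spanner.instances.get"],
+ #   )
+ # is valid on its own, but combining it with `request=` raises ValueError.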
+ flattened_params = [resource, permissions] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + if isinstance(request, dict): + # - The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + request = iam_policy_pb2.TestIamPermissionsRequest(**request) + elif not request: + # Null request, just make one. + request = iam_policy_pb2.TestIamPermissionsRequest() + if resource is not None: + request.resource = resource + if permissions: + request.permissions.extend(permissions) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.test_iam_permissions] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("resource", request.resource),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def get_instance_partition( + self, + request: Optional[ + Union[spanner_instance_admin.GetInstancePartitionRequest, dict] + ] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_instance_admin.InstancePartition: + r"""Gets information about a particular instance + partition. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + def sample_get_instance_partition(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.GetInstancePartitionRequest( + name="name_value", + ) + + # Make the request + response = client.get_instance_partition(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_instance_v1.types.GetInstancePartitionRequest, dict]): + The request object. The request for + [GetInstancePartition][google.spanner.admin.instance.v1.InstanceAdmin.GetInstancePartition]. + name (str): + Required. The name of the requested instance partition. + Values are of the form + ``projects/{project}/instances/{instance}/instancePartitions/{instance_partition}``. + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. 
Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_instance_v1.types.InstancePartition: + An isolated set of Cloud Spanner + resources that databases can define + placements on. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.GetInstancePartitionRequest): + request = spanner_instance_admin.GetInstancePartitionRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.get_instance_partition] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def create_instance_partition( + self, + request: Optional[ + Union[spanner_instance_admin.CreateInstancePartitionRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + instance_partition: Optional[spanner_instance_admin.InstancePartition] = None, + instance_partition_id: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation.Operation: + r"""Creates an instance partition and begins preparing it to be + used. The returned long-running operation can be used to track + the progress of preparing the new instance partition. The + instance partition name is assigned by the caller. If the named + instance partition already exists, ``CreateInstancePartition`` + returns ``ALREADY_EXISTS``. + + Immediately upon completion of this request: + + - The instance partition is readable via the API, with all + requested attributes but no allocated resources. Its state is + ``CREATING``. + + Until completion of the returned operation: + + - Cancelling the operation renders the instance partition + immediately unreadable via the API. + - The instance partition can be deleted. + - All other attempts to modify the instance partition are + rejected. + + Upon completion of the returned operation: + + - Billing for all successfully-allocated resources begins (some + types may have lower than the requested levels). + - Databases can start using this instance partition. + - The instance partition's allocated resource levels are + readable via the API. + - The instance partition's state becomes ``READY``. 
+ + The returned long-running operation will have a name of the + format ``/operations/`` + and can be used to track creation of the instance partition. The + metadata field type is + [CreateInstancePartitionMetadata][google.spanner.admin.instance.v1.CreateInstancePartitionMetadata]. + The response field type is + [InstancePartition][google.spanner.admin.instance.v1.InstancePartition], + if successful. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + def sample_create_instance_partition(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + instance_partition = spanner_admin_instance_v1.InstancePartition() + instance_partition.node_count = 1070 + instance_partition.name = "name_value" + instance_partition.config = "config_value" + instance_partition.display_name = "display_name_value" + + request = spanner_admin_instance_v1.CreateInstancePartitionRequest( + parent="parent_value", + instance_partition_id="instance_partition_id_value", + instance_partition=instance_partition, + ) + + # Make the request + operation = client.create_instance_partition(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_instance_v1.types.CreateInstancePartitionRequest, dict]): + The request object. The request for + [CreateInstancePartition][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstancePartition]. + parent (str): + Required. The name of the instance in which to create + the instance partition. Values are of the form + ``projects//instances/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + instance_partition (google.cloud.spanner_admin_instance_v1.types.InstancePartition): + Required. The instance partition to create. The + instance_partition.name may be omitted, but if specified + must be + ``/instancePartitions/``. + + This corresponds to the ``instance_partition`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + instance_partition_id (str): + Required. The ID of the instance partition to create. + Valid identifiers are of the form + ``[a-z][-a-z0-9]*[a-z0-9]`` and must be between 2 and 64 + characters in length. + + This corresponds to the ``instance_partition_id`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation.Operation: + An object representing a long-running operation. 
+ + The result type for the operation will be :class:`google.cloud.spanner_admin_instance_v1.types.InstancePartition` An isolated set of Cloud Spanner resources that databases can define + placements on. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent, instance_partition, instance_partition_id] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance( + request, spanner_instance_admin.CreateInstancePartitionRequest + ): + request = spanner_instance_admin.CreateInstancePartitionRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + if instance_partition is not None: + request.instance_partition = instance_partition + if instance_partition_id is not None: + request.instance_partition_id = instance_partition_id + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[ + self._transport.create_instance_partition + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation.from_gapic( + response, + self._transport.operations_client, + spanner_instance_admin.InstancePartition, + metadata_type=spanner_instance_admin.CreateInstancePartitionMetadata, + ) + + # Done; return the response. + return response + + def delete_instance_partition( + self, + request: Optional[ + Union[spanner_instance_admin.DeleteInstancePartitionRequest, dict] + ] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Deletes an existing instance partition. Requires that the + instance partition is not used by any database or backup and is + not the default instance partition of an instance. + + Authorization requires ``spanner.instancePartitions.delete`` + permission on the resource + [name][google.spanner.admin.instance.v1.InstancePartition.name]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + def sample_delete_instance_partition(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.DeleteInstancePartitionRequest( + name="name_value", + ) + + # Make the request + client.delete_instance_partition(request=request) + + Args: + request (Union[google.cloud.spanner_admin_instance_v1.types.DeleteInstancePartitionRequest, dict]): + The request object. The request for + [DeleteInstancePartition][google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstancePartition]. + name (str): + Required. The name of the instance partition to be + deleted. Values are of the form + ``projects/{project}/instances/{instance}/instancePartitions/{instance_partition}`` + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance( + request, spanner_instance_admin.DeleteInstancePartitionRequest + ): + request = spanner_instance_admin.DeleteInstancePartitionRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[ + self._transport.delete_instance_partition + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. 
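+ # No payload is returned; success is signaled by the absence of an
+ # exception. Illustrative flattened call (resource path assumed):
+ #   client.delete_instance_partition(
+ #       name="projects/my-project/instances/my-instance/instancePartitions/my-partition")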
+ rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + def update_instance_partition( + self, + request: Optional[ + Union[spanner_instance_admin.UpdateInstancePartitionRequest, dict] + ] = None, + *, + instance_partition: Optional[spanner_instance_admin.InstancePartition] = None, + field_mask: Optional[field_mask_pb2.FieldMask] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation.Operation: + r"""Updates an instance partition, and begins allocating or + releasing resources as requested. The returned long-running + operation can be used to track the progress of updating the + instance partition. If the named instance partition does not + exist, returns ``NOT_FOUND``. + + Immediately upon completion of this request: + + - For resource types for which a decrease in the instance + partition's allocation has been requested, billing is based on + the newly-requested level. + + Until completion of the returned operation: + + - Cancelling the operation sets its metadata's + [cancel_time][google.spanner.admin.instance.v1.UpdateInstancePartitionMetadata.cancel_time], + and begins restoring resources to their pre-request values. + The operation is guaranteed to succeed at undoing all resource + changes, after which point it terminates with a ``CANCELLED`` + status. + - All other attempts to modify the instance partition are + rejected. + - Reading the instance partition via the API continues to give + the pre-request resource levels. + + Upon completion of the returned operation: + + - Billing begins for all successfully-allocated resources (some + types may have lower than the requested levels). + - All newly-reserved resources are available for serving the + instance partition's tables. + - The instance partition's new resource levels are readable via + the API. + + The returned long-running operation will have a name of the + format ``/operations/`` + and can be used to track the instance partition modification. + The metadata field type is + [UpdateInstancePartitionMetadata][google.spanner.admin.instance.v1.UpdateInstancePartitionMetadata]. + The response field type is + [InstancePartition][google.spanner.admin.instance.v1.InstancePartition], + if successful. + + Authorization requires ``spanner.instancePartitions.update`` + permission on the resource + [name][google.spanner.admin.instance.v1.InstancePartition.name]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + def sample_update_instance_partition(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + instance_partition = spanner_admin_instance_v1.InstancePartition() + instance_partition.node_count = 1070 + instance_partition.name = "name_value" + instance_partition.config = "config_value" + instance_partition.display_name = "display_name_value" + + request = spanner_admin_instance_v1.UpdateInstancePartitionRequest( + instance_partition=instance_partition, + ) + + # Make the request + operation = client.update_instance_partition(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_instance_v1.types.UpdateInstancePartitionRequest, dict]): + The request object. The request for + [UpdateInstancePartition][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstancePartition]. + instance_partition (google.cloud.spanner_admin_instance_v1.types.InstancePartition): + Required. The instance partition to update, which must + always include the instance partition name. Otherwise, + only fields mentioned in + [field_mask][google.spanner.admin.instance.v1.UpdateInstancePartitionRequest.field_mask] + need be included. + + This corresponds to the ``instance_partition`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + field_mask (google.protobuf.field_mask_pb2.FieldMask): + Required. A mask specifying which fields in + [InstancePartition][google.spanner.admin.instance.v1.InstancePartition] + should be updated. The field mask must always be + specified; this prevents any future fields in + [InstancePartition][google.spanner.admin.instance.v1.InstancePartition] + from being erased accidentally by clients that do not + know about them. + + This corresponds to the ``field_mask`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation.Operation: + An object representing a long-running operation. + + The result type for the operation will be :class:`google.cloud.spanner_admin_instance_v1.types.InstancePartition` An isolated set of Cloud Spanner resources that databases can define + placements on. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [instance_partition, field_mask] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." 
+ ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance( + request, spanner_instance_admin.UpdateInstancePartitionRequest + ): + request = spanner_instance_admin.UpdateInstancePartitionRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if instance_partition is not None: + request.instance_partition = instance_partition + if field_mask is not None: + request.field_mask = field_mask + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[ + self._transport.update_instance_partition + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata( + (("instance_partition.name", request.instance_partition.name),) + ), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation.from_gapic( + response, + self._transport.operations_client, + spanner_instance_admin.InstancePartition, + metadata_type=spanner_instance_admin.UpdateInstancePartitionMetadata, + ) + + # Done; return the response. + return response + + def list_instance_partition_operations( + self, + request: Optional[ + Union[spanner_instance_admin.ListInstancePartitionOperationsRequest, dict] + ] = None, + *, + parent: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListInstancePartitionOperationsPager: + r"""Lists instance partition long-running operations in the given + instance. An instance partition operation has a name of the form + ``projects//instances//instancePartitions//operations/``. + The long-running operation metadata field type + ``metadata.type_url`` describes the type of the metadata. + Operations returned include those that have + completed/failed/canceled within the last 7 days, and pending + operations. Operations returned are ordered by + ``operation.metadata.value.start_time`` in descending order + starting from the most recently started operation. + + Authorization requires + ``spanner.instancePartitionOperations.list`` permission on the + resource + [parent][google.spanner.admin.instance.v1.ListInstancePartitionOperationsRequest.parent]. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + def sample_list_instance_partition_operations(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.ListInstancePartitionOperationsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_instance_partition_operations(request=request) + + # Handle the response + for response in page_result: + print(response) + + Args: + request (Union[google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionOperationsRequest, dict]): + The request object. The request for + [ListInstancePartitionOperations][google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitionOperations]. + parent (str): + Required. The parent instance of the instance partition + operations. Values are of the form + ``projects//instances/``. + + This corresponds to the ``parent`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstancePartitionOperationsPager: + The response for + [ListInstancePartitionOperations][google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitionOperations]. + + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [parent] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance( + request, spanner_instance_admin.ListInstancePartitionOperationsRequest + ): + request = spanner_instance_admin.ListInstancePartitionOperationsRequest( + request + ) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if parent is not None: + request.parent = parent + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[ + self._transport.list_instance_partition_operations + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", request.parent),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. 
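+ # The raw RPC returns a single page; the pager constructed below issues
+ # follow-up requests transparently via `next_page_token`, so callers can
+ # simply iterate (illustrative):
+ #   for op in client.list_instance_partition_operations(parent=parent):
+ #       print(op.name)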
+ response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__iter__` convenience method. + response = pagers.ListInstancePartitionOperationsPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def move_instance( + self, + request: Optional[ + Union[spanner_instance_admin.MoveInstanceRequest, dict] + ] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operation.Operation: + r"""Moves an instance to the target instance configuration. You can + use the returned long-running operation to track the progress of + moving the instance. + + ``MoveInstance`` returns ``FAILED_PRECONDITION`` if the instance + meets any of the following criteria: + + - Is undergoing a move to a different instance configuration + - Has backups + - Has an ongoing update + - Contains any CMEK-enabled databases + - Is a free trial instance + + While the operation is pending: + + - All other attempts to modify the instance, including changes + to its compute capacity, are rejected. + + - The following database and backup admin operations are + rejected: + + - ``DatabaseAdmin.CreateDatabase`` + - ``DatabaseAdmin.UpdateDatabaseDdl`` (disabled if + default_leader is specified in the request.) + - ``DatabaseAdmin.RestoreDatabase`` + - ``DatabaseAdmin.CreateBackup`` + - ``DatabaseAdmin.CopyBackup`` + + - Both the source and target instance configurations are subject + to hourly compute and storage charges. + + - The instance might experience higher read-write latencies and + a higher transaction abort rate. However, moving an instance + doesn't cause any downtime. + + The returned long-running operation has a name of the format + ``/operations/`` and can be used to + track the move instance operation. The metadata field type is + [MoveInstanceMetadata][google.spanner.admin.instance.v1.MoveInstanceMetadata]. + The response field type is + [Instance][google.spanner.admin.instance.v1.Instance], if + successful. Cancelling the operation sets its metadata's + [cancel_time][google.spanner.admin.instance.v1.MoveInstanceMetadata.cancel_time]. + Cancellation is not immediate because it involves moving any + data previously moved to the target instance configuration back + to the original instance configuration. You can use this + operation to track the progress of the cancellation. Upon + successful completion of the cancellation, the operation + terminates with ``CANCELLED`` status. + + If not cancelled, upon completion of the returned operation: + + - The instance successfully moves to the target instance + configuration. + - You are billed for compute and storage in target instance + configuration. + + Authorization requires the ``spanner.instances.update`` + permission on the resource + [instance][google.spanner.admin.instance.v1.Instance]. + + For more details, see `Move an + instance `__. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_admin_instance_v1 + + def sample_move_instance(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.MoveInstanceRequest( + name="name_value", + target_config="target_config_value", + ) + + # Make the request + operation = client.move_instance(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_admin_instance_v1.types.MoveInstanceRequest, dict]): + The request object. The request for + [MoveInstance][google.spanner.admin.instance.v1.InstanceAdmin.MoveInstance]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.api_core.operation.Operation: + An object representing a long-running operation. + + The result type for the operation will be :class:`google.cloud.spanner_admin_instance_v1.types.MoveInstanceResponse` The response for + [MoveInstance][google.spanner.admin.instance.v1.InstanceAdmin.MoveInstance]. + + """ + # Create or coerce a protobuf request object. + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner_instance_admin.MoveInstanceRequest): + request = spanner_instance_admin.MoveInstanceRequest(request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.move_instance] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Wrap the response in an operation future. + response = operation.from_gapic( + response, + self._transport.operations_client, + spanner_instance_admin.MoveInstanceResponse, + metadata_type=spanner_instance_admin.MoveInstanceMetadata, + ) + + # Done; return the response. + return response + + def __enter__(self) -> "InstanceAdminClient": + return self + + def __exit__(self, type, value, traceback): + """Releases underlying transport's resources. + + .. warning:: + ONLY use as a context manager if the transport is NOT shared + with other clients! Exiting the with block will CLOSE the transport + and may cause errors in other clients! 
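+
+ A minimal usage sketch with a dedicated transport:
+
+ .. code-block:: python
+
+ with InstanceAdminClient() as client:
+ ... # work with the client; the transport closes on exit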
+ """ + self.transport.close() + + def list_operations( + self, + request: Optional[operations_pb2.ListOperationsRequest] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.ListOperationsResponse: + r"""Lists operations that match the specified filter in the request. + + Args: + request (:class:`~.operations_pb2.ListOperationsRequest`): + The request object. Request message for + `ListOperations` method. + retry (google.api_core.retry.Retry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + Returns: + ~.operations_pb2.ListOperationsResponse: + Response message for ``ListOperations`` method. + """ + # Create or coerce a protobuf request object. + # The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = operations_pb2.ListOperationsRequest(**request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.list_operations] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + try: + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + except core_exceptions.GoogleAPICallError as e: + self._add_cred_info_for_auth_errors(e) + raise e + + def get_operation( + self, + request: Optional[operations_pb2.GetOperationRequest] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.Operation: + r"""Gets the latest state of a long-running operation. + + Args: + request (:class:`~.operations_pb2.GetOperationRequest`): + The request object. Request message for + `GetOperation` method. + retry (google.api_core.retry.Retry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + Returns: + ~.operations_pb2.Operation: + An ``Operation`` object. + """ + # Create or coerce a protobuf request object. + # The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = operations_pb2.GetOperationRequest(**request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.get_operation] + + # Certain fields should be provided within the metadata header; + # add these here. 
+ metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + try: + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + except core_exceptions.GoogleAPICallError as e: + self._add_cred_info_for_auth_errors(e) + raise e + + def delete_operation( + self, + request: Optional[operations_pb2.DeleteOperationRequest] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Deletes a long-running operation. + + This method indicates that the client is no longer interested + in the operation result. It does not cancel the operation. + If the server doesn't support this method, it returns + `google.rpc.Code.UNIMPLEMENTED`. + + Args: + request (:class:`~.operations_pb2.DeleteOperationRequest`): + The request object. Request message for + `DeleteOperation` method. + retry (google.api_core.retry.Retry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + Returns: + None + """ + # Create or coerce a protobuf request object. + # The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = operations_pb2.DeleteOperationRequest(**request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.delete_operation] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + def cancel_operation( + self, + request: Optional[operations_pb2.CancelOperationRequest] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Starts asynchronous cancellation on a long-running operation. + + The server makes a best effort to cancel the operation, but success + is not guaranteed. If the server doesn't support this method, it returns + `google.rpc.Code.UNIMPLEMENTED`. + + Args: + request (:class:`~.operations_pb2.CancelOperationRequest`): + The request object. Request message for + `CancelOperation` method. + retry (google.api_core.retry.Retry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + Returns: + None + """ + # Create or coerce a protobuf request object. 
+ # The request isn't a proto-plus wrapped type, + # so it must be constructed via keyword expansion. + if isinstance(request, dict): + request = operations_pb2.CancelOperationRequest(**request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.cancel_operation] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + +DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=package_version.__version__ +) + +if hasattr(DEFAULT_CLIENT_INFO, "protobuf_runtime_version"): # pragma: NO COVER + DEFAULT_CLIENT_INFO.protobuf_runtime_version = google.protobuf.__version__ + +__all__ = ("InstanceAdminClient",) diff --git a/google/cloud/spanner_admin_instance_v1/services/instance_admin/pagers.py b/google/cloud/spanner_admin_instance_v1/services/instance_admin/pagers.py new file mode 100644 index 0000000000..d4a3dde6d8 --- /dev/null +++ b/google/cloud/spanner_admin_instance_v1/services/instance_admin/pagers.py @@ -0,0 +1,856 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +from google.api_core import gapic_v1 +from google.api_core import retry as retries +from google.api_core import retry_async as retries_async +from typing import ( + Any, + AsyncIterator, + Awaitable, + Callable, + Sequence, + Tuple, + Optional, + Iterator, + Union, +) + +try: + OptionalRetry = Union[retries.Retry, gapic_v1.method._MethodDefault, None] + OptionalAsyncRetry = Union[ + retries_async.AsyncRetry, gapic_v1.method._MethodDefault, None + ] +except AttributeError: # pragma: NO COVER + OptionalRetry = Union[retries.Retry, object, None] # type: ignore + OptionalAsyncRetry = Union[retries_async.AsyncRetry, object, None] # type: ignore + +from google.cloud.spanner_admin_instance_v1.types import spanner_instance_admin +from google.longrunning import operations_pb2 # type: ignore + + +class ListInstanceConfigsPager: + """A pager for iterating through ``list_instance_configs`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsResponse` object, and + provides an ``__iter__`` method to iterate through its + ``instance_configs`` field. + + If there are more pages, the ``__iter__`` method will make additional + ``ListInstanceConfigs`` requests and continue to iterate + through the ``instance_configs`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsResponse` + attributes are available on the pager. 
If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[..., spanner_instance_admin.ListInstanceConfigsResponse], + request: spanner_instance_admin.ListInstanceConfigsRequest, + response: spanner_instance_admin.ListInstanceConfigsResponse, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiate the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsRequest): + The initial request object. + response (google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsResponse): + The initial response object. + retry (google.api_core.retry.Retry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + self._method = method + self._request = spanner_instance_admin.ListInstanceConfigsRequest(request) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + def pages(self) -> Iterator[spanner_instance_admin.ListInstanceConfigsResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __iter__(self) -> Iterator[spanner_instance_admin.InstanceConfig]: + for page in self.pages: + yield from page.instance_configs + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListInstanceConfigsAsyncPager: + """A pager for iterating through ``list_instance_configs`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsResponse` object, and + provides an ``__aiter__`` method to iterate through its + ``instance_configs`` field. + + If there are more pages, the ``__aiter__`` method will make additional + ``ListInstanceConfigs`` requests and continue to iterate + through the ``instance_configs`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[ + ..., Awaitable[spanner_instance_admin.ListInstanceConfigsResponse] + ], + request: spanner_instance_admin.ListInstanceConfigsRequest, + response: spanner_instance_admin.ListInstanceConfigsResponse, + *, + retry: OptionalAsyncRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiates the pager. 
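+
+        A hedged consumption sketch (the pager is normally returned by
+        ``InstanceAdminAsyncClient.list_instance_configs`` rather than
+        constructed directly):
+
+            async for config in pager:
+                print(config.name)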
+ + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsRequest): + The initial request object. + response (google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsResponse): + The initial response object. + retry (google.api_core.retry.AsyncRetry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + self._method = method + self._request = spanner_instance_admin.ListInstanceConfigsRequest(request) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + async def pages( + self, + ) -> AsyncIterator[spanner_instance_admin.ListInstanceConfigsResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = await self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __aiter__(self) -> AsyncIterator[spanner_instance_admin.InstanceConfig]: + async def async_generator(): + async for page in self.pages: + for response in page.instance_configs: + yield response + + return async_generator() + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListInstanceConfigOperationsPager: + """A pager for iterating through ``list_instance_config_operations`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigOperationsResponse` object, and + provides an ``__iter__`` method to iterate through its + ``operations`` field. + + If there are more pages, the ``__iter__`` method will make additional + ``ListInstanceConfigOperations`` requests and continue to iterate + through the ``operations`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigOperationsResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[ + ..., spanner_instance_admin.ListInstanceConfigOperationsResponse + ], + request: spanner_instance_admin.ListInstanceConfigOperationsRequest, + response: spanner_instance_admin.ListInstanceConfigOperationsResponse, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiate the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigOperationsRequest): + The initial request object. + response (google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigOperationsResponse): + The initial response object. + retry (google.api_core.retry.Retry): Designation of what errors, + if any, should be retried. 
+ timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + self._method = method + self._request = spanner_instance_admin.ListInstanceConfigOperationsRequest( + request + ) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + def pages( + self, + ) -> Iterator[spanner_instance_admin.ListInstanceConfigOperationsResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __iter__(self) -> Iterator[operations_pb2.Operation]: + for page in self.pages: + yield from page.operations + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListInstanceConfigOperationsAsyncPager: + """A pager for iterating through ``list_instance_config_operations`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigOperationsResponse` object, and + provides an ``__aiter__`` method to iterate through its + ``operations`` field. + + If there are more pages, the ``__aiter__`` method will make additional + ``ListInstanceConfigOperations`` requests and continue to iterate + through the ``operations`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigOperationsResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[ + ..., Awaitable[spanner_instance_admin.ListInstanceConfigOperationsResponse] + ], + request: spanner_instance_admin.ListInstanceConfigOperationsRequest, + response: spanner_instance_admin.ListInstanceConfigOperationsResponse, + *, + retry: OptionalAsyncRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiates the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigOperationsRequest): + The initial request object. + response (google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigOperationsResponse): + The initial response object. + retry (google.api_core.retry.AsyncRetry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + self._method = method + self._request = spanner_instance_admin.ListInstanceConfigOperationsRequest( + request + ) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + async def pages( + self, + ) -> AsyncIterator[spanner_instance_admin.ListInstanceConfigOperationsResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = await self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __aiter__(self) -> AsyncIterator[operations_pb2.Operation]: + async def async_generator(): + async for page in self.pages: + for response in page.operations: + yield response + + return async_generator() + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListInstancesPager: + """A pager for iterating through ``list_instances`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_instance_v1.types.ListInstancesResponse` object, and + provides an ``__iter__`` method to iterate through its + ``instances`` field. + + If there are more pages, the ``__iter__`` method will make additional + ``ListInstances`` requests and continue to iterate + through the ``instances`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_instance_v1.types.ListInstancesResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[..., spanner_instance_admin.ListInstancesResponse], + request: spanner_instance_admin.ListInstancesRequest, + response: spanner_instance_admin.ListInstancesResponse, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiate the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_instance_v1.types.ListInstancesRequest): + The initial request object. + response (google.cloud.spanner_admin_instance_v1.types.ListInstancesResponse): + The initial response object. + retry (google.api_core.retry.Retry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + self._method = method + self._request = spanner_instance_admin.ListInstancesRequest(request) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + def pages(self) -> Iterator[spanner_instance_admin.ListInstancesResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __iter__(self) -> Iterator[spanner_instance_admin.Instance]: + for page in self.pages: + yield from page.instances + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListInstancesAsyncPager: + """A pager for iterating through ``list_instances`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_instance_v1.types.ListInstancesResponse` object, and + provides an ``__aiter__`` method to iterate through its + ``instances`` field. + + If there are more pages, the ``__aiter__`` method will make additional + ``ListInstances`` requests and continue to iterate + through the ``instances`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_instance_v1.types.ListInstancesResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[..., Awaitable[spanner_instance_admin.ListInstancesResponse]], + request: spanner_instance_admin.ListInstancesRequest, + response: spanner_instance_admin.ListInstancesResponse, + *, + retry: OptionalAsyncRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiates the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_instance_v1.types.ListInstancesRequest): + The initial request object. + response (google.cloud.spanner_admin_instance_v1.types.ListInstancesResponse): + The initial response object. + retry (google.api_core.retry.AsyncRetry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + self._method = method + self._request = spanner_instance_admin.ListInstancesRequest(request) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + async def pages( + self, + ) -> AsyncIterator[spanner_instance_admin.ListInstancesResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = await self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __aiter__(self) -> AsyncIterator[spanner_instance_admin.Instance]: + async def async_generator(): + async for page in self.pages: + for response in page.instances: + yield response + + return async_generator() + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListInstancePartitionsPager: + """A pager for iterating through ``list_instance_partitions`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionsResponse` object, and + provides an ``__iter__`` method to iterate through its + ``instance_partitions`` field. + + If there are more pages, the ``__iter__`` method will make additional + ``ListInstancePartitions`` requests and continue to iterate + through the ``instance_partitions`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionsResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[..., spanner_instance_admin.ListInstancePartitionsResponse], + request: spanner_instance_admin.ListInstancePartitionsRequest, + response: spanner_instance_admin.ListInstancePartitionsResponse, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiate the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionsRequest): + The initial request object. + response (google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionsResponse): + The initial response object. + retry (google.api_core.retry.Retry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + self._method = method + self._request = spanner_instance_admin.ListInstancePartitionsRequest(request) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + def pages(self) -> Iterator[spanner_instance_admin.ListInstancePartitionsResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __iter__(self) -> Iterator[spanner_instance_admin.InstancePartition]: + for page in self.pages: + yield from page.instance_partitions + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListInstancePartitionsAsyncPager: + """A pager for iterating through ``list_instance_partitions`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionsResponse` object, and + provides an ``__aiter__`` method to iterate through its + ``instance_partitions`` field. + + If there are more pages, the ``__aiter__`` method will make additional + ``ListInstancePartitions`` requests and continue to iterate + through the ``instance_partitions`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionsResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[ + ..., Awaitable[spanner_instance_admin.ListInstancePartitionsResponse] + ], + request: spanner_instance_admin.ListInstancePartitionsRequest, + response: spanner_instance_admin.ListInstancePartitionsResponse, + *, + retry: OptionalAsyncRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiates the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionsRequest): + The initial request object. + response (google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionsResponse): + The initial response object. + retry (google.api_core.retry.AsyncRetry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + self._method = method + self._request = spanner_instance_admin.ListInstancePartitionsRequest(request) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + async def pages( + self, + ) -> AsyncIterator[spanner_instance_admin.ListInstancePartitionsResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = await self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __aiter__(self) -> AsyncIterator[spanner_instance_admin.InstancePartition]: + async def async_generator(): + async for page in self.pages: + for response in page.instance_partitions: + yield response + + return async_generator() + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListInstancePartitionOperationsPager: + """A pager for iterating through ``list_instance_partition_operations`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionOperationsResponse` object, and + provides an ``__iter__`` method to iterate through its + ``operations`` field. + + If there are more pages, the ``__iter__`` method will make additional + ``ListInstancePartitionOperations`` requests and continue to iterate + through the ``operations`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionOperationsResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[ + ..., spanner_instance_admin.ListInstancePartitionOperationsResponse + ], + request: spanner_instance_admin.ListInstancePartitionOperationsRequest, + response: spanner_instance_admin.ListInstancePartitionOperationsResponse, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiate the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionOperationsRequest): + The initial request object. + response (google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionOperationsResponse): + The initial response object. + retry (google.api_core.retry.Retry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + self._method = method + self._request = spanner_instance_admin.ListInstancePartitionOperationsRequest( + request + ) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + def pages( + self, + ) -> Iterator[spanner_instance_admin.ListInstancePartitionOperationsResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __iter__(self) -> Iterator[operations_pb2.Operation]: + for page in self.pages: + yield from page.operations + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListInstancePartitionOperationsAsyncPager: + """A pager for iterating through ``list_instance_partition_operations`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionOperationsResponse` object, and + provides an ``__aiter__`` method to iterate through its + ``operations`` field. + + If there are more pages, the ``__aiter__`` method will make additional + ``ListInstancePartitionOperations`` requests and continue to iterate + through the ``operations`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionOperationsResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[ + ..., + Awaitable[spanner_instance_admin.ListInstancePartitionOperationsResponse], + ], + request: spanner_instance_admin.ListInstancePartitionOperationsRequest, + response: spanner_instance_admin.ListInstancePartitionOperationsResponse, + *, + retry: OptionalAsyncRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiates the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionOperationsRequest): + The initial request object. + response (google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionOperationsResponse): + The initial response object. + retry (google.api_core.retry.AsyncRetry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + self._method = method + self._request = spanner_instance_admin.ListInstancePartitionOperationsRequest( + request + ) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + async def pages( + self, + ) -> AsyncIterator[spanner_instance_admin.ListInstancePartitionOperationsResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = await self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __aiter__(self) -> AsyncIterator[operations_pb2.Operation]: + async def async_generator(): + async for page in self.pages: + for response in page.operations: + yield response + + return async_generator() + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) diff --git a/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/README.rst b/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/README.rst new file mode 100644 index 0000000000..762ac0c765 --- /dev/null +++ b/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/README.rst @@ -0,0 +1,9 @@ + +transport inheritance structure +_______________________________ + +`InstanceAdminTransport` is the ABC for all transports. +- public child `InstanceAdminGrpcTransport` for sync gRPC transport (defined in `grpc.py`). +- public child `InstanceAdminGrpcAsyncIOTransport` for async gRPC transport (defined in `grpc_asyncio.py`). +- private child `_BaseInstanceAdminRestTransport` for base REST transport with inner classes `_BaseMETHOD` (defined in `rest_base.py`). +- public child `InstanceAdminRestTransport` for sync REST transport with inner classes `METHOD` derived from the parent's corresponding `_BaseMETHOD` classes (defined in `rest.py`). diff --git a/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/__init__.py b/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/__init__.py new file mode 100644 index 0000000000..24e71739c7 --- /dev/null +++ b/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/__init__.py @@ -0,0 +1,38 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +from collections import OrderedDict +from typing import Dict, Type + +from .base import InstanceAdminTransport +from .grpc import InstanceAdminGrpcTransport +from .grpc_asyncio import InstanceAdminGrpcAsyncIOTransport +from .rest import InstanceAdminRestTransport +from .rest import InstanceAdminRestInterceptor + + +# Compile a registry of transports. 
+_transport_registry = OrderedDict() # type: Dict[str, Type[InstanceAdminTransport]] +_transport_registry["grpc"] = InstanceAdminGrpcTransport +_transport_registry["grpc_asyncio"] = InstanceAdminGrpcAsyncIOTransport +_transport_registry["rest"] = InstanceAdminRestTransport + +__all__ = ( + "InstanceAdminTransport", + "InstanceAdminGrpcTransport", + "InstanceAdminGrpcAsyncIOTransport", + "InstanceAdminRestTransport", + "InstanceAdminRestInterceptor", +) diff --git a/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/base.py b/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/base.py new file mode 100644 index 0000000000..5a737b69f7 --- /dev/null +++ b/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/base.py @@ -0,0 +1,598 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +import abc +from typing import Awaitable, Callable, Dict, Optional, Sequence, Union + +from google.cloud.spanner_admin_instance_v1 import gapic_version as package_version + +import google.auth # type: ignore +import google.api_core +from google.api_core import exceptions as core_exceptions +from google.api_core import gapic_v1 +from google.api_core import retry as retries +from google.api_core import operations_v1 +from google.auth import credentials as ga_credentials # type: ignore +from google.oauth2 import service_account # type: ignore +import google.protobuf + +from google.cloud.spanner_admin_instance_v1.types import spanner_instance_admin +from google.iam.v1 import iam_policy_pb2 # type: ignore +from google.iam.v1 import policy_pb2 # type: ignore +from google.longrunning import operations_pb2 # type: ignore +from google.protobuf import empty_pb2 # type: ignore + +DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=package_version.__version__ +) + +if hasattr(DEFAULT_CLIENT_INFO, "protobuf_runtime_version"): # pragma: NO COVER + DEFAULT_CLIENT_INFO.protobuf_runtime_version = google.protobuf.__version__ + + +class InstanceAdminTransport(abc.ABC): + """Abstract transport class for InstanceAdmin.""" + + AUTH_SCOPES = ( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.admin", + ) + + DEFAULT_HOST: str = "spanner.googleapis.com" + + def __init__( + self, + *, + host: str = DEFAULT_HOST, + credentials: Optional[ga_credentials.Credentials] = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + quota_project_id: Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + always_use_jwt_access: Optional[bool] = False, + api_audience: Optional[str] = None, + **kwargs, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): + The hostname to connect to (default: 'spanner.googleapis.com'). + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. 
These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is mutually exclusive with credentials. + scopes (Optional[Sequence[str]]): A list of scopes. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + always_use_jwt_access (Optional[bool]): Whether self signed JWT should + be used for service account credentials. + """ + + scopes_kwargs = {"scopes": scopes, "default_scopes": self.AUTH_SCOPES} + + # Save the scopes. + self._scopes = scopes + if not hasattr(self, "_ignore_credentials"): + self._ignore_credentials: bool = False + + # If no credentials are provided, then determine the appropriate + # defaults. + if credentials and credentials_file: + raise core_exceptions.DuplicateCredentialArgs( + "'credentials_file' and 'credentials' are mutually exclusive" + ) + + if credentials_file is not None: + credentials, _ = google.auth.load_credentials_from_file( + credentials_file, **scopes_kwargs, quota_project_id=quota_project_id + ) + elif credentials is None and not self._ignore_credentials: + credentials, _ = google.auth.default( + **scopes_kwargs, quota_project_id=quota_project_id + ) + # Don't apply audience if the credentials file passed from user. + if hasattr(credentials, "with_gdch_audience"): + credentials = credentials.with_gdch_audience( + api_audience if api_audience else host + ) + + # If the credentials are service account credentials, then always try to use self signed JWT. + if ( + always_use_jwt_access + and isinstance(credentials, service_account.Credentials) + and hasattr(service_account.Credentials, "with_always_use_jwt_access") + ): + credentials = credentials.with_always_use_jwt_access(True) + + # Save the credentials. + self._credentials = credentials + + # Save the hostname. Default to port 443 (HTTPS) if none is specified. + if ":" not in host: + host += ":443" + self._host = host + + @property + def host(self): + return self._host + + def _prep_wrapped_messages(self, client_info): + # Precompute the wrapped methods. 
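+        # Each RPC is paired with its default retry/timeout policy. For the
+        # retried methods below, Retry(initial=1.0, multiplier=1.3,
+        # maximum=32.0) waits roughly 1.0s, 1.3s, 1.69s, ... between
+        # attempts (capped at 32s), retrying only DeadlineExceeded and
+        # ServiceUnavailable errors until the 3600s deadline.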
+ self._wrapped_methods = { + self.list_instance_configs: gapic_v1.method.wrap_method( + self.list_instance_configs, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.get_instance_config: gapic_v1.method.wrap_method( + self.get_instance_config, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.create_instance_config: gapic_v1.method.wrap_method( + self.create_instance_config, + default_timeout=None, + client_info=client_info, + ), + self.update_instance_config: gapic_v1.method.wrap_method( + self.update_instance_config, + default_timeout=None, + client_info=client_info, + ), + self.delete_instance_config: gapic_v1.method.wrap_method( + self.delete_instance_config, + default_timeout=None, + client_info=client_info, + ), + self.list_instance_config_operations: gapic_v1.method.wrap_method( + self.list_instance_config_operations, + default_timeout=None, + client_info=client_info, + ), + self.list_instances: gapic_v1.method.wrap_method( + self.list_instances, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.list_instance_partitions: gapic_v1.method.wrap_method( + self.list_instance_partitions, + default_timeout=None, + client_info=client_info, + ), + self.get_instance: gapic_v1.method.wrap_method( + self.get_instance, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.create_instance: gapic_v1.method.wrap_method( + self.create_instance, + default_timeout=3600.0, + client_info=client_info, + ), + self.update_instance: gapic_v1.method.wrap_method( + self.update_instance, + default_timeout=3600.0, + client_info=client_info, + ), + self.delete_instance: gapic_v1.method.wrap_method( + self.delete_instance, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.set_iam_policy: gapic_v1.method.wrap_method( + self.set_iam_policy, + default_timeout=30.0, + client_info=client_info, + ), + self.get_iam_policy: gapic_v1.method.wrap_method( + self.get_iam_policy, + default_retry=retries.Retry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.test_iam_permissions: gapic_v1.method.wrap_method( + self.test_iam_permissions, + default_timeout=30.0, + client_info=client_info, + ), + self.get_instance_partition: gapic_v1.method.wrap_method( + 
self.get_instance_partition, + default_timeout=None, + client_info=client_info, + ), + self.create_instance_partition: gapic_v1.method.wrap_method( + self.create_instance_partition, + default_timeout=None, + client_info=client_info, + ), + self.delete_instance_partition: gapic_v1.method.wrap_method( + self.delete_instance_partition, + default_timeout=None, + client_info=client_info, + ), + self.update_instance_partition: gapic_v1.method.wrap_method( + self.update_instance_partition, + default_timeout=None, + client_info=client_info, + ), + self.list_instance_partition_operations: gapic_v1.method.wrap_method( + self.list_instance_partition_operations, + default_timeout=None, + client_info=client_info, + ), + self.move_instance: gapic_v1.method.wrap_method( + self.move_instance, + default_timeout=None, + client_info=client_info, + ), + self.cancel_operation: gapic_v1.method.wrap_method( + self.cancel_operation, + default_timeout=None, + client_info=client_info, + ), + self.delete_operation: gapic_v1.method.wrap_method( + self.delete_operation, + default_timeout=None, + client_info=client_info, + ), + self.get_operation: gapic_v1.method.wrap_method( + self.get_operation, + default_timeout=None, + client_info=client_info, + ), + self.list_operations: gapic_v1.method.wrap_method( + self.list_operations, + default_timeout=None, + client_info=client_info, + ), + } + + def close(self): + """Closes resources associated with the transport. + + .. warning:: + Only call this method if the transport is NOT shared + with other clients - this may cause errors in other clients! + """ + raise NotImplementedError() + + @property + def operations_client(self): + """Return the client designed to process long-running operations.""" + raise NotImplementedError() + + @property + def list_instance_configs( + self, + ) -> Callable[ + [spanner_instance_admin.ListInstanceConfigsRequest], + Union[ + spanner_instance_admin.ListInstanceConfigsResponse, + Awaitable[spanner_instance_admin.ListInstanceConfigsResponse], + ], + ]: + raise NotImplementedError() + + @property + def get_instance_config( + self, + ) -> Callable[ + [spanner_instance_admin.GetInstanceConfigRequest], + Union[ + spanner_instance_admin.InstanceConfig, + Awaitable[spanner_instance_admin.InstanceConfig], + ], + ]: + raise NotImplementedError() + + @property + def create_instance_config( + self, + ) -> Callable[ + [spanner_instance_admin.CreateInstanceConfigRequest], + Union[operations_pb2.Operation, Awaitable[operations_pb2.Operation]], + ]: + raise NotImplementedError() + + @property + def update_instance_config( + self, + ) -> Callable[ + [spanner_instance_admin.UpdateInstanceConfigRequest], + Union[operations_pb2.Operation, Awaitable[operations_pb2.Operation]], + ]: + raise NotImplementedError() + + @property + def delete_instance_config( + self, + ) -> Callable[ + [spanner_instance_admin.DeleteInstanceConfigRequest], + Union[empty_pb2.Empty, Awaitable[empty_pb2.Empty]], + ]: + raise NotImplementedError() + + @property + def list_instance_config_operations( + self, + ) -> Callable[ + [spanner_instance_admin.ListInstanceConfigOperationsRequest], + Union[ + spanner_instance_admin.ListInstanceConfigOperationsResponse, + Awaitable[spanner_instance_admin.ListInstanceConfigOperationsResponse], + ], + ]: + raise NotImplementedError() + + @property + def list_instances( + self, + ) -> Callable[ + [spanner_instance_admin.ListInstancesRequest], + Union[ + spanner_instance_admin.ListInstancesResponse, + Awaitable[spanner_instance_admin.ListInstancesResponse], 
+ ], + ]: + raise NotImplementedError() + + @property + def list_instance_partitions( + self, + ) -> Callable[ + [spanner_instance_admin.ListInstancePartitionsRequest], + Union[ + spanner_instance_admin.ListInstancePartitionsResponse, + Awaitable[spanner_instance_admin.ListInstancePartitionsResponse], + ], + ]: + raise NotImplementedError() + + @property + def get_instance( + self, + ) -> Callable[ + [spanner_instance_admin.GetInstanceRequest], + Union[ + spanner_instance_admin.Instance, Awaitable[spanner_instance_admin.Instance] + ], + ]: + raise NotImplementedError() + + @property + def create_instance( + self, + ) -> Callable[ + [spanner_instance_admin.CreateInstanceRequest], + Union[operations_pb2.Operation, Awaitable[operations_pb2.Operation]], + ]: + raise NotImplementedError() + + @property + def update_instance( + self, + ) -> Callable[ + [spanner_instance_admin.UpdateInstanceRequest], + Union[operations_pb2.Operation, Awaitable[operations_pb2.Operation]], + ]: + raise NotImplementedError() + + @property + def delete_instance( + self, + ) -> Callable[ + [spanner_instance_admin.DeleteInstanceRequest], + Union[empty_pb2.Empty, Awaitable[empty_pb2.Empty]], + ]: + raise NotImplementedError() + + @property + def set_iam_policy( + self, + ) -> Callable[ + [iam_policy_pb2.SetIamPolicyRequest], + Union[policy_pb2.Policy, Awaitable[policy_pb2.Policy]], + ]: + raise NotImplementedError() + + @property + def get_iam_policy( + self, + ) -> Callable[ + [iam_policy_pb2.GetIamPolicyRequest], + Union[policy_pb2.Policy, Awaitable[policy_pb2.Policy]], + ]: + raise NotImplementedError() + + @property + def test_iam_permissions( + self, + ) -> Callable[ + [iam_policy_pb2.TestIamPermissionsRequest], + Union[ + iam_policy_pb2.TestIamPermissionsResponse, + Awaitable[iam_policy_pb2.TestIamPermissionsResponse], + ], + ]: + raise NotImplementedError() + + @property + def get_instance_partition( + self, + ) -> Callable[ + [spanner_instance_admin.GetInstancePartitionRequest], + Union[ + spanner_instance_admin.InstancePartition, + Awaitable[spanner_instance_admin.InstancePartition], + ], + ]: + raise NotImplementedError() + + @property + def create_instance_partition( + self, + ) -> Callable[ + [spanner_instance_admin.CreateInstancePartitionRequest], + Union[operations_pb2.Operation, Awaitable[operations_pb2.Operation]], + ]: + raise NotImplementedError() + + @property + def delete_instance_partition( + self, + ) -> Callable[ + [spanner_instance_admin.DeleteInstancePartitionRequest], + Union[empty_pb2.Empty, Awaitable[empty_pb2.Empty]], + ]: + raise NotImplementedError() + + @property + def update_instance_partition( + self, + ) -> Callable[ + [spanner_instance_admin.UpdateInstancePartitionRequest], + Union[operations_pb2.Operation, Awaitable[operations_pb2.Operation]], + ]: + raise NotImplementedError() + + @property + def list_instance_partition_operations( + self, + ) -> Callable[ + [spanner_instance_admin.ListInstancePartitionOperationsRequest], + Union[ + spanner_instance_admin.ListInstancePartitionOperationsResponse, + Awaitable[spanner_instance_admin.ListInstancePartitionOperationsResponse], + ], + ]: + raise NotImplementedError() + + @property + def move_instance( + self, + ) -> Callable[ + [spanner_instance_admin.MoveInstanceRequest], + Union[operations_pb2.Operation, Awaitable[operations_pb2.Operation]], + ]: + raise NotImplementedError() + + @property + def list_operations( + self, + ) -> Callable[ + [operations_pb2.ListOperationsRequest], + Union[ + operations_pb2.ListOperationsResponse, + 
Awaitable[operations_pb2.ListOperationsResponse], + ], + ]: + raise NotImplementedError() + + @property + def get_operation( + self, + ) -> Callable[ + [operations_pb2.GetOperationRequest], + Union[operations_pb2.Operation, Awaitable[operations_pb2.Operation]], + ]: + raise NotImplementedError() + + @property + def cancel_operation( + self, + ) -> Callable[[operations_pb2.CancelOperationRequest], None,]: + raise NotImplementedError() + + @property + def delete_operation( + self, + ) -> Callable[[operations_pb2.DeleteOperationRequest], None,]: + raise NotImplementedError() + + @property + def kind(self) -> str: + raise NotImplementedError() + + +__all__ = ("InstanceAdminTransport",) diff --git a/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/grpc.py b/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/grpc.py new file mode 100644 index 0000000000..ee5b765210 --- /dev/null +++ b/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/grpc.py @@ -0,0 +1,1416 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +import json +import logging as std_logging +import pickle +import warnings +from typing import Callable, Dict, Optional, Sequence, Tuple, Union + +from google.api_core import grpc_helpers +from google.api_core import operations_v1 +from google.api_core import gapic_v1 +import google.auth # type: ignore +from google.auth import credentials as ga_credentials # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore +from google.protobuf.json_format import MessageToJson +import google.protobuf.message + +import grpc # type: ignore +import proto # type: ignore + +from google.cloud.spanner_admin_instance_v1.types import spanner_instance_admin +from google.iam.v1 import iam_policy_pb2 # type: ignore +from google.iam.v1 import policy_pb2 # type: ignore +from google.longrunning import operations_pb2 # type: ignore +from google.protobuf import empty_pb2 # type: ignore +from .base import InstanceAdminTransport, DEFAULT_CLIENT_INFO + +try: + from google.api_core import client_logging # type: ignore + + CLIENT_LOGGING_SUPPORTED = True # pragma: NO COVER +except ImportError: # pragma: NO COVER + CLIENT_LOGGING_SUPPORTED = False + +_LOGGER = std_logging.getLogger(__name__) + + +class _LoggingClientInterceptor(grpc.UnaryUnaryClientInterceptor): # pragma: NO COVER + def intercept_unary_unary(self, continuation, client_call_details, request): + logging_enabled = CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + std_logging.DEBUG + ) + if logging_enabled: # pragma: NO COVER + request_metadata = client_call_details.metadata + if isinstance(request, proto.Message): + request_payload = type(request).to_json(request) + elif isinstance(request, google.protobuf.message.Message): + request_payload = MessageToJson(request) + else: + request_payload = f"{type(request).__name__}: {pickle.dumps(request)}" + + request_metadata = { + key: value.decode("utf-8") if 
isinstance(value, bytes) else value
+                for key, value in request_metadata
+            }
+            grpc_request = {
+                "payload": request_payload,
+                "requestMethod": "grpc",
+                "metadata": dict(request_metadata),
+            }
+            _LOGGER.debug(
+                f"Sending request for {client_call_details.method}",
+                extra={
+                    "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin",
+                    "rpcName": str(client_call_details.method),
+                    "request": grpc_request,
+                    "metadata": grpc_request["metadata"],
+                },
+            )
+        response = continuation(client_call_details, request)
+        if logging_enabled:  # pragma: NO COVER
+            response_metadata = response.trailing_metadata()
+            # Convert gRPC metadata to a list of tuples
+            metadata = (
+                dict([(k, str(v)) for k, v in response_metadata])
+                if response_metadata
+                else None
+            )
+            result = response.result()
+            if isinstance(result, proto.Message):
+                response_payload = type(result).to_json(result)
+            elif isinstance(result, google.protobuf.message.Message):
+                response_payload = MessageToJson(result)
+            else:
+                response_payload = f"{type(result).__name__}: {pickle.dumps(result)}"
+            grpc_response = {
+                "payload": response_payload,
+                "metadata": metadata,
+                "status": "OK",
+            }
+            _LOGGER.debug(
+                f"Received response for {client_call_details.method}.",
+                extra={
+                    "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin",
+                    "rpcName": client_call_details.method,
+                    "response": grpc_response,
+                    "metadata": grpc_response["metadata"],
+                },
+            )
+        return response
+
+
+class InstanceAdminGrpcTransport(InstanceAdminTransport):
+    """gRPC backend transport for InstanceAdmin.
+
+    Cloud Spanner Instance Admin API
+
+    The Cloud Spanner Instance Admin API can be used to create,
+    delete, modify and list instances. Instances are dedicated Cloud
+    Spanner serving and storage resources to be used by Cloud
+    Spanner databases.
+
+    Each instance has a "configuration", which dictates where the
+    serving resources for the Cloud Spanner instance are located
+    (e.g., US-central, Europe). Configurations are created by Google
+    based on resource availability.
+
+    Cloud Spanner billing is based on the instances that exist and
+    their sizes. After an instance exists, there are no additional
+    per-database or per-operation charges for use of the instance
+    (though there may be additional network bandwidth charges).
+    Instances offer isolation: problems with databases in one
+    instance will not affect other instances. However, within an
+    instance databases can affect each other. For example, if one
+    database in an instance receives a lot of requests and consumes
+    most of the instance resources, fewer resources are available
+    for other databases in that instance, and their performance may
+    suffer.
+
+    This class defines the same methods as the primary client, so the
+    primary client can load the underlying transport implementation
+    and call it.
+
+    It sends protocol buffers over the wire using gRPC (which is built on
+    top of HTTP/2); the ``grpcio`` package must be installed.
+    """
+
+    _stubs: Dict[str, Callable]
+
+    def __init__(
+        self,
+        *,
+        host: str = "spanner.googleapis.com",
+        credentials: Optional[ga_credentials.Credentials] = None,
+        credentials_file: Optional[str] = None,
+        scopes: Optional[Sequence[str]] = None,
+        channel: Optional[Union[grpc.Channel, Callable[..., grpc.Channel]]] = None,
+        api_mtls_endpoint: Optional[str] = None,
+        client_cert_source: Optional[Callable[[], Tuple[bytes, bytes]]] = None,
+        ssl_channel_credentials: Optional[grpc.ChannelCredentials] = None,
+        client_cert_source_for_mtls: Optional[Callable[[], Tuple[bytes, bytes]]] = None,
+        quota_project_id: Optional[str] = None,
+        client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO,
+        always_use_jwt_access: Optional[bool] = False,
+        api_audience: Optional[str] = None,
+    ) -> None:
+        """Instantiate the transport.
+
+        Args:
+            host (Optional[str]):
+                The hostname to connect to (default: 'spanner.googleapis.com').
+            credentials (Optional[google.auth.credentials.Credentials]): The
+                authorization credentials to attach to requests. These
+                credentials identify the application to the service; if none
+                are specified, the client will attempt to ascertain the
+                credentials from the environment.
+                This argument is ignored if a ``channel`` instance is provided.
+            credentials_file (Optional[str]): A file with credentials that can
+                be loaded with :func:`google.auth.load_credentials_from_file`.
+                This argument is ignored if a ``channel`` instance is provided.
+            scopes (Optional[Sequence[str]]): A list of scopes. This argument is
+                ignored if a ``channel`` instance is provided.
+            channel (Optional[Union[grpc.Channel, Callable[..., grpc.Channel]]]):
+                A ``Channel`` instance through which to make calls, or a Callable
+                that constructs and returns one. If set to None, ``self.create_channel``
+                is used to create the channel. If a Callable is given, it will be called
+                with the same arguments as used in ``self.create_channel``.
+            api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint.
+                If provided, it overrides the ``host`` argument and tries to create
+                a mutual TLS channel with client SSL credentials from
+                ``client_cert_source`` or application default SSL credentials.
+            client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]):
+                Deprecated. A callback to provide client SSL certificate bytes and
+                private key bytes, both in PEM format. It is ignored if
+                ``api_mtls_endpoint`` is None.
+            ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials
+                for the grpc channel. It is ignored if a ``channel`` instance is provided.
+            client_cert_source_for_mtls (Optional[Callable[[], Tuple[bytes, bytes]]]):
+                A callback to provide client certificate bytes and private key bytes,
+                both in PEM format. It is used to configure a mutual TLS channel. It is
+                ignored if a ``channel`` instance or ``ssl_channel_credentials`` is provided.
+            quota_project_id (Optional[str]): An optional project to use for billing
+                and quota.
+            client_info (google.api_core.gapic_v1.client_info.ClientInfo):
+                The client info used to send a user-agent string along with
+                API requests. If ``None``, then default info will be used.
+                Generally, you only need to set this if you're developing
+                your own client library.
+            always_use_jwt_access (Optional[bool]): Whether self signed JWT should
+                be used for service account credentials.
+
+        Raises:
+          google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport
+              creation failed for any reason.
+ google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + self._grpc_channel = None + self._ssl_channel_credentials = ssl_channel_credentials + self._stubs: Dict[str, Callable] = {} + self._operations_client: Optional[operations_v1.OperationsClient] = None + + if api_mtls_endpoint: + warnings.warn("api_mtls_endpoint is deprecated", DeprecationWarning) + if client_cert_source: + warnings.warn("client_cert_source is deprecated", DeprecationWarning) + + if isinstance(channel, grpc.Channel): + # Ignore credentials if a channel was passed. + credentials = None + self._ignore_credentials = True + # If a channel was explicitly provided, set it. + self._grpc_channel = channel + self._ssl_channel_credentials = None + + else: + if api_mtls_endpoint: + host = api_mtls_endpoint + + # Create SSL credentials with client_cert_source or application + # default SSL credentials. + if client_cert_source: + cert, key = client_cert_source() + self._ssl_channel_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + else: + self._ssl_channel_credentials = SslCredentials().ssl_credentials + + else: + if client_cert_source_for_mtls and not ssl_channel_credentials: + cert, key = client_cert_source_for_mtls() + self._ssl_channel_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + + # The base transport sets the host, credentials and scopes + super().__init__( + host=host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes, + quota_project_id=quota_project_id, + client_info=client_info, + always_use_jwt_access=always_use_jwt_access, + api_audience=api_audience, + ) + + if not self._grpc_channel: + # initialize with the provided callable or the default channel + channel_init = channel or type(self).create_channel + self._grpc_channel = channel_init( + self._host, + # use the credentials which are saved + credentials=self._credentials, + # Set ``credentials_file`` to ``None`` here as + # the credentials that we saved earlier should be used. + credentials_file=None, + scopes=self._scopes, + ssl_credentials=self._ssl_channel_credentials, + quota_project_id=quota_project_id, + options=[ + ("grpc.max_send_message_length", -1), + ("grpc.max_receive_message_length", -1), + ], + ) + + self._interceptor = _LoggingClientInterceptor() + self._logged_channel = grpc.intercept_channel( + self._grpc_channel, self._interceptor + ) + + # Wrap messages. This must be done after self._logged_channel exists + self._prep_wrapped_messages(client_info) + + @classmethod + def create_channel( + cls, + host: str = "spanner.googleapis.com", + credentials: Optional[ga_credentials.Credentials] = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + quota_project_id: Optional[str] = None, + **kwargs, + ) -> grpc.Channel: + """Create and return a gRPC channel object. + Args: + host (Optional[str]): The host for the channel to use. + credentials (Optional[~.Credentials]): The + authorization credentials to attach to requests. These + credentials identify this application to the service. If + none are specified, the client will attempt to ascertain + the credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is mutually exclusive with credentials. 
+            scopes (Optional[Sequence[str]]): An optional list of scopes needed for this
+                service. These are only used when credentials are not specified and
+                are passed to :func:`google.auth.default`.
+            quota_project_id (Optional[str]): An optional project to use for billing
+                and quota.
+            kwargs (Optional[dict]): Keyword arguments, which are passed to the
+                channel creation.
+        Returns:
+            grpc.Channel: A gRPC channel object.
+
+        Raises:
+            google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials``
+                and ``credentials_file`` are passed.
+        """
+
+        return grpc_helpers.create_channel(
+            host,
+            credentials=credentials,
+            credentials_file=credentials_file,
+            quota_project_id=quota_project_id,
+            default_scopes=cls.AUTH_SCOPES,
+            scopes=scopes,
+            default_host=cls.DEFAULT_HOST,
+            **kwargs,
+        )
+
+    @property
+    def grpc_channel(self) -> grpc.Channel:
+        """Return the channel designed to connect to this service."""
+        return self._grpc_channel
+
+    @property
+    def operations_client(self) -> operations_v1.OperationsClient:
+        """Create the client designed to process long-running operations.
+
+        This property caches on the instance; repeated calls return the same
+        client.
+        """
+        # Quick check: Only create a new client if we do not already have one.
+        if self._operations_client is None:
+            self._operations_client = operations_v1.OperationsClient(
+                self._logged_channel
+            )
+
+        # Return the client from cache.
+        return self._operations_client
+
+    @property
+    def list_instance_configs(
+        self,
+    ) -> Callable[
+        [spanner_instance_admin.ListInstanceConfigsRequest],
+        spanner_instance_admin.ListInstanceConfigsResponse,
+    ]:
+        r"""Return a callable for the list instance configs method over gRPC.
+
+        Lists the supported instance configurations for a
+        given project.
+        Returns both Google-managed configurations and
+        user-managed configurations.
+
+        Returns:
+            Callable[[~.ListInstanceConfigsRequest],
+                    ~.ListInstanceConfigsResponse]:
+                A function that, when called, will call the underlying RPC
+                on the server.
+        """
+        # Generate a "stub function" on-the-fly which will actually make
+        # the request.
+        # gRPC handles serialization and deserialization, so we just need
+        # to pass in the functions for each.
+        if "list_instance_configs" not in self._stubs:
+            self._stubs["list_instance_configs"] = self._logged_channel.unary_unary(
+                "/google.spanner.admin.instance.v1.InstanceAdmin/ListInstanceConfigs",
+                request_serializer=spanner_instance_admin.ListInstanceConfigsRequest.serialize,
+                response_deserializer=spanner_instance_admin.ListInstanceConfigsResponse.deserialize,
+            )
+        return self._stubs["list_instance_configs"]
+
+    @property
+    def get_instance_config(
+        self,
+    ) -> Callable[
+        [spanner_instance_admin.GetInstanceConfigRequest],
+        spanner_instance_admin.InstanceConfig,
+    ]:
+        r"""Return a callable for the get instance config method over gRPC.
+
+        Gets information about a particular instance
+        configuration.
+
+        Returns:
+            Callable[[~.GetInstanceConfigRequest],
+                    ~.InstanceConfig]:
+                A function that, when called, will call the underlying RPC
+                on the server.
+        """
+        # Generate a "stub function" on-the-fly which will actually make
+        # the request.
+        # gRPC handles serialization and deserialization, so we just need
+        # to pass in the functions for each.
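+        # Illustrative sketch (not generated code; project and config names
+        # below are hypothetical): callers normally reach this stub through
+        # the client layer rather than the transport, e.g.:
+        #
+        #     client = InstanceAdminClient()
+        #     config = client.get_instance_config(
+        #         name="projects/my-project/instanceConfigs/regional-us-central1"
+        #     )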
+        if "get_instance_config" not in self._stubs:
+            self._stubs["get_instance_config"] = self._logged_channel.unary_unary(
+                "/google.spanner.admin.instance.v1.InstanceAdmin/GetInstanceConfig",
+                request_serializer=spanner_instance_admin.GetInstanceConfigRequest.serialize,
+                response_deserializer=spanner_instance_admin.InstanceConfig.deserialize,
+            )
+        return self._stubs["get_instance_config"]
+
+    @property
+    def create_instance_config(
+        self,
+    ) -> Callable[
+        [spanner_instance_admin.CreateInstanceConfigRequest], operations_pb2.Operation
+    ]:
+        r"""Return a callable for the create instance config method over gRPC.
+
+        Creates an instance configuration and begins preparing it to be
+        used. The returned long-running operation can be used to track
+        the progress of preparing the new instance configuration. The
+        instance configuration name is assigned by the caller. If the
+        named instance configuration already exists,
+        ``CreateInstanceConfig`` returns ``ALREADY_EXISTS``.
+
+        Immediately after the request returns:
+
+        -  The instance configuration is readable via the API, with all
+           requested attributes. The instance configuration's
+           [reconciling][google.spanner.admin.instance.v1.InstanceConfig.reconciling]
+           field is set to true. Its state is ``CREATING``.
+
+        While the operation is pending:
+
+        -  Cancelling the operation renders the instance configuration
+           immediately unreadable via the API.
+        -  Except for deleting the creating resource, all other attempts
+           to modify the instance configuration are rejected.
+
+        Upon completion of the returned operation:
+
+        -  Instances can be created using the instance configuration.
+        -  The instance configuration's
+           [reconciling][google.spanner.admin.instance.v1.InstanceConfig.reconciling]
+           field becomes false. Its state becomes ``READY``.
+
+        The returned long-running operation will have a name of the
+        format ``<instance_config_name>/operations/<operation_id>`` and
+        can be used to track creation of the instance configuration. The
+        metadata field type is
+        [CreateInstanceConfigMetadata][google.spanner.admin.instance.v1.CreateInstanceConfigMetadata].
+        The response field type is
+        [InstanceConfig][google.spanner.admin.instance.v1.InstanceConfig],
+        if successful.
+
+        Authorization requires ``spanner.instanceConfigs.create``
+        permission on the resource
+        [parent][google.spanner.admin.instance.v1.CreateInstanceConfigRequest.parent].
+
+        Returns:
+            Callable[[~.CreateInstanceConfigRequest],
+                    ~.Operation]:
+                A function that, when called, will call the underlying RPC
+                on the server.
+        """
+        # Generate a "stub function" on-the-fly which will actually make
+        # the request.
+        # gRPC handles serialization and deserialization, so we just need
+        # to pass in the functions for each.
+        if "create_instance_config" not in self._stubs:
+            self._stubs["create_instance_config"] = self._logged_channel.unary_unary(
+                "/google.spanner.admin.instance.v1.InstanceAdmin/CreateInstanceConfig",
+                request_serializer=spanner_instance_admin.CreateInstanceConfigRequest.serialize,
+                response_deserializer=operations_pb2.Operation.FromString,
+            )
+        return self._stubs["create_instance_config"]
+
+    @property
+    def update_instance_config(
+        self,
+    ) -> Callable[
+        [spanner_instance_admin.UpdateInstanceConfigRequest], operations_pb2.Operation
+    ]:
+        r"""Return a callable for the update instance config method over gRPC.
+
+        Updates an instance configuration. The returned long-running
+        operation can be used to track the progress of updating the
+        instance.
+        If the named instance configuration does not exist,
+        returns ``NOT_FOUND``.
+
+        Only user-managed configurations can be updated.
+
+        Immediately after the request returns:
+
+        -  The instance configuration's
+           [reconciling][google.spanner.admin.instance.v1.InstanceConfig.reconciling]
+           field is set to true.
+
+        While the operation is pending:
+
+        -  Cancelling the operation sets its metadata's
+           [cancel_time][google.spanner.admin.instance.v1.UpdateInstanceConfigMetadata.cancel_time].
+           The operation is guaranteed to succeed at undoing all changes,
+           after which point it terminates with a ``CANCELLED`` status.
+        -  All other attempts to modify the instance configuration are
+           rejected.
+        -  Reading the instance configuration via the API continues to
+           give the pre-request values.
+
+        Upon completion of the returned operation:
+
+        -  Creating instances using the instance configuration uses the
+           new values.
+        -  The new values of the instance configuration are readable via
+           the API.
+        -  The instance configuration's
+           [reconciling][google.spanner.admin.instance.v1.InstanceConfig.reconciling]
+           field becomes false.
+
+        The returned long-running operation will have a name of the
+        format ``<instance_config_name>/operations/<operation_id>`` and
+        can be used to track the instance configuration modification.
+        The metadata field type is
+        [UpdateInstanceConfigMetadata][google.spanner.admin.instance.v1.UpdateInstanceConfigMetadata].
+        The response field type is
+        [InstanceConfig][google.spanner.admin.instance.v1.InstanceConfig],
+        if successful.
+
+        Authorization requires ``spanner.instanceConfigs.update``
+        permission on the resource
+        [name][google.spanner.admin.instance.v1.InstanceConfig.name].
+
+        Returns:
+            Callable[[~.UpdateInstanceConfigRequest],
+                    ~.Operation]:
+                A function that, when called, will call the underlying RPC
+                on the server.
+        """
+        # Generate a "stub function" on-the-fly which will actually make
+        # the request.
+        # gRPC handles serialization and deserialization, so we just need
+        # to pass in the functions for each.
+        if "update_instance_config" not in self._stubs:
+            self._stubs["update_instance_config"] = self._logged_channel.unary_unary(
+                "/google.spanner.admin.instance.v1.InstanceAdmin/UpdateInstanceConfig",
+                request_serializer=spanner_instance_admin.UpdateInstanceConfigRequest.serialize,
+                response_deserializer=operations_pb2.Operation.FromString,
+            )
+        return self._stubs["update_instance_config"]
+
+    @property
+    def delete_instance_config(
+        self,
+    ) -> Callable[
+        [spanner_instance_admin.DeleteInstanceConfigRequest], empty_pb2.Empty
+    ]:
+        r"""Return a callable for the delete instance config method over gRPC.
+
+        Deletes the instance configuration. Deletion is only allowed
+        when no instances are using the configuration. If any instances
+        are using the configuration, returns ``FAILED_PRECONDITION``.
+
+        Only user-managed configurations can be deleted.
+
+        Authorization requires ``spanner.instanceConfigs.delete``
+        permission on the resource
+        [name][google.spanner.admin.instance.v1.InstanceConfig.name].
+
+        Returns:
+            Callable[[~.DeleteInstanceConfigRequest],
+                    ~.Empty]:
+                A function that, when called, will call the underlying RPC
+                on the server.
+        """
+        # Generate a "stub function" on-the-fly which will actually make
+        # the request.
+        # gRPC handles serialization and deserialization, so we just need
+        # to pass in the functions for each.
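+        # Illustrative sketch (hypothetical names): calling the returned stub
+        # directly with a raw request message looks like:
+        #
+        #     request = spanner_instance_admin.DeleteInstanceConfigRequest(
+        #         name="projects/my-project/instanceConfigs/custom-config"
+        #     )
+        #     transport.delete_instance_config(request)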
+        if "delete_instance_config" not in self._stubs:
+            self._stubs["delete_instance_config"] = self._logged_channel.unary_unary(
+                "/google.spanner.admin.instance.v1.InstanceAdmin/DeleteInstanceConfig",
+                request_serializer=spanner_instance_admin.DeleteInstanceConfigRequest.serialize,
+                response_deserializer=empty_pb2.Empty.FromString,
+            )
+        return self._stubs["delete_instance_config"]
+
+    @property
+    def list_instance_config_operations(
+        self,
+    ) -> Callable[
+        [spanner_instance_admin.ListInstanceConfigOperationsRequest],
+        spanner_instance_admin.ListInstanceConfigOperationsResponse,
+    ]:
+        r"""Return a callable for the list instance config
+        operations method over gRPC.
+
+        Lists the user-managed instance configuration long-running
+        operations in the given project. An instance configuration
+        operation has a name of the form
+        ``projects/<project>/instanceConfigs/<instance_config>/operations/<operation>``.
+        The long-running operation metadata field type
+        ``metadata.type_url`` describes the type of the metadata.
+        Operations returned include those that have
+        completed/failed/canceled within the last 7 days, and pending
+        operations. Operations returned are ordered by
+        ``operation.metadata.value.start_time`` in descending order
+        starting from the most recently started operation.
+
+        Returns:
+            Callable[[~.ListInstanceConfigOperationsRequest],
+                    ~.ListInstanceConfigOperationsResponse]:
+                A function that, when called, will call the underlying RPC
+                on the server.
+        """
+        # Generate a "stub function" on-the-fly which will actually make
+        # the request.
+        # gRPC handles serialization and deserialization, so we just need
+        # to pass in the functions for each.
+        if "list_instance_config_operations" not in self._stubs:
+            self._stubs[
+                "list_instance_config_operations"
+            ] = self._logged_channel.unary_unary(
+                "/google.spanner.admin.instance.v1.InstanceAdmin/ListInstanceConfigOperations",
+                request_serializer=spanner_instance_admin.ListInstanceConfigOperationsRequest.serialize,
+                response_deserializer=spanner_instance_admin.ListInstanceConfigOperationsResponse.deserialize,
+            )
+        return self._stubs["list_instance_config_operations"]
+
+    @property
+    def list_instances(
+        self,
+    ) -> Callable[
+        [spanner_instance_admin.ListInstancesRequest],
+        spanner_instance_admin.ListInstancesResponse,
+    ]:
+        r"""Return a callable for the list instances method over gRPC.
+
+        Lists all instances in the given project.
+
+        Returns:
+            Callable[[~.ListInstancesRequest],
+                    ~.ListInstancesResponse]:
+                A function that, when called, will call the underlying RPC
+                on the server.
+        """
+        # Generate a "stub function" on-the-fly which will actually make
+        # the request.
+        # gRPC handles serialization and deserialization, so we just need
+        # to pass in the functions for each.
+        if "list_instances" not in self._stubs:
+            self._stubs["list_instances"] = self._logged_channel.unary_unary(
+                "/google.spanner.admin.instance.v1.InstanceAdmin/ListInstances",
+                request_serializer=spanner_instance_admin.ListInstancesRequest.serialize,
+                response_deserializer=spanner_instance_admin.ListInstancesResponse.deserialize,
+            )
+        return self._stubs["list_instances"]
+
+    @property
+    def list_instance_partitions(
+        self,
+    ) -> Callable[
+        [spanner_instance_admin.ListInstancePartitionsRequest],
+        spanner_instance_admin.ListInstancePartitionsResponse,
+    ]:
+        r"""Return a callable for the list instance partitions method over gRPC.
+
+        Lists all instance partitions for the given instance.
+
+        Returns:
+            Callable[[~.ListInstancePartitionsRequest],
+                    ~.ListInstancePartitionsResponse]:
+                A function that, when called, will call the underlying RPC
+                on the server.
+        """
+        # Generate a "stub function" on-the-fly which will actually make
+        # the request.
+        # gRPC handles serialization and deserialization, so we just need
+        # to pass in the functions for each.
+        if "list_instance_partitions" not in self._stubs:
+            self._stubs["list_instance_partitions"] = self._logged_channel.unary_unary(
+                "/google.spanner.admin.instance.v1.InstanceAdmin/ListInstancePartitions",
+                request_serializer=spanner_instance_admin.ListInstancePartitionsRequest.serialize,
+                response_deserializer=spanner_instance_admin.ListInstancePartitionsResponse.deserialize,
+            )
+        return self._stubs["list_instance_partitions"]
+
+    @property
+    def get_instance(
+        self,
+    ) -> Callable[
+        [spanner_instance_admin.GetInstanceRequest], spanner_instance_admin.Instance
+    ]:
+        r"""Return a callable for the get instance method over gRPC.
+
+        Gets information about a particular instance.
+
+        Returns:
+            Callable[[~.GetInstanceRequest],
+                    ~.Instance]:
+                A function that, when called, will call the underlying RPC
+                on the server.
+        """
+        # Generate a "stub function" on-the-fly which will actually make
+        # the request.
+        # gRPC handles serialization and deserialization, so we just need
+        # to pass in the functions for each.
+        if "get_instance" not in self._stubs:
+            self._stubs["get_instance"] = self._logged_channel.unary_unary(
+                "/google.spanner.admin.instance.v1.InstanceAdmin/GetInstance",
+                request_serializer=spanner_instance_admin.GetInstanceRequest.serialize,
+                response_deserializer=spanner_instance_admin.Instance.deserialize,
+            )
+        return self._stubs["get_instance"]
+
+    @property
+    def create_instance(
+        self,
+    ) -> Callable[
+        [spanner_instance_admin.CreateInstanceRequest], operations_pb2.Operation
+    ]:
+        r"""Return a callable for the create instance method over gRPC.
+
+        Creates an instance and begins preparing it to begin serving.
+        The returned long-running operation can be used to track the
+        progress of preparing the new instance. The instance name is
+        assigned by the caller. If the named instance already exists,
+        ``CreateInstance`` returns ``ALREADY_EXISTS``.
+
+        Immediately upon completion of this request:
+
+        -  The instance is readable via the API, with all requested
+           attributes but no allocated resources. Its state is
+           ``CREATING``.
+
+        Until completion of the returned operation:
+
+        -  Cancelling the operation renders the instance immediately
+           unreadable via the API.
+        -  The instance can be deleted.
+        -  All other attempts to modify the instance are rejected.
+
+        Upon completion of the returned operation:
+
+        -  Billing for all successfully-allocated resources begins (some
+           types may have lower than the requested levels).
+        -  Databases can be created in the instance.
+        -  The instance's allocated resource levels are readable via the
+           API.
+        -  The instance's state becomes ``READY``.
+
+        The returned long-running operation will have a name of the
+        format ``<instance_name>/operations/<operation_id>`` and can be
+        used to track creation of the instance. The metadata field type
+        is
+        [CreateInstanceMetadata][google.spanner.admin.instance.v1.CreateInstanceMetadata].
+        The response field type is
+        [Instance][google.spanner.admin.instance.v1.Instance], if
+        successful.
+
+        Returns:
+            Callable[[~.CreateInstanceRequest],
+                    ~.Operation]:
+                A function that, when called, will call the underlying RPC
+                on the server.
+        """
+        # Generate a "stub function" on-the-fly which will actually make
+        # the request.
+        # gRPC handles serialization and deserialization, so we just need
+        # to pass in the functions for each.
+        if "create_instance" not in self._stubs:
+            self._stubs["create_instance"] = self._logged_channel.unary_unary(
+                "/google.spanner.admin.instance.v1.InstanceAdmin/CreateInstance",
+                request_serializer=spanner_instance_admin.CreateInstanceRequest.serialize,
+                response_deserializer=operations_pb2.Operation.FromString,
+            )
+        return self._stubs["create_instance"]
+
+    @property
+    def update_instance(
+        self,
+    ) -> Callable[
+        [spanner_instance_admin.UpdateInstanceRequest], operations_pb2.Operation
+    ]:
+        r"""Return a callable for the update instance method over gRPC.
+
+        Updates an instance, and begins allocating or releasing
+        resources as requested. The returned long-running operation can
+        be used to track the progress of updating the instance. If the
+        named instance does not exist, returns ``NOT_FOUND``.
+
+        Immediately upon completion of this request:
+
+        -  For resource types for which a decrease in the instance's
+           allocation has been requested, billing is based on the
+           newly-requested level.
+
+        Until completion of the returned operation:
+
+        -  Cancelling the operation sets its metadata's
+           [cancel_time][google.spanner.admin.instance.v1.UpdateInstanceMetadata.cancel_time],
+           and begins restoring resources to their pre-request values.
+           The operation is guaranteed to succeed at undoing all resource
+           changes, after which point it terminates with a ``CANCELLED``
+           status.
+        -  All other attempts to modify the instance are rejected.
+        -  Reading the instance via the API continues to give the
+           pre-request resource levels.
+
+        Upon completion of the returned operation:
+
+        -  Billing begins for all successfully-allocated resources (some
+           types may have lower than the requested levels).
+        -  All newly-reserved resources are available for serving the
+           instance's tables.
+        -  The instance's new resource levels are readable via the API.
+
+        The returned long-running operation will have a name of the
+        format ``<instance_name>/operations/<operation_id>`` and can be
+        used to track the instance modification. The metadata field type
+        is
+        [UpdateInstanceMetadata][google.spanner.admin.instance.v1.UpdateInstanceMetadata].
+        The response field type is
+        [Instance][google.spanner.admin.instance.v1.Instance], if
+        successful.
+
+        Authorization requires ``spanner.instances.update`` permission
+        on the resource
+        [name][google.spanner.admin.instance.v1.Instance.name].
+
+        Returns:
+            Callable[[~.UpdateInstanceRequest],
+                    ~.Operation]:
+                A function that, when called, will call the underlying RPC
+                on the server.
+        """
+        # Generate a "stub function" on-the-fly which will actually make
+        # the request.
+        # gRPC handles serialization and deserialization, so we just need
+        # to pass in the functions for each.
+        if "update_instance" not in self._stubs:
+            self._stubs["update_instance"] = self._logged_channel.unary_unary(
+                "/google.spanner.admin.instance.v1.InstanceAdmin/UpdateInstance",
+                request_serializer=spanner_instance_admin.UpdateInstanceRequest.serialize,
+                response_deserializer=operations_pb2.Operation.FromString,
+            )
+        return self._stubs["update_instance"]
+
+    @property
+    def delete_instance(
+        self,
+    ) -> Callable[[spanner_instance_admin.DeleteInstanceRequest], empty_pb2.Empty]:
+        r"""Return a callable for the delete instance method over gRPC.
+
+        Deletes an instance.
+ + Immediately upon completion of the request: + + - Billing ceases for all of the instance's reserved resources. + + Soon afterward: + + - The instance and *all of its databases* immediately and + irrevocably disappear from the API. All data in the databases + is permanently deleted. + + Returns: + Callable[[~.DeleteInstanceRequest], + ~.Empty]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "delete_instance" not in self._stubs: + self._stubs["delete_instance"] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/DeleteInstance", + request_serializer=spanner_instance_admin.DeleteInstanceRequest.serialize, + response_deserializer=empty_pb2.Empty.FromString, + ) + return self._stubs["delete_instance"] + + @property + def set_iam_policy( + self, + ) -> Callable[[iam_policy_pb2.SetIamPolicyRequest], policy_pb2.Policy]: + r"""Return a callable for the set iam policy method over gRPC. + + Sets the access control policy on an instance resource. Replaces + any existing policy. + + Authorization requires ``spanner.instances.setIamPolicy`` on + [resource][google.iam.v1.SetIamPolicyRequest.resource]. + + Returns: + Callable[[~.SetIamPolicyRequest], + ~.Policy]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "set_iam_policy" not in self._stubs: + self._stubs["set_iam_policy"] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/SetIamPolicy", + request_serializer=iam_policy_pb2.SetIamPolicyRequest.SerializeToString, + response_deserializer=policy_pb2.Policy.FromString, + ) + return self._stubs["set_iam_policy"] + + @property + def get_iam_policy( + self, + ) -> Callable[[iam_policy_pb2.GetIamPolicyRequest], policy_pb2.Policy]: + r"""Return a callable for the get iam policy method over gRPC. + + Gets the access control policy for an instance resource. Returns + an empty policy if an instance exists but does not have a policy + set. + + Authorization requires ``spanner.instances.getIamPolicy`` on + [resource][google.iam.v1.GetIamPolicyRequest.resource]. + + Returns: + Callable[[~.GetIamPolicyRequest], + ~.Policy]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "get_iam_policy" not in self._stubs: + self._stubs["get_iam_policy"] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/GetIamPolicy", + request_serializer=iam_policy_pb2.GetIamPolicyRequest.SerializeToString, + response_deserializer=policy_pb2.Policy.FromString, + ) + return self._stubs["get_iam_policy"] + + @property + def test_iam_permissions( + self, + ) -> Callable[ + [iam_policy_pb2.TestIamPermissionsRequest], + iam_policy_pb2.TestIamPermissionsResponse, + ]: + r"""Return a callable for the test iam permissions method over gRPC. + + Returns permissions that the caller has on the specified + instance resource. 
+ + Attempting this RPC on a non-existent Cloud Spanner instance + resource will result in a NOT_FOUND error if the user has + ``spanner.instances.list`` permission on the containing Google + Cloud Project. Otherwise returns an empty set of permissions. + + Returns: + Callable[[~.TestIamPermissionsRequest], + ~.TestIamPermissionsResponse]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "test_iam_permissions" not in self._stubs: + self._stubs["test_iam_permissions"] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/TestIamPermissions", + request_serializer=iam_policy_pb2.TestIamPermissionsRequest.SerializeToString, + response_deserializer=iam_policy_pb2.TestIamPermissionsResponse.FromString, + ) + return self._stubs["test_iam_permissions"] + + @property + def get_instance_partition( + self, + ) -> Callable[ + [spanner_instance_admin.GetInstancePartitionRequest], + spanner_instance_admin.InstancePartition, + ]: + r"""Return a callable for the get instance partition method over gRPC. + + Gets information about a particular instance + partition. + + Returns: + Callable[[~.GetInstancePartitionRequest], + ~.InstancePartition]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "get_instance_partition" not in self._stubs: + self._stubs["get_instance_partition"] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/GetInstancePartition", + request_serializer=spanner_instance_admin.GetInstancePartitionRequest.serialize, + response_deserializer=spanner_instance_admin.InstancePartition.deserialize, + ) + return self._stubs["get_instance_partition"] + + @property + def create_instance_partition( + self, + ) -> Callable[ + [spanner_instance_admin.CreateInstancePartitionRequest], + operations_pb2.Operation, + ]: + r"""Return a callable for the create instance partition method over gRPC. + + Creates an instance partition and begins preparing it to be + used. The returned long-running operation can be used to track + the progress of preparing the new instance partition. The + instance partition name is assigned by the caller. If the named + instance partition already exists, ``CreateInstancePartition`` + returns ``ALREADY_EXISTS``. + + Immediately upon completion of this request: + + - The instance partition is readable via the API, with all + requested attributes but no allocated resources. Its state is + ``CREATING``. + + Until completion of the returned operation: + + - Cancelling the operation renders the instance partition + immediately unreadable via the API. + - The instance partition can be deleted. + - All other attempts to modify the instance partition are + rejected. + + Upon completion of the returned operation: + + - Billing for all successfully-allocated resources begins (some + types may have lower than the requested levels). + - Databases can start using this instance partition. + - The instance partition's allocated resource levels are + readable via the API. + - The instance partition's state becomes ``READY``. 
+
+        The returned long-running operation will have a name of the
+        format ``<instance_partition_name>/operations/<operation_id>``
+        and can be used to track creation of the instance partition. The
+        metadata field type is
+        [CreateInstancePartitionMetadata][google.spanner.admin.instance.v1.CreateInstancePartitionMetadata].
+        The response field type is
+        [InstancePartition][google.spanner.admin.instance.v1.InstancePartition],
+        if successful.
+
+        Returns:
+            Callable[[~.CreateInstancePartitionRequest],
+                    ~.Operation]:
+                A function that, when called, will call the underlying RPC
+                on the server.
+        """
+        # Generate a "stub function" on-the-fly which will actually make
+        # the request.
+        # gRPC handles serialization and deserialization, so we just need
+        # to pass in the functions for each.
+        if "create_instance_partition" not in self._stubs:
+            self._stubs["create_instance_partition"] = self._logged_channel.unary_unary(
+                "/google.spanner.admin.instance.v1.InstanceAdmin/CreateInstancePartition",
+                request_serializer=spanner_instance_admin.CreateInstancePartitionRequest.serialize,
+                response_deserializer=operations_pb2.Operation.FromString,
+            )
+        return self._stubs["create_instance_partition"]
+
+    @property
+    def delete_instance_partition(
+        self,
+    ) -> Callable[
+        [spanner_instance_admin.DeleteInstancePartitionRequest], empty_pb2.Empty
+    ]:
+        r"""Return a callable for the delete instance partition method over gRPC.
+
+        Deletes an existing instance partition. Requires that the
+        instance partition is not used by any database or backup and is
+        not the default instance partition of an instance.
+
+        Authorization requires ``spanner.instancePartitions.delete``
+        permission on the resource
+        [name][google.spanner.admin.instance.v1.InstancePartition.name].
+
+        Returns:
+            Callable[[~.DeleteInstancePartitionRequest],
+                    ~.Empty]:
+                A function that, when called, will call the underlying RPC
+                on the server.
+        """
+        # Generate a "stub function" on-the-fly which will actually make
+        # the request.
+        # gRPC handles serialization and deserialization, so we just need
+        # to pass in the functions for each.
+        if "delete_instance_partition" not in self._stubs:
+            self._stubs["delete_instance_partition"] = self._logged_channel.unary_unary(
+                "/google.spanner.admin.instance.v1.InstanceAdmin/DeleteInstancePartition",
+                request_serializer=spanner_instance_admin.DeleteInstancePartitionRequest.serialize,
+                response_deserializer=empty_pb2.Empty.FromString,
+            )
+        return self._stubs["delete_instance_partition"]
+
+    @property
+    def update_instance_partition(
+        self,
+    ) -> Callable[
+        [spanner_instance_admin.UpdateInstancePartitionRequest],
+        operations_pb2.Operation,
+    ]:
+        r"""Return a callable for the update instance partition method over gRPC.
+
+        Updates an instance partition, and begins allocating or
+        releasing resources as requested. The returned long-running
+        operation can be used to track the progress of updating the
+        instance partition. If the named instance partition does not
+        exist, returns ``NOT_FOUND``.
+
+        Immediately upon completion of this request:
+
+        -  For resource types for which a decrease in the instance
+           partition's allocation has been requested, billing is based on
+           the newly-requested level.
+
+        Until completion of the returned operation:
+
+        -  Cancelling the operation sets its metadata's
+           [cancel_time][google.spanner.admin.instance.v1.UpdateInstancePartitionMetadata.cancel_time],
+           and begins restoring resources to their pre-request values.
+           The operation is guaranteed to succeed at undoing all resource
+           changes, after which point it terminates with a ``CANCELLED``
+           status.
+        -  All other attempts to modify the instance partition are
+           rejected.
+        -  Reading the instance partition via the API continues to give
+           the pre-request resource levels.
+
+        Upon completion of the returned operation:
+
+        -  Billing begins for all successfully-allocated resources (some
+           types may have lower than the requested levels).
+        -  All newly-reserved resources are available for serving the
+           instance partition's tables.
+        -  The instance partition's new resource levels are readable via
+           the API.
+
+        The returned long-running operation will have a name of the
+        format ``<instance_partition_name>/operations/<operation_id>``
+        and can be used to track the instance partition modification.
+        The metadata field type is
+        [UpdateInstancePartitionMetadata][google.spanner.admin.instance.v1.UpdateInstancePartitionMetadata].
+        The response field type is
+        [InstancePartition][google.spanner.admin.instance.v1.InstancePartition],
+        if successful.
+
+        Authorization requires ``spanner.instancePartitions.update``
+        permission on the resource
+        [name][google.spanner.admin.instance.v1.InstancePartition.name].
+
+        Returns:
+            Callable[[~.UpdateInstancePartitionRequest],
+                    ~.Operation]:
+                A function that, when called, will call the underlying RPC
+                on the server.
+        """
+        # Generate a "stub function" on-the-fly which will actually make
+        # the request.
+        # gRPC handles serialization and deserialization, so we just need
+        # to pass in the functions for each.
+        if "update_instance_partition" not in self._stubs:
+            self._stubs["update_instance_partition"] = self._logged_channel.unary_unary(
+                "/google.spanner.admin.instance.v1.InstanceAdmin/UpdateInstancePartition",
+                request_serializer=spanner_instance_admin.UpdateInstancePartitionRequest.serialize,
+                response_deserializer=operations_pb2.Operation.FromString,
+            )
+        return self._stubs["update_instance_partition"]
+
+    @property
+    def list_instance_partition_operations(
+        self,
+    ) -> Callable[
+        [spanner_instance_admin.ListInstancePartitionOperationsRequest],
+        spanner_instance_admin.ListInstancePartitionOperationsResponse,
+    ]:
+        r"""Return a callable for the list instance partition
+        operations method over gRPC.
+
+        Lists instance partition long-running operations in the given
+        instance. An instance partition operation has a name of the form
+        ``projects/<project>/instances/<instance>/instancePartitions/<instance_partition>/operations/<operation>``.
+        The long-running operation metadata field type
+        ``metadata.type_url`` describes the type of the metadata.
+        Operations returned include those that have
+        completed/failed/canceled within the last 7 days, and pending
+        operations. Operations returned are ordered by
+        ``operation.metadata.value.start_time`` in descending order
+        starting from the most recently started operation.
+
+        Authorization requires
+        ``spanner.instancePartitionOperations.list`` permission on the
+        resource
+        [parent][google.spanner.admin.instance.v1.ListInstancePartitionOperationsRequest.parent].
+
+        Returns:
+            Callable[[~.ListInstancePartitionOperationsRequest],
+                    ~.ListInstancePartitionOperationsResponse]:
+                A function that, when called, will call the underlying RPC
+                on the server.
+        """
+        # Generate a "stub function" on-the-fly which will actually make
+        # the request.
+        # gRPC handles serialization and deserialization, so we just need
+        # to pass in the functions for each.
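+        # Illustrative sketch (hypothetical names): the stub takes the raw
+        # request message; retries and pagination are handled by the client
+        # layer, e.g.:
+        #
+        #     request = spanner_instance_admin.ListInstancePartitionOperationsRequest(
+        #         parent="projects/my-project/instances/my-instance"
+        #     )
+        #     response = transport.list_instance_partition_operations(request)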
+        if "list_instance_partition_operations" not in self._stubs:
+            self._stubs[
+                "list_instance_partition_operations"
+            ] = self._logged_channel.unary_unary(
+                "/google.spanner.admin.instance.v1.InstanceAdmin/ListInstancePartitionOperations",
+                request_serializer=spanner_instance_admin.ListInstancePartitionOperationsRequest.serialize,
+                response_deserializer=spanner_instance_admin.ListInstancePartitionOperationsResponse.deserialize,
+            )
+        return self._stubs["list_instance_partition_operations"]
+
+    @property
+    def move_instance(
+        self,
+    ) -> Callable[
+        [spanner_instance_admin.MoveInstanceRequest], operations_pb2.Operation
+    ]:
+        r"""Return a callable for the move instance method over gRPC.
+
+        Moves an instance to the target instance configuration. You can
+        use the returned long-running operation to track the progress of
+        moving the instance.
+
+        ``MoveInstance`` returns ``FAILED_PRECONDITION`` if the instance
+        meets any of the following criteria:
+
+        -  Is undergoing a move to a different instance configuration
+        -  Has backups
+        -  Has an ongoing update
+        -  Contains any CMEK-enabled databases
+        -  Is a free trial instance
+
+        While the operation is pending:
+
+        -  All other attempts to modify the instance, including changes
+           to its compute capacity, are rejected.
+
+        -  The following database and backup admin operations are
+           rejected:
+
+           -  ``DatabaseAdmin.CreateDatabase``
+           -  ``DatabaseAdmin.UpdateDatabaseDdl`` (disabled if
+              default_leader is specified in the request.)
+           -  ``DatabaseAdmin.RestoreDatabase``
+           -  ``DatabaseAdmin.CreateBackup``
+           -  ``DatabaseAdmin.CopyBackup``
+
+        -  Both the source and target instance configurations are subject
+           to hourly compute and storage charges.
+
+        -  The instance might experience higher read-write latencies and
+           a higher transaction abort rate. However, moving an instance
+           doesn't cause any downtime.
+
+        The returned long-running operation has a name of the format
+        ``<instance_name>/operations/<operation_id>`` and can be used to
+        track the move instance operation. The metadata field type is
+        [MoveInstanceMetadata][google.spanner.admin.instance.v1.MoveInstanceMetadata].
+        The response field type is
+        [Instance][google.spanner.admin.instance.v1.Instance], if
+        successful. Cancelling the operation sets its metadata's
+        [cancel_time][google.spanner.admin.instance.v1.MoveInstanceMetadata.cancel_time].
+        Cancellation is not immediate because it involves moving any
+        data previously moved to the target instance configuration back
+        to the original instance configuration. You can use this
+        operation to track the progress of the cancellation. Upon
+        successful completion of the cancellation, the operation
+        terminates with ``CANCELLED`` status.
+
+        If not cancelled, upon completion of the returned operation:
+
+        -  The instance successfully moves to the target instance
+           configuration.
+        -  You are billed for compute and storage in target instance
+           configuration.
+
+        Authorization requires the ``spanner.instances.update``
+        permission on the resource
+        [instance][google.spanner.admin.instance.v1.Instance].
+
+        For more details, see `Move an
+        instance <https://cloud.google.com/spanner/docs/move-instance>`__.
+
+        Returns:
+            Callable[[~.MoveInstanceRequest],
+                    ~.Operation]:
+                A function that, when called, will call the underlying RPC
+                on the server.
+        """
+        # Generate a "stub function" on-the-fly which will actually make
+        # the request.
+        # gRPC handles serialization and deserialization, so we just need
+        # to pass in the functions for each.
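+        # Illustrative sketch (hypothetical names): MoveInstance returns a
+        # long-running operation, so callers typically go through the client
+        # and poll the operation, e.g.:
+        #
+        #     operation = client.move_instance(
+        #         spanner_instance_admin.MoveInstanceRequest(
+        #             name="projects/my-project/instances/my-instance",
+        #             target_config="projects/my-project/instanceConfigs/nam3",
+        #         )
+        #     )
+        #     instance = operation.result()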
+ if "move_instance" not in self._stubs: + self._stubs["move_instance"] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/MoveInstance", + request_serializer=spanner_instance_admin.MoveInstanceRequest.serialize, + response_deserializer=operations_pb2.Operation.FromString, + ) + return self._stubs["move_instance"] + + def close(self): + self._logged_channel.close() + + @property + def delete_operation( + self, + ) -> Callable[[operations_pb2.DeleteOperationRequest], None]: + r"""Return a callable for the delete_operation method over gRPC.""" + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "delete_operation" not in self._stubs: + self._stubs["delete_operation"] = self._logged_channel.unary_unary( + "/google.longrunning.Operations/DeleteOperation", + request_serializer=operations_pb2.DeleteOperationRequest.SerializeToString, + response_deserializer=None, + ) + return self._stubs["delete_operation"] + + @property + def cancel_operation( + self, + ) -> Callable[[operations_pb2.CancelOperationRequest], None]: + r"""Return a callable for the cancel_operation method over gRPC.""" + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "cancel_operation" not in self._stubs: + self._stubs["cancel_operation"] = self._logged_channel.unary_unary( + "/google.longrunning.Operations/CancelOperation", + request_serializer=operations_pb2.CancelOperationRequest.SerializeToString, + response_deserializer=None, + ) + return self._stubs["cancel_operation"] + + @property + def get_operation( + self, + ) -> Callable[[operations_pb2.GetOperationRequest], operations_pb2.Operation]: + r"""Return a callable for the get_operation method over gRPC.""" + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "get_operation" not in self._stubs: + self._stubs["get_operation"] = self._logged_channel.unary_unary( + "/google.longrunning.Operations/GetOperation", + request_serializer=operations_pb2.GetOperationRequest.SerializeToString, + response_deserializer=operations_pb2.Operation.FromString, + ) + return self._stubs["get_operation"] + + @property + def list_operations( + self, + ) -> Callable[ + [operations_pb2.ListOperationsRequest], operations_pb2.ListOperationsResponse + ]: + r"""Return a callable for the list_operations method over gRPC.""" + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
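+        # Illustrative sketch (hypothetical name): this mirrors the standard
+        # google.longrunning.Operations surface, e.g.:
+        #
+        #     request = operations_pb2.ListOperationsRequest(
+        #         name="projects/my-project/instances/my-instance"
+        #     )
+        #     response = transport.list_operations(request)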
+ if "list_operations" not in self._stubs: + self._stubs["list_operations"] = self._logged_channel.unary_unary( + "/google.longrunning.Operations/ListOperations", + request_serializer=operations_pb2.ListOperationsRequest.SerializeToString, + response_deserializer=operations_pb2.ListOperationsResponse.FromString, + ) + return self._stubs["list_operations"] + + @property + def kind(self) -> str: + return "grpc" + + +__all__ = ("InstanceAdminGrpcTransport",) diff --git a/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/grpc_asyncio.py b/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/grpc_asyncio.py new file mode 100644 index 0000000000..f2df40d1f2 --- /dev/null +++ b/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/grpc_asyncio.py @@ -0,0 +1,1628 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +import inspect +import json +import pickle +import logging as std_logging +import warnings +from typing import Awaitable, Callable, Dict, Optional, Sequence, Tuple, Union + +from google.api_core import gapic_v1 +from google.api_core import grpc_helpers_async +from google.api_core import exceptions as core_exceptions +from google.api_core import retry_async as retries +from google.api_core import operations_v1 +from google.auth import credentials as ga_credentials # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore +from google.protobuf.json_format import MessageToJson +import google.protobuf.message + +import grpc # type: ignore +import proto # type: ignore +from grpc.experimental import aio # type: ignore + +from google.cloud.spanner_admin_instance_v1.types import spanner_instance_admin +from google.iam.v1 import iam_policy_pb2 # type: ignore +from google.iam.v1 import policy_pb2 # type: ignore +from google.longrunning import operations_pb2 # type: ignore +from google.protobuf import empty_pb2 # type: ignore +from .base import InstanceAdminTransport, DEFAULT_CLIENT_INFO +from .grpc import InstanceAdminGrpcTransport + +try: + from google.api_core import client_logging # type: ignore + + CLIENT_LOGGING_SUPPORTED = True # pragma: NO COVER +except ImportError: # pragma: NO COVER + CLIENT_LOGGING_SUPPORTED = False + +_LOGGER = std_logging.getLogger(__name__) + + +class _LoggingClientAIOInterceptor( + grpc.aio.UnaryUnaryClientInterceptor +): # pragma: NO COVER + async def intercept_unary_unary(self, continuation, client_call_details, request): + logging_enabled = CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + std_logging.DEBUG + ) + if logging_enabled: # pragma: NO COVER + request_metadata = client_call_details.metadata + if isinstance(request, proto.Message): + request_payload = type(request).to_json(request) + elif isinstance(request, google.protobuf.message.Message): + request_payload = MessageToJson(request) + else: + request_payload = f"{type(request).__name__}: {pickle.dumps(request)}" + + request_metadata = { + key: 
value.decode("utf-8") if isinstance(value, bytes) else value
+                for key, value in request_metadata
+            }
+            grpc_request = {
+                "payload": request_payload,
+                "requestMethod": "grpc",
+                "metadata": dict(request_metadata),
+            }
+            _LOGGER.debug(
+                f"Sending request for {client_call_details.method}",
+                extra={
+                    "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin",
+                    "rpcName": str(client_call_details.method),
+                    "request": grpc_request,
+                    "metadata": grpc_request["metadata"],
+                },
+            )
+        response = await continuation(client_call_details, request)
+        if logging_enabled:  # pragma: NO COVER
+            response_metadata = await response.trailing_metadata()
+            # Convert gRPC metadata to a list of tuples
+            metadata = (
+                dict([(k, str(v)) for k, v in response_metadata])
+                if response_metadata
+                else None
+            )
+            result = await response
+            if isinstance(result, proto.Message):
+                response_payload = type(result).to_json(result)
+            elif isinstance(result, google.protobuf.message.Message):
+                response_payload = MessageToJson(result)
+            else:
+                response_payload = f"{type(result).__name__}: {pickle.dumps(result)}"
+            grpc_response = {
+                "payload": response_payload,
+                "metadata": metadata,
+                "status": "OK",
+            }
+            _LOGGER.debug(
+                f"Received response to rpc {client_call_details.method}.",
+                extra={
+                    "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin",
+                    "rpcName": str(client_call_details.method),
+                    "response": grpc_response,
+                    "metadata": grpc_response["metadata"],
+                },
+            )
+        return response
+
+
+class InstanceAdminGrpcAsyncIOTransport(InstanceAdminTransport):
+    """gRPC AsyncIO backend transport for InstanceAdmin.
+
+    Cloud Spanner Instance Admin API
+
+    The Cloud Spanner Instance Admin API can be used to create,
+    delete, modify and list instances. Instances are dedicated Cloud
+    Spanner serving and storage resources to be used by Cloud
+    Spanner databases.
+
+    Each instance has a "configuration", which dictates where the
+    serving resources for the Cloud Spanner instance are located
+    (e.g., US-central, Europe). Configurations are created by Google
+    based on resource availability.
+
+    Cloud Spanner billing is based on the instances that exist and
+    their sizes. After an instance exists, there are no additional
+    per-database or per-operation charges for use of the instance
+    (though there may be additional network bandwidth charges).
+    Instances offer isolation: problems with databases in one
+    instance will not affect other instances. However, within an
+    instance databases can affect each other. For example, if one
+    database in an instance receives a lot of requests and consumes
+    most of the instance resources, fewer resources are available
+    for other databases in that instance, and their performance may
+    suffer.
+
+    This class defines the same methods as the primary client, so the
+    primary client can load the underlying transport implementation
+    and call it.
+
+    It sends protocol buffers over the wire using gRPC (which is built on
+    top of HTTP/2); the ``grpcio`` package must be installed.
+    """
+
+    _grpc_channel: aio.Channel
+    _stubs: Dict[str, Callable] = {}
+
+    @classmethod
+    def create_channel(
+        cls,
+        host: str = "spanner.googleapis.com",
+        credentials: Optional[ga_credentials.Credentials] = None,
+        credentials_file: Optional[str] = None,
+        scopes: Optional[Sequence[str]] = None,
+        quota_project_id: Optional[str] = None,
+        **kwargs,
+    ) -> aio.Channel:
+        """Create and return a gRPC AsyncIO channel object.
+        Args:
+            host (Optional[str]): The host for the channel to use.
+ credentials (Optional[~.Credentials]): The
+ authorization credentials to attach to requests. These
+ credentials identify this application to the service. If
+ none are specified, the client will attempt to ascertain
+ the credentials from the environment.
+ credentials_file (Optional[str]): A file with credentials that can
+ be loaded with :func:`google.auth.load_credentials_from_file`.
+ scopes (Optional[Sequence[str]]): An optional list of scopes needed for this
+ service. These are only used when credentials are not specified and
+ are passed to :func:`google.auth.default`.
+ quota_project_id (Optional[str]): An optional project to use for billing
+ and quota.
+ kwargs (Optional[dict]): Keyword arguments, which are passed to the
+ channel creation.
+ Returns:
+ aio.Channel: A gRPC AsyncIO channel object.
+ """
+
+ return grpc_helpers_async.create_channel(
+ host,
+ credentials=credentials,
+ credentials_file=credentials_file,
+ quota_project_id=quota_project_id,
+ default_scopes=cls.AUTH_SCOPES,
+ scopes=scopes,
+ default_host=cls.DEFAULT_HOST,
+ **kwargs,
+ )
+
+ def __init__(
+ self,
+ *,
+ host: str = "spanner.googleapis.com",
+ credentials: Optional[ga_credentials.Credentials] = None,
+ credentials_file: Optional[str] = None,
+ scopes: Optional[Sequence[str]] = None,
+ channel: Optional[Union[aio.Channel, Callable[..., aio.Channel]]] = None,
+ api_mtls_endpoint: Optional[str] = None,
+ client_cert_source: Optional[Callable[[], Tuple[bytes, bytes]]] = None,
+ ssl_channel_credentials: Optional[grpc.ChannelCredentials] = None,
+ client_cert_source_for_mtls: Optional[Callable[[], Tuple[bytes, bytes]]] = None,
+ quota_project_id: Optional[str] = None,
+ client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO,
+ always_use_jwt_access: Optional[bool] = False,
+ api_audience: Optional[str] = None,
+ ) -> None:
+ """Instantiate the transport.
+
+ Args:
+ host (Optional[str]):
+ The hostname to connect to (default: 'spanner.googleapis.com').
+ credentials (Optional[google.auth.credentials.Credentials]): The
+ authorization credentials to attach to requests. These
+ credentials identify the application to the service; if none
+ are specified, the client will attempt to ascertain the
+ credentials from the environment.
+ This argument is ignored if a ``channel`` instance is provided.
+ credentials_file (Optional[str]): A file with credentials that can
+ be loaded with :func:`google.auth.load_credentials_from_file`.
+ This argument is ignored if a ``channel`` instance is provided.
+ scopes (Optional[Sequence[str]]): An optional list of scopes needed for this
+ service. These are only used when credentials are not specified and
+ are passed to :func:`google.auth.default`.
+ channel (Optional[Union[aio.Channel, Callable[..., aio.Channel]]]):
+ A ``Channel`` instance through which to make calls, or a Callable
+ that constructs and returns one. If set to None, ``self.create_channel``
+ is used to create the channel. If a Callable is given, it will be called
+ with the same arguments as used in ``self.create_channel``.
+ api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint.
+ If provided, it overrides the ``host`` argument and tries to create
+ a mutual TLS channel with client SSL credentials from
+ ``client_cert_source`` or application default SSL credentials.
+ client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]):
+ Deprecated. A callback to provide client SSL certificate bytes and
+ private key bytes, both in PEM format.
It is ignored if + ``api_mtls_endpoint`` is None. + ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials + for the grpc channel. It is ignored if a ``channel`` instance is provided. + client_cert_source_for_mtls (Optional[Callable[[], Tuple[bytes, bytes]]]): + A callback to provide client certificate bytes and private key bytes, + both in PEM format. It is used to configure a mutual TLS channel. It is + ignored if a ``channel`` instance or ``ssl_channel_credentials`` is provided. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + always_use_jwt_access (Optional[bool]): Whether self signed JWT should + be used for service account credentials. + + Raises: + google.auth.exceptions.MutualTlsChannelError: If mutual TLS transport + creation failed for any reason. + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + self._grpc_channel = None + self._ssl_channel_credentials = ssl_channel_credentials + self._stubs: Dict[str, Callable] = {} + self._operations_client: Optional[operations_v1.OperationsAsyncClient] = None + + if api_mtls_endpoint: + warnings.warn("api_mtls_endpoint is deprecated", DeprecationWarning) + if client_cert_source: + warnings.warn("client_cert_source is deprecated", DeprecationWarning) + + if isinstance(channel, aio.Channel): + # Ignore credentials if a channel was passed. + credentials = None + self._ignore_credentials = True + # If a channel was explicitly provided, set it. + self._grpc_channel = channel + self._ssl_channel_credentials = None + else: + if api_mtls_endpoint: + host = api_mtls_endpoint + + # Create SSL credentials with client_cert_source or application + # default SSL credentials. + if client_cert_source: + cert, key = client_cert_source() + self._ssl_channel_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + else: + self._ssl_channel_credentials = SslCredentials().ssl_credentials + + else: + if client_cert_source_for_mtls and not ssl_channel_credentials: + cert, key = client_cert_source_for_mtls() + self._ssl_channel_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + + # The base transport sets the host, credentials and scopes + super().__init__( + host=host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes, + quota_project_id=quota_project_id, + client_info=client_info, + always_use_jwt_access=always_use_jwt_access, + api_audience=api_audience, + ) + + if not self._grpc_channel: + # initialize with the provided callable or the default channel + channel_init = channel or type(self).create_channel + self._grpc_channel = channel_init( + self._host, + # use the credentials which are saved + credentials=self._credentials, + # Set ``credentials_file`` to ``None`` here as + # the credentials that we saved earlier should be used. 
+ credentials_file=None, + scopes=self._scopes, + ssl_credentials=self._ssl_channel_credentials, + quota_project_id=quota_project_id, + options=[ + ("grpc.max_send_message_length", -1), + ("grpc.max_receive_message_length", -1), + ], + ) + + self._interceptor = _LoggingClientAIOInterceptor() + self._grpc_channel._unary_unary_interceptors.append(self._interceptor) + self._logged_channel = self._grpc_channel + self._wrap_with_kind = ( + "kind" in inspect.signature(gapic_v1.method_async.wrap_method).parameters + ) + # Wrap messages. This must be done after self._logged_channel exists + self._prep_wrapped_messages(client_info) + + @property + def grpc_channel(self) -> aio.Channel: + """Create the channel designed to connect to this service. + + This property caches on the instance; repeated calls return + the same channel. + """ + # Return the channel from cache. + return self._grpc_channel + + @property + def operations_client(self) -> operations_v1.OperationsAsyncClient: + """Create the client designed to process long-running operations. + + This property caches on the instance; repeated calls return the same + client. + """ + # Quick check: Only create a new client if we do not already have one. + if self._operations_client is None: + self._operations_client = operations_v1.OperationsAsyncClient( + self._logged_channel + ) + + # Return the client from cache. + return self._operations_client + + @property + def list_instance_configs( + self, + ) -> Callable[ + [spanner_instance_admin.ListInstanceConfigsRequest], + Awaitable[spanner_instance_admin.ListInstanceConfigsResponse], + ]: + r"""Return a callable for the list instance configs method over gRPC. + + Lists the supported instance configurations for a + given project. + Returns both Google-managed configurations and + user-managed configurations. + + Returns: + Callable[[~.ListInstanceConfigsRequest], + Awaitable[~.ListInstanceConfigsResponse]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "list_instance_configs" not in self._stubs: + self._stubs["list_instance_configs"] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/ListInstanceConfigs", + request_serializer=spanner_instance_admin.ListInstanceConfigsRequest.serialize, + response_deserializer=spanner_instance_admin.ListInstanceConfigsResponse.deserialize, + ) + return self._stubs["list_instance_configs"] + + @property + def get_instance_config( + self, + ) -> Callable[ + [spanner_instance_admin.GetInstanceConfigRequest], + Awaitable[spanner_instance_admin.InstanceConfig], + ]: + r"""Return a callable for the get instance config method over gRPC. + + Gets information about a particular instance + configuration. + + Returns: + Callable[[~.GetInstanceConfigRequest], + Awaitable[~.InstanceConfig]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
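+ # Each property on this transport returns a cached multicallable that can
+ # be awaited directly. A minimal illustrative use, assuming a hypothetical
+ # project and configuration name (the async client normally does this
+ # for you):
+ #
+ #   request = spanner_instance_admin.GetInstanceConfigRequest(
+ #       name="projects/my-project/instanceConfigs/my-config"
+ #   )
+ #   config = await transport.get_instance_config(request)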
+ if "get_instance_config" not in self._stubs: + self._stubs["get_instance_config"] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/GetInstanceConfig", + request_serializer=spanner_instance_admin.GetInstanceConfigRequest.serialize, + response_deserializer=spanner_instance_admin.InstanceConfig.deserialize, + ) + return self._stubs["get_instance_config"] + + @property + def create_instance_config( + self, + ) -> Callable[ + [spanner_instance_admin.CreateInstanceConfigRequest], + Awaitable[operations_pb2.Operation], + ]: + r"""Return a callable for the create instance config method over gRPC. + + Creates an instance configuration and begins preparing it to be + used. The returned long-running operation can be used to track + the progress of preparing the new instance configuration. The + instance configuration name is assigned by the caller. If the + named instance configuration already exists, + ``CreateInstanceConfig`` returns ``ALREADY_EXISTS``. + + Immediately after the request returns: + + - The instance configuration is readable via the API, with all + requested attributes. The instance configuration's + [reconciling][google.spanner.admin.instance.v1.InstanceConfig.reconciling] + field is set to true. Its state is ``CREATING``. + + While the operation is pending: + + - Cancelling the operation renders the instance configuration + immediately unreadable via the API. + - Except for deleting the creating resource, all other attempts + to modify the instance configuration are rejected. + + Upon completion of the returned operation: + + - Instances can be created using the instance configuration. + - The instance configuration's + [reconciling][google.spanner.admin.instance.v1.InstanceConfig.reconciling] + field becomes false. Its state becomes ``READY``. + + The returned long-running operation will have a name of the + format ``/operations/`` and + can be used to track creation of the instance configuration. The + metadata field type is + [CreateInstanceConfigMetadata][google.spanner.admin.instance.v1.CreateInstanceConfigMetadata]. + The response field type is + [InstanceConfig][google.spanner.admin.instance.v1.InstanceConfig], + if successful. + + Authorization requires ``spanner.instanceConfigs.create`` + permission on the resource + [parent][google.spanner.admin.instance.v1.CreateInstanceConfigRequest.parent]. + + Returns: + Callable[[~.CreateInstanceConfigRequest], + Awaitable[~.Operation]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "create_instance_config" not in self._stubs: + self._stubs["create_instance_config"] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/CreateInstanceConfig", + request_serializer=spanner_instance_admin.CreateInstanceConfigRequest.serialize, + response_deserializer=operations_pb2.Operation.FromString, + ) + return self._stubs["create_instance_config"] + + @property + def update_instance_config( + self, + ) -> Callable[ + [spanner_instance_admin.UpdateInstanceConfigRequest], + Awaitable[operations_pb2.Operation], + ]: + r"""Return a callable for the update instance config method over gRPC. + + Updates an instance configuration. The returned long-running + operation can be used to track the progress of updating the + instance. 
If the named instance configuration does not exist,
+ returns ``NOT_FOUND``.
+
+ Only user-managed configurations can be updated.
+
+ Immediately after the request returns:
+
+ - The instance configuration's
+ [reconciling][google.spanner.admin.instance.v1.InstanceConfig.reconciling]
+ field is set to true.
+
+ While the operation is pending:
+
+ - Cancelling the operation sets its metadata's
+ [cancel_time][google.spanner.admin.instance.v1.UpdateInstanceConfigMetadata.cancel_time].
+ The operation is guaranteed to succeed at undoing all changes,
+ after which point it terminates with a ``CANCELLED`` status.
+ - All other attempts to modify the instance configuration are
+ rejected.
+ - Reading the instance configuration via the API continues to
+ give the pre-request values.
+
+ Upon completion of the returned operation:
+
+ - Creating instances using the instance configuration uses the
+ new values.
+ - The new values of the instance configuration are readable via
+ the API.
+ - The instance configuration's
+ [reconciling][google.spanner.admin.instance.v1.InstanceConfig.reconciling]
+ field becomes false.
+
+ The returned long-running operation will have a name of the
+ format ``<instance_config_name>/operations/<operation_id>`` and
+ can be used to track the instance configuration modification.
+ The metadata field type is
+ [UpdateInstanceConfigMetadata][google.spanner.admin.instance.v1.UpdateInstanceConfigMetadata].
+ The response field type is
+ [InstanceConfig][google.spanner.admin.instance.v1.InstanceConfig],
+ if successful.
+
+ Authorization requires ``spanner.instanceConfigs.update``
+ permission on the resource
+ [name][google.spanner.admin.instance.v1.InstanceConfig.name].
+
+ Returns:
+ Callable[[~.UpdateInstanceConfigRequest],
+ Awaitable[~.Operation]]:
+ A function that, when called, will call the underlying RPC
+ on the server.
+ """
+ # Generate a "stub function" on-the-fly which will actually make
+ # the request.
+ # gRPC handles serialization and deserialization, so we just need
+ # to pass in the functions for each.
+ if "update_instance_config" not in self._stubs:
+ self._stubs["update_instance_config"] = self._logged_channel.unary_unary(
+ "/google.spanner.admin.instance.v1.InstanceAdmin/UpdateInstanceConfig",
+ request_serializer=spanner_instance_admin.UpdateInstanceConfigRequest.serialize,
+ response_deserializer=operations_pb2.Operation.FromString,
+ )
+ return self._stubs["update_instance_config"]
+
+ @property
+ def delete_instance_config(
+ self,
+ ) -> Callable[
+ [spanner_instance_admin.DeleteInstanceConfigRequest], Awaitable[empty_pb2.Empty]
+ ]:
+ r"""Return a callable for the delete instance config method over gRPC.
+
+ Deletes the instance configuration. Deletion is only allowed
+ when no instances are using the configuration. If any instances
+ are using the configuration, returns ``FAILED_PRECONDITION``.
+
+ Only user-managed configurations can be deleted.
+
+ Authorization requires ``spanner.instanceConfigs.delete``
+ permission on the resource
+ [name][google.spanner.admin.instance.v1.InstanceConfig.name].
+
+ Returns:
+ Callable[[~.DeleteInstanceConfigRequest],
+ Awaitable[~.Empty]]:
+ A function that, when called, will call the underlying RPC
+ on the server.
+ """
+ # Generate a "stub function" on-the-fly which will actually make
+ # the request.
+ # gRPC handles serialization and deserialization, so we just need
+ # to pass in the functions for each.
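+ # Illustrative error handling for the raw callable below: deleting a
+ # configuration that instances still use fails its precondition check, so
+ # awaiting the call raises grpc.aio.AioRpcError with FAILED_PRECONDITION
+ # (the wrapped client surface maps this to
+ # google.api_core.exceptions.FailedPrecondition). Hypothetical sketch:
+ #
+ #   try:
+ #       await transport.delete_instance_config(request)
+ #   except grpc.aio.AioRpcError as exc:
+ #       if exc.code() == grpc.StatusCode.FAILED_PRECONDITION:
+ #           ...  # configuration is still in use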
+ if "delete_instance_config" not in self._stubs: + self._stubs["delete_instance_config"] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/DeleteInstanceConfig", + request_serializer=spanner_instance_admin.DeleteInstanceConfigRequest.serialize, + response_deserializer=empty_pb2.Empty.FromString, + ) + return self._stubs["delete_instance_config"] + + @property + def list_instance_config_operations( + self, + ) -> Callable[ + [spanner_instance_admin.ListInstanceConfigOperationsRequest], + Awaitable[spanner_instance_admin.ListInstanceConfigOperationsResponse], + ]: + r"""Return a callable for the list instance config + operations method over gRPC. + + Lists the user-managed instance configuration long-running + operations in the given project. An instance configuration + operation has a name of the form + ``projects//instanceConfigs//operations/``. + The long-running operation metadata field type + ``metadata.type_url`` describes the type of the metadata. + Operations returned include those that have + completed/failed/canceled within the last 7 days, and pending + operations. Operations returned are ordered by + ``operation.metadata.value.start_time`` in descending order + starting from the most recently started operation. + + Returns: + Callable[[~.ListInstanceConfigOperationsRequest], + Awaitable[~.ListInstanceConfigOperationsResponse]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "list_instance_config_operations" not in self._stubs: + self._stubs[ + "list_instance_config_operations" + ] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/ListInstanceConfigOperations", + request_serializer=spanner_instance_admin.ListInstanceConfigOperationsRequest.serialize, + response_deserializer=spanner_instance_admin.ListInstanceConfigOperationsResponse.deserialize, + ) + return self._stubs["list_instance_config_operations"] + + @property + def list_instances( + self, + ) -> Callable[ + [spanner_instance_admin.ListInstancesRequest], + Awaitable[spanner_instance_admin.ListInstancesResponse], + ]: + r"""Return a callable for the list instances method over gRPC. + + Lists all instances in the given project. + + Returns: + Callable[[~.ListInstancesRequest], + Awaitable[~.ListInstancesResponse]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "list_instances" not in self._stubs: + self._stubs["list_instances"] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/ListInstances", + request_serializer=spanner_instance_admin.ListInstancesRequest.serialize, + response_deserializer=spanner_instance_admin.ListInstancesResponse.deserialize, + ) + return self._stubs["list_instances"] + + @property + def list_instance_partitions( + self, + ) -> Callable[ + [spanner_instance_admin.ListInstancePartitionsRequest], + Awaitable[spanner_instance_admin.ListInstancePartitionsResponse], + ]: + r"""Return a callable for the list instance partitions method over gRPC. + + Lists all instance partitions for the given instance. 
+
+ Returns:
+ Callable[[~.ListInstancePartitionsRequest],
+ Awaitable[~.ListInstancePartitionsResponse]]:
+ A function that, when called, will call the underlying RPC
+ on the server.
+ """
+ # Generate a "stub function" on-the-fly which will actually make
+ # the request.
+ # gRPC handles serialization and deserialization, so we just need
+ # to pass in the functions for each.
+ if "list_instance_partitions" not in self._stubs:
+ self._stubs["list_instance_partitions"] = self._logged_channel.unary_unary(
+ "/google.spanner.admin.instance.v1.InstanceAdmin/ListInstancePartitions",
+ request_serializer=spanner_instance_admin.ListInstancePartitionsRequest.serialize,
+ response_deserializer=spanner_instance_admin.ListInstancePartitionsResponse.deserialize,
+ )
+ return self._stubs["list_instance_partitions"]
+
+ @property
+ def get_instance(
+ self,
+ ) -> Callable[
+ [spanner_instance_admin.GetInstanceRequest],
+ Awaitable[spanner_instance_admin.Instance],
+ ]:
+ r"""Return a callable for the get instance method over gRPC.
+
+ Gets information about a particular instance.
+
+ Returns:
+ Callable[[~.GetInstanceRequest],
+ Awaitable[~.Instance]]:
+ A function that, when called, will call the underlying RPC
+ on the server.
+ """
+ # Generate a "stub function" on-the-fly which will actually make
+ # the request.
+ # gRPC handles serialization and deserialization, so we just need
+ # to pass in the functions for each.
+ if "get_instance" not in self._stubs:
+ self._stubs["get_instance"] = self._logged_channel.unary_unary(
+ "/google.spanner.admin.instance.v1.InstanceAdmin/GetInstance",
+ request_serializer=spanner_instance_admin.GetInstanceRequest.serialize,
+ response_deserializer=spanner_instance_admin.Instance.deserialize,
+ )
+ return self._stubs["get_instance"]
+
+ @property
+ def create_instance(
+ self,
+ ) -> Callable[
+ [spanner_instance_admin.CreateInstanceRequest],
+ Awaitable[operations_pb2.Operation],
+ ]:
+ r"""Return a callable for the create instance method over gRPC.
+
+ Creates an instance and begins preparing it to begin serving.
+ The returned long-running operation can be used to track the
+ progress of preparing the new instance. The instance name is
+ assigned by the caller. If the named instance already exists,
+ ``CreateInstance`` returns ``ALREADY_EXISTS``.
+
+ Immediately upon completion of this request:
+
+ - The instance is readable via the API, with all requested
+ attributes but no allocated resources. Its state is
+ ``CREATING``.
+
+ Until completion of the returned operation:
+
+ - Cancelling the operation renders the instance immediately
+ unreadable via the API.
+ - The instance can be deleted.
+ - All other attempts to modify the instance are rejected.
+
+ Upon completion of the returned operation:
+
+ - Billing for all successfully-allocated resources begins (some
+ types may have lower than the requested levels).
+ - Databases can be created in the instance.
+ - The instance's allocated resource levels are readable via the
+ API.
+ - The instance's state becomes ``READY``.
+
+ The returned long-running operation will have a name of the
+ format ``<instance_name>/operations/<operation_id>`` and can be
+ used to track creation of the instance. The metadata field type
+ is
+ [CreateInstanceMetadata][google.spanner.admin.instance.v1.CreateInstanceMetadata].
+ The response field type is
+ [Instance][google.spanner.admin.instance.v1.Instance], if
+ successful.
+
+ Returns:
+ Callable[[~.CreateInstanceRequest],
+ Awaitable[~.Operation]]:
+ A function that, when called, will call the underlying RPC
+ on the server.
+ """
+ # Generate a "stub function" on-the-fly which will actually make
+ # the request.
+ # gRPC handles serialization and deserialization, so we just need
+ # to pass in the functions for each.
+ if "create_instance" not in self._stubs:
+ self._stubs["create_instance"] = self._logged_channel.unary_unary(
+ "/google.spanner.admin.instance.v1.InstanceAdmin/CreateInstance",
+ request_serializer=spanner_instance_admin.CreateInstanceRequest.serialize,
+ response_deserializer=operations_pb2.Operation.FromString,
+ )
+ return self._stubs["create_instance"]
+
+ @property
+ def update_instance(
+ self,
+ ) -> Callable[
+ [spanner_instance_admin.UpdateInstanceRequest],
+ Awaitable[operations_pb2.Operation],
+ ]:
+ r"""Return a callable for the update instance method over gRPC.
+
+ Updates an instance, and begins allocating or releasing
+ resources as requested. The returned long-running operation can
+ be used to track the progress of updating the instance. If the
+ named instance does not exist, returns ``NOT_FOUND``.
+
+ Immediately upon completion of this request:
+
+ - For resource types for which a decrease in the instance's
+ allocation has been requested, billing is based on the
+ newly-requested level.
+
+ Until completion of the returned operation:
+
+ - Cancelling the operation sets its metadata's
+ [cancel_time][google.spanner.admin.instance.v1.UpdateInstanceMetadata.cancel_time],
+ and begins restoring resources to their pre-request values.
+ The operation is guaranteed to succeed at undoing all resource
+ changes, after which point it terminates with a ``CANCELLED``
+ status.
+ - All other attempts to modify the instance are rejected.
+ - Reading the instance via the API continues to give the
+ pre-request resource levels.
+
+ Upon completion of the returned operation:
+
+ - Billing begins for all successfully-allocated resources (some
+ types may have lower than the requested levels).
+ - All newly-reserved resources are available for serving the
+ instance's tables.
+ - The instance's new resource levels are readable via the API.
+
+ The returned long-running operation will have a name of the
+ format ``<instance_name>/operations/<operation_id>`` and can be
+ used to track the instance modification. The metadata field type
+ is
+ [UpdateInstanceMetadata][google.spanner.admin.instance.v1.UpdateInstanceMetadata].
+ The response field type is
+ [Instance][google.spanner.admin.instance.v1.Instance], if
+ successful.
+
+ Authorization requires ``spanner.instances.update`` permission
+ on the resource
+ [name][google.spanner.admin.instance.v1.Instance.name].
+
+ Returns:
+ Callable[[~.UpdateInstanceRequest],
+ Awaitable[~.Operation]]:
+ A function that, when called, will call the underlying RPC
+ on the server.
+ """
+ # Generate a "stub function" on-the-fly which will actually make
+ # the request.
+ # gRPC handles serialization and deserialization, so we just need
+ # to pass in the functions for each.
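+ # The callable below resolves to a raw google.longrunning Operation
+ # message rather than a parsed Instance. A hypothetical sketch of tracking
+ # it through this transport's operations client:
+ #
+ #   operation = await transport.update_instance(request)
+ #   latest = await transport.operations_client.get_operation(operation.name)
+ #   if latest.done:
+ #       instance = spanner_instance_admin.Instance.deserialize(
+ #           latest.response.value
+ #       )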
+ if "update_instance" not in self._stubs: + self._stubs["update_instance"] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/UpdateInstance", + request_serializer=spanner_instance_admin.UpdateInstanceRequest.serialize, + response_deserializer=operations_pb2.Operation.FromString, + ) + return self._stubs["update_instance"] + + @property + def delete_instance( + self, + ) -> Callable[ + [spanner_instance_admin.DeleteInstanceRequest], Awaitable[empty_pb2.Empty] + ]: + r"""Return a callable for the delete instance method over gRPC. + + Deletes an instance. + + Immediately upon completion of the request: + + - Billing ceases for all of the instance's reserved resources. + + Soon afterward: + + - The instance and *all of its databases* immediately and + irrevocably disappear from the API. All data in the databases + is permanently deleted. + + Returns: + Callable[[~.DeleteInstanceRequest], + Awaitable[~.Empty]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "delete_instance" not in self._stubs: + self._stubs["delete_instance"] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/DeleteInstance", + request_serializer=spanner_instance_admin.DeleteInstanceRequest.serialize, + response_deserializer=empty_pb2.Empty.FromString, + ) + return self._stubs["delete_instance"] + + @property + def set_iam_policy( + self, + ) -> Callable[[iam_policy_pb2.SetIamPolicyRequest], Awaitable[policy_pb2.Policy]]: + r"""Return a callable for the set iam policy method over gRPC. + + Sets the access control policy on an instance resource. Replaces + any existing policy. + + Authorization requires ``spanner.instances.setIamPolicy`` on + [resource][google.iam.v1.SetIamPolicyRequest.resource]. + + Returns: + Callable[[~.SetIamPolicyRequest], + Awaitable[~.Policy]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "set_iam_policy" not in self._stubs: + self._stubs["set_iam_policy"] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/SetIamPolicy", + request_serializer=iam_policy_pb2.SetIamPolicyRequest.SerializeToString, + response_deserializer=policy_pb2.Policy.FromString, + ) + return self._stubs["set_iam_policy"] + + @property + def get_iam_policy( + self, + ) -> Callable[[iam_policy_pb2.GetIamPolicyRequest], Awaitable[policy_pb2.Policy]]: + r"""Return a callable for the get iam policy method over gRPC. + + Gets the access control policy for an instance resource. Returns + an empty policy if an instance exists but does not have a policy + set. + + Authorization requires ``spanner.instances.getIamPolicy`` on + [resource][google.iam.v1.GetIamPolicyRequest.resource]. + + Returns: + Callable[[~.GetIamPolicyRequest], + Awaitable[~.Policy]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "get_iam_policy" not in self._stubs: + self._stubs["get_iam_policy"] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/GetIamPolicy", + request_serializer=iam_policy_pb2.GetIamPolicyRequest.SerializeToString, + response_deserializer=policy_pb2.Policy.FromString, + ) + return self._stubs["get_iam_policy"] + + @property + def test_iam_permissions( + self, + ) -> Callable[ + [iam_policy_pb2.TestIamPermissionsRequest], + Awaitable[iam_policy_pb2.TestIamPermissionsResponse], + ]: + r"""Return a callable for the test iam permissions method over gRPC. + + Returns permissions that the caller has on the specified + instance resource. + + Attempting this RPC on a non-existent Cloud Spanner instance + resource will result in a NOT_FOUND error if the user has + ``spanner.instances.list`` permission on the containing Google + Cloud Project. Otherwise returns an empty set of permissions. + + Returns: + Callable[[~.TestIamPermissionsRequest], + Awaitable[~.TestIamPermissionsResponse]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "test_iam_permissions" not in self._stubs: + self._stubs["test_iam_permissions"] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/TestIamPermissions", + request_serializer=iam_policy_pb2.TestIamPermissionsRequest.SerializeToString, + response_deserializer=iam_policy_pb2.TestIamPermissionsResponse.FromString, + ) + return self._stubs["test_iam_permissions"] + + @property + def get_instance_partition( + self, + ) -> Callable[ + [spanner_instance_admin.GetInstancePartitionRequest], + Awaitable[spanner_instance_admin.InstancePartition], + ]: + r"""Return a callable for the get instance partition method over gRPC. + + Gets information about a particular instance + partition. + + Returns: + Callable[[~.GetInstancePartitionRequest], + Awaitable[~.InstancePartition]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "get_instance_partition" not in self._stubs: + self._stubs["get_instance_partition"] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/GetInstancePartition", + request_serializer=spanner_instance_admin.GetInstancePartitionRequest.serialize, + response_deserializer=spanner_instance_admin.InstancePartition.deserialize, + ) + return self._stubs["get_instance_partition"] + + @property + def create_instance_partition( + self, + ) -> Callable[ + [spanner_instance_admin.CreateInstancePartitionRequest], + Awaitable[operations_pb2.Operation], + ]: + r"""Return a callable for the create instance partition method over gRPC. + + Creates an instance partition and begins preparing it to be + used. The returned long-running operation can be used to track + the progress of preparing the new instance partition. The + instance partition name is assigned by the caller. If the named + instance partition already exists, ``CreateInstancePartition`` + returns ``ALREADY_EXISTS``. 
+
+ Immediately upon completion of this request:
+
+ - The instance partition is readable via the API, with all
+ requested attributes but no allocated resources. Its state is
+ ``CREATING``.
+
+ Until completion of the returned operation:
+
+ - Cancelling the operation renders the instance partition
+ immediately unreadable via the API.
+ - The instance partition can be deleted.
+ - All other attempts to modify the instance partition are
+ rejected.
+
+ Upon completion of the returned operation:
+
+ - Billing for all successfully-allocated resources begins (some
+ types may have lower than the requested levels).
+ - Databases can start using this instance partition.
+ - The instance partition's allocated resource levels are
+ readable via the API.
+ - The instance partition's state becomes ``READY``.
+
+ The returned long-running operation will have a name of the
+ format ``<instance_partition_name>/operations/<operation_id>``
+ and can be used to track creation of the instance partition. The
+ metadata field type is
+ [CreateInstancePartitionMetadata][google.spanner.admin.instance.v1.CreateInstancePartitionMetadata].
+ The response field type is
+ [InstancePartition][google.spanner.admin.instance.v1.InstancePartition],
+ if successful.
+
+ Returns:
+ Callable[[~.CreateInstancePartitionRequest],
+ Awaitable[~.Operation]]:
+ A function that, when called, will call the underlying RPC
+ on the server.
+ """
+ # Generate a "stub function" on-the-fly which will actually make
+ # the request.
+ # gRPC handles serialization and deserialization, so we just need
+ # to pass in the functions for each.
+ if "create_instance_partition" not in self._stubs:
+ self._stubs["create_instance_partition"] = self._logged_channel.unary_unary(
+ "/google.spanner.admin.instance.v1.InstanceAdmin/CreateInstancePartition",
+ request_serializer=spanner_instance_admin.CreateInstancePartitionRequest.serialize,
+ response_deserializer=operations_pb2.Operation.FromString,
+ )
+ return self._stubs["create_instance_partition"]
+
+ @property
+ def delete_instance_partition(
+ self,
+ ) -> Callable[
+ [spanner_instance_admin.DeleteInstancePartitionRequest],
+ Awaitable[empty_pb2.Empty],
+ ]:
+ r"""Return a callable for the delete instance partition method over gRPC.
+
+ Deletes an existing instance partition. Requires that the
+ instance partition is not used by any database or backup and is
+ not the default instance partition of an instance.
+
+ Authorization requires ``spanner.instancePartitions.delete``
+ permission on the resource
+ [name][google.spanner.admin.instance.v1.InstancePartition.name].
+
+ Returns:
+ Callable[[~.DeleteInstancePartitionRequest],
+ Awaitable[~.Empty]]:
+ A function that, when called, will call the underlying RPC
+ on the server.
+ """
+ # Generate a "stub function" on-the-fly which will actually make
+ # the request.
+ # gRPC handles serialization and deserialization, so we just need
+ # to pass in the functions for each.
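+ # As with instance creation, the partition call above returns a
+ # long-running Operation whose metadata field is a packed
+ # CreateInstancePartitionMetadata message. A hypothetical peek at it:
+ #
+ #   operation = await transport.create_instance_partition(request)
+ #   metadata = spanner_instance_admin.CreateInstancePartitionMetadata.deserialize(
+ #       operation.metadata.value
+ #   )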
+ if "delete_instance_partition" not in self._stubs: + self._stubs["delete_instance_partition"] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/DeleteInstancePartition", + request_serializer=spanner_instance_admin.DeleteInstancePartitionRequest.serialize, + response_deserializer=empty_pb2.Empty.FromString, + ) + return self._stubs["delete_instance_partition"] + + @property + def update_instance_partition( + self, + ) -> Callable[ + [spanner_instance_admin.UpdateInstancePartitionRequest], + Awaitable[operations_pb2.Operation], + ]: + r"""Return a callable for the update instance partition method over gRPC. + + Updates an instance partition, and begins allocating or + releasing resources as requested. The returned long-running + operation can be used to track the progress of updating the + instance partition. If the named instance partition does not + exist, returns ``NOT_FOUND``. + + Immediately upon completion of this request: + + - For resource types for which a decrease in the instance + partition's allocation has been requested, billing is based on + the newly-requested level. + + Until completion of the returned operation: + + - Cancelling the operation sets its metadata's + [cancel_time][google.spanner.admin.instance.v1.UpdateInstancePartitionMetadata.cancel_time], + and begins restoring resources to their pre-request values. + The operation is guaranteed to succeed at undoing all resource + changes, after which point it terminates with a ``CANCELLED`` + status. + - All other attempts to modify the instance partition are + rejected. + - Reading the instance partition via the API continues to give + the pre-request resource levels. + + Upon completion of the returned operation: + + - Billing begins for all successfully-allocated resources (some + types may have lower than the requested levels). + - All newly-reserved resources are available for serving the + instance partition's tables. + - The instance partition's new resource levels are readable via + the API. + + The returned long-running operation will have a name of the + format ``/operations/`` + and can be used to track the instance partition modification. + The metadata field type is + [UpdateInstancePartitionMetadata][google.spanner.admin.instance.v1.UpdateInstancePartitionMetadata]. + The response field type is + [InstancePartition][google.spanner.admin.instance.v1.InstancePartition], + if successful. + + Authorization requires ``spanner.instancePartitions.update`` + permission on the resource + [name][google.spanner.admin.instance.v1.InstancePartition.name]. + + Returns: + Callable[[~.UpdateInstancePartitionRequest], + Awaitable[~.Operation]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "update_instance_partition" not in self._stubs: + self._stubs["update_instance_partition"] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/UpdateInstancePartition", + request_serializer=spanner_instance_admin.UpdateInstancePartitionRequest.serialize, + response_deserializer=operations_pb2.Operation.FromString, + ) + return self._stubs["update_instance_partition"] + + @property + def list_instance_partition_operations( + self, + ) -> Callable[ + [spanner_instance_admin.ListInstancePartitionOperationsRequest], + Awaitable[spanner_instance_admin.ListInstancePartitionOperationsResponse], + ]: + r"""Return a callable for the list instance partition + operations method over gRPC. + + Lists instance partition long-running operations in the given + instance. An instance partition operation has a name of the form + ``projects//instances//instancePartitions//operations/``. + The long-running operation metadata field type + ``metadata.type_url`` describes the type of the metadata. + Operations returned include those that have + completed/failed/canceled within the last 7 days, and pending + operations. Operations returned are ordered by + ``operation.metadata.value.start_time`` in descending order + starting from the most recently started operation. + + Authorization requires + ``spanner.instancePartitionOperations.list`` permission on the + resource + [parent][google.spanner.admin.instance.v1.ListInstancePartitionOperationsRequest.parent]. + + Returns: + Callable[[~.ListInstancePartitionOperationsRequest], + Awaitable[~.ListInstancePartitionOperationsResponse]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "list_instance_partition_operations" not in self._stubs: + self._stubs[ + "list_instance_partition_operations" + ] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/ListInstancePartitionOperations", + request_serializer=spanner_instance_admin.ListInstancePartitionOperationsRequest.serialize, + response_deserializer=spanner_instance_admin.ListInstancePartitionOperationsResponse.deserialize, + ) + return self._stubs["list_instance_partition_operations"] + + @property + def move_instance( + self, + ) -> Callable[ + [spanner_instance_admin.MoveInstanceRequest], + Awaitable[operations_pb2.Operation], + ]: + r"""Return a callable for the move instance method over gRPC. + + Moves an instance to the target instance configuration. You can + use the returned long-running operation to track the progress of + moving the instance. + + ``MoveInstance`` returns ``FAILED_PRECONDITION`` if the instance + meets any of the following criteria: + + - Is undergoing a move to a different instance configuration + - Has backups + - Has an ongoing update + - Contains any CMEK-enabled databases + - Is a free trial instance + + While the operation is pending: + + - All other attempts to modify the instance, including changes + to its compute capacity, are rejected. + + - The following database and backup admin operations are + rejected: + + - ``DatabaseAdmin.CreateDatabase`` + - ``DatabaseAdmin.UpdateDatabaseDdl`` (disabled if + default_leader is specified in the request.) 
+ - ``DatabaseAdmin.RestoreDatabase``
+ - ``DatabaseAdmin.CreateBackup``
+ - ``DatabaseAdmin.CopyBackup``
+
+ - Both the source and target instance configurations are subject
+ to hourly compute and storage charges.
+
+ - The instance might experience higher read-write latencies and
+ a higher transaction abort rate. However, moving an instance
+ doesn't cause any downtime.
+
+ The returned long-running operation has a name of the format
+ ``<instance_name>/operations/<operation_id>`` and can be used to
+ track the move instance operation. The metadata field type is
+ [MoveInstanceMetadata][google.spanner.admin.instance.v1.MoveInstanceMetadata].
+ The response field type is
+ [Instance][google.spanner.admin.instance.v1.Instance], if
+ successful. Cancelling the operation sets its metadata's
+ [cancel_time][google.spanner.admin.instance.v1.MoveInstanceMetadata.cancel_time].
+ Cancellation is not immediate because it involves moving any
+ data previously moved to the target instance configuration back
+ to the original instance configuration. You can use this
+ operation to track the progress of the cancellation. Upon
+ successful completion of the cancellation, the operation
+ terminates with ``CANCELLED`` status.
+
+ If not cancelled, upon completion of the returned operation:
+
+ - The instance successfully moves to the target instance
+ configuration.
+ - You are billed for compute and storage in target instance
+ configuration.
+
+ Authorization requires the ``spanner.instances.update``
+ permission on the resource
+ [instance][google.spanner.admin.instance.v1.Instance].
+
+ For more details, see `Move an
+ instance <https://cloud.google.com/spanner/docs/move-instance>`__.
+
+ Returns:
+ Callable[[~.MoveInstanceRequest],
+ Awaitable[~.Operation]]:
+ A function that, when called, will call the underlying RPC
+ on the server.
+ """
+ # Generate a "stub function" on-the-fly which will actually make
+ # the request.
+ # gRPC handles serialization and deserialization, so we just need
+ # to pass in the functions for each.
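+ # Cancelling a move goes through the shared long-running operations
+ # surface rather than a dedicated RPC; a hypothetical sketch:
+ #
+ #   operation = await transport.move_instance(request)
+ #   await transport.operations_client.cancel_operation(operation.name)
+ #   # poll the same operation name until it reports CANCELLED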
+ if "move_instance" not in self._stubs: + self._stubs["move_instance"] = self._logged_channel.unary_unary( + "/google.spanner.admin.instance.v1.InstanceAdmin/MoveInstance", + request_serializer=spanner_instance_admin.MoveInstanceRequest.serialize, + response_deserializer=operations_pb2.Operation.FromString, + ) + return self._stubs["move_instance"] + + def _prep_wrapped_messages(self, client_info): + """Precompute the wrapped methods, overriding the base class method to use async wrappers.""" + self._wrapped_methods = { + self.list_instance_configs: self._wrap_method( + self.list_instance_configs, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.get_instance_config: self._wrap_method( + self.get_instance_config, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.create_instance_config: self._wrap_method( + self.create_instance_config, + default_timeout=None, + client_info=client_info, + ), + self.update_instance_config: self._wrap_method( + self.update_instance_config, + default_timeout=None, + client_info=client_info, + ), + self.delete_instance_config: self._wrap_method( + self.delete_instance_config, + default_timeout=None, + client_info=client_info, + ), + self.list_instance_config_operations: self._wrap_method( + self.list_instance_config_operations, + default_timeout=None, + client_info=client_info, + ), + self.list_instances: self._wrap_method( + self.list_instances, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.list_instance_partitions: self._wrap_method( + self.list_instance_partitions, + default_timeout=None, + client_info=client_info, + ), + self.get_instance: self._wrap_method( + self.get_instance, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.create_instance: self._wrap_method( + self.create_instance, + default_timeout=3600.0, + client_info=client_info, + ), + self.update_instance: self._wrap_method( + self.update_instance, + default_timeout=3600.0, + client_info=client_info, + ), + self.delete_instance: self._wrap_method( + self.delete_instance, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.set_iam_policy: self._wrap_method( + self.set_iam_policy, + default_timeout=30.0, + client_info=client_info, + ), + self.get_iam_policy: self._wrap_method( + self.get_iam_policy, + default_retry=retries.AsyncRetry( + initial=1.0, + maximum=32.0, + multiplier=1.3, + 
predicate=retries.if_exception_type( + core_exceptions.DeadlineExceeded, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.test_iam_permissions: self._wrap_method( + self.test_iam_permissions, + default_timeout=30.0, + client_info=client_info, + ), + self.get_instance_partition: self._wrap_method( + self.get_instance_partition, + default_timeout=None, + client_info=client_info, + ), + self.create_instance_partition: self._wrap_method( + self.create_instance_partition, + default_timeout=None, + client_info=client_info, + ), + self.delete_instance_partition: self._wrap_method( + self.delete_instance_partition, + default_timeout=None, + client_info=client_info, + ), + self.update_instance_partition: self._wrap_method( + self.update_instance_partition, + default_timeout=None, + client_info=client_info, + ), + self.list_instance_partition_operations: self._wrap_method( + self.list_instance_partition_operations, + default_timeout=None, + client_info=client_info, + ), + self.move_instance: self._wrap_method( + self.move_instance, + default_timeout=None, + client_info=client_info, + ), + self.cancel_operation: self._wrap_method( + self.cancel_operation, + default_timeout=None, + client_info=client_info, + ), + self.delete_operation: self._wrap_method( + self.delete_operation, + default_timeout=None, + client_info=client_info, + ), + self.get_operation: self._wrap_method( + self.get_operation, + default_timeout=None, + client_info=client_info, + ), + self.list_operations: self._wrap_method( + self.list_operations, + default_timeout=None, + client_info=client_info, + ), + } + + def _wrap_method(self, func, *args, **kwargs): + if self._wrap_with_kind: # pragma: NO COVER + kwargs["kind"] = self.kind + return gapic_v1.method_async.wrap_method(func, *args, **kwargs) + + def close(self): + return self._logged_channel.close() + + @property + def kind(self) -> str: + return "grpc_asyncio" + + @property + def delete_operation( + self, + ) -> Callable[[operations_pb2.DeleteOperationRequest], None]: + r"""Return a callable for the delete_operation method over gRPC.""" + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "delete_operation" not in self._stubs: + self._stubs["delete_operation"] = self._logged_channel.unary_unary( + "/google.longrunning.Operations/DeleteOperation", + request_serializer=operations_pb2.DeleteOperationRequest.SerializeToString, + response_deserializer=None, + ) + return self._stubs["delete_operation"] + + @property + def cancel_operation( + self, + ) -> Callable[[operations_pb2.CancelOperationRequest], None]: + r"""Return a callable for the cancel_operation method over gRPC.""" + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
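+ # These google.longrunning stubs share the logged channel with the service
+ # RPCs, so the debug-logging interceptor covers them too. Hypothetical use:
+ #
+ #   await transport.cancel_operation(
+ #       operations_pb2.CancelOperationRequest(name=operation_name)
+ #   )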
+ if "cancel_operation" not in self._stubs: + self._stubs["cancel_operation"] = self._logged_channel.unary_unary( + "/google.longrunning.Operations/CancelOperation", + request_serializer=operations_pb2.CancelOperationRequest.SerializeToString, + response_deserializer=None, + ) + return self._stubs["cancel_operation"] + + @property + def get_operation( + self, + ) -> Callable[[operations_pb2.GetOperationRequest], operations_pb2.Operation]: + r"""Return a callable for the get_operation method over gRPC.""" + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "get_operation" not in self._stubs: + self._stubs["get_operation"] = self._logged_channel.unary_unary( + "/google.longrunning.Operations/GetOperation", + request_serializer=operations_pb2.GetOperationRequest.SerializeToString, + response_deserializer=operations_pb2.Operation.FromString, + ) + return self._stubs["get_operation"] + + @property + def list_operations( + self, + ) -> Callable[ + [operations_pb2.ListOperationsRequest], operations_pb2.ListOperationsResponse + ]: + r"""Return a callable for the list_operations method over gRPC.""" + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "list_operations" not in self._stubs: + self._stubs["list_operations"] = self._logged_channel.unary_unary( + "/google.longrunning.Operations/ListOperations", + request_serializer=operations_pb2.ListOperationsRequest.SerializeToString, + response_deserializer=operations_pb2.ListOperationsResponse.FromString, + ) + return self._stubs["list_operations"] + + +__all__ = ("InstanceAdminGrpcAsyncIOTransport",) diff --git a/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/rest.py b/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/rest.py new file mode 100644 index 0000000000..ca32cafa99 --- /dev/null +++ b/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/rest.py @@ -0,0 +1,5502 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +import logging +import json # type: ignore + +from google.auth.transport.requests import AuthorizedSession # type: ignore +from google.auth import credentials as ga_credentials # type: ignore +from google.api_core import exceptions as core_exceptions +from google.api_core import retry as retries +from google.api_core import rest_helpers +from google.api_core import rest_streaming +from google.api_core import gapic_v1 +import google.protobuf + +from google.protobuf import json_format +from google.api_core import operations_v1 + +from requests import __version__ as requests_version +import dataclasses +from typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Union +import warnings + + +from google.cloud.spanner_admin_instance_v1.types import spanner_instance_admin +from google.iam.v1 import iam_policy_pb2 # type: ignore +from google.iam.v1 import policy_pb2 # type: ignore +from google.protobuf import empty_pb2 # type: ignore +from google.longrunning import operations_pb2 # type: ignore + + +from .rest_base import _BaseInstanceAdminRestTransport +from .base import DEFAULT_CLIENT_INFO as BASE_DEFAULT_CLIENT_INFO + +try: + OptionalRetry = Union[retries.Retry, gapic_v1.method._MethodDefault, None] +except AttributeError: # pragma: NO COVER + OptionalRetry = Union[retries.Retry, object, None] # type: ignore + +try: + from google.api_core import client_logging # type: ignore + + CLIENT_LOGGING_SUPPORTED = True # pragma: NO COVER +except ImportError: # pragma: NO COVER + CLIENT_LOGGING_SUPPORTED = False + +_LOGGER = logging.getLogger(__name__) + +DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=BASE_DEFAULT_CLIENT_INFO.gapic_version, + grpc_version=None, + rest_version=f"requests@{requests_version}", +) + +if hasattr(DEFAULT_CLIENT_INFO, "protobuf_runtime_version"): # pragma: NO COVER + DEFAULT_CLIENT_INFO.protobuf_runtime_version = google.protobuf.__version__ + + +class InstanceAdminRestInterceptor: + """Interceptor for InstanceAdmin. + + Interceptors are used to manipulate requests, request metadata, and responses + in arbitrary ways. + Example use cases include: + * Logging + * Verifying requests according to service or custom semantics + * Stripping extraneous information from responses + + These use cases and more can be enabled by injecting an + instance of a custom subclass when constructing the InstanceAdminRestTransport. + + .. 
code-block:: python
+        class MyCustomInstanceAdminInterceptor(InstanceAdminRestInterceptor):
+            def pre_create_instance(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def post_create_instance(self, response):
+                logging.info(f"Received response: {response}")
+                return response
+
+            def pre_create_instance_config(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def post_create_instance_config(self, response):
+                logging.info(f"Received response: {response}")
+                return response
+
+            def pre_create_instance_partition(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def post_create_instance_partition(self, response):
+                logging.info(f"Received response: {response}")
+                return response
+
+            def pre_delete_instance(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def pre_delete_instance_config(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def pre_delete_instance_partition(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def pre_get_iam_policy(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def post_get_iam_policy(self, response):
+                logging.info(f"Received response: {response}")
+                return response
+
+            def pre_get_instance(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def post_get_instance(self, response):
+                logging.info(f"Received response: {response}")
+                return response
+
+            def pre_get_instance_config(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def post_get_instance_config(self, response):
+                logging.info(f"Received response: {response}")
+                return response
+
+            def pre_get_instance_partition(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def post_get_instance_partition(self, response):
+                logging.info(f"Received response: {response}")
+                return response
+
+            def pre_list_instance_config_operations(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def post_list_instance_config_operations(self, response):
+                logging.info(f"Received response: {response}")
+                return response
+
+            def pre_list_instance_configs(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def post_list_instance_configs(self, response):
+                logging.info(f"Received response: {response}")
+                return response
+
+            def pre_list_instance_partition_operations(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def post_list_instance_partition_operations(self, response):
+                logging.info(f"Received response: {response}")
+                return response
+
+            def pre_list_instance_partitions(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def post_list_instance_partitions(self, response):
+                logging.info(f"Received response: {response}")
+                return response
+
+            def pre_list_instances(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def post_list_instances(self, response):
+                logging.info(f"Received response: {response}")
+                return response
+
+            def pre_move_instance(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def post_move_instance(self, response):
+                logging.info(f"Received response: {response}")
+                return response
+
+            def pre_set_iam_policy(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def post_set_iam_policy(self, response):
+                logging.info(f"Received response: {response}")
+                return response
+
+            def pre_test_iam_permissions(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def post_test_iam_permissions(self, response):
+                logging.info(f"Received response: {response}")
+                return response
+
+            def pre_update_instance(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def post_update_instance(self, response):
+                logging.info(f"Received response: {response}")
+                return response
+
+            def pre_update_instance_config(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def post_update_instance_config(self, response):
+                logging.info(f"Received response: {response}")
+                return response
+
+            def pre_update_instance_partition(self, request, metadata):
+                logging.info(f"Received request: {request}")
+                return request, metadata
+
+            def post_update_instance_partition(self, response):
+                logging.info(f"Received response: {response}")
+                return response
+
+        transport = InstanceAdminRestTransport(interceptor=MyCustomInstanceAdminInterceptor())
+        client = InstanceAdminClient(transport=transport)
+
+
+    """
+
+    def pre_create_instance(
+        self,
+        request: spanner_instance_admin.CreateInstanceRequest,
+        metadata: Sequence[Tuple[str, Union[str, bytes]]],
+    ) -> Tuple[
+        spanner_instance_admin.CreateInstanceRequest,
+        Sequence[Tuple[str, Union[str, bytes]]],
+    ]:
+        """Pre-rpc interceptor for create_instance
+
+        Override in a subclass to manipulate the request or metadata
+        before they are sent to the InstanceAdmin server.
+        """
+        return request, metadata
+
+    def post_create_instance(
+        self, response: operations_pb2.Operation
+    ) -> operations_pb2.Operation:
+        """Post-rpc interceptor for create_instance
+
+        DEPRECATED. Please use the `post_create_instance_with_metadata`
+        interceptor instead.
+
+        Override in a subclass to read or manipulate the response
+        after it is returned by the InstanceAdmin server but before
+        it is returned to user code. This `post_create_instance` interceptor runs
+        before the `post_create_instance_with_metadata` interceptor.
+        """
+        return response
+
+    def post_create_instance_with_metadata(
+        self,
+        response: operations_pb2.Operation,
+        metadata: Sequence[Tuple[str, Union[str, bytes]]],
+    ) -> Tuple[operations_pb2.Operation, Sequence[Tuple[str, Union[str, bytes]]]]:
+        """Post-rpc interceptor for create_instance
+
+        Override in a subclass to read or manipulate the response or metadata after it
+        is returned by the InstanceAdmin server but before it is returned to user code.
+
+        We recommend only using this `post_create_instance_with_metadata`
+        interceptor in new development instead of the `post_create_instance` interceptor.
+        When both interceptors are used, this `post_create_instance_with_metadata` interceptor runs after the
+        `post_create_instance` interceptor. The (possibly modified) response returned by
+        `post_create_instance` will be passed to
+        `post_create_instance_with_metadata`.
+ """ + return response, metadata + + def pre_create_instance_config( + self, + request: spanner_instance_admin.CreateInstanceConfigRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.CreateInstanceConfigRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for create_instance_config + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. + """ + return request, metadata + + def post_create_instance_config( + self, response: operations_pb2.Operation + ) -> operations_pb2.Operation: + """Post-rpc interceptor for create_instance_config + + DEPRECATED. Please use the `post_create_instance_config_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. This `post_create_instance_config` interceptor runs + before the `post_create_instance_config_with_metadata` interceptor. + """ + return response + + def post_create_instance_config_with_metadata( + self, + response: operations_pb2.Operation, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[operations_pb2.Operation, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for create_instance_config + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the InstanceAdmin server but before it is returned to user code. + + We recommend only using this `post_create_instance_config_with_metadata` + interceptor in new development instead of the `post_create_instance_config` interceptor. + When both interceptors are used, this `post_create_instance_config_with_metadata` interceptor runs after the + `post_create_instance_config` interceptor. The (possibly modified) response returned by + `post_create_instance_config` will be passed to + `post_create_instance_config_with_metadata`. + """ + return response, metadata + + def pre_create_instance_partition( + self, + request: spanner_instance_admin.CreateInstancePartitionRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.CreateInstancePartitionRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for create_instance_partition + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. + """ + return request, metadata + + def post_create_instance_partition( + self, response: operations_pb2.Operation + ) -> operations_pb2.Operation: + """Post-rpc interceptor for create_instance_partition + + DEPRECATED. Please use the `post_create_instance_partition_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. This `post_create_instance_partition` interceptor runs + before the `post_create_instance_partition_with_metadata` interceptor. + """ + return response + + def post_create_instance_partition_with_metadata( + self, + response: operations_pb2.Operation, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[operations_pb2.Operation, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for create_instance_partition + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the InstanceAdmin server but before it is returned to user code. 
+ + We recommend only using this `post_create_instance_partition_with_metadata` + interceptor in new development instead of the `post_create_instance_partition` interceptor. + When both interceptors are used, this `post_create_instance_partition_with_metadata` interceptor runs after the + `post_create_instance_partition` interceptor. The (possibly modified) response returned by + `post_create_instance_partition` will be passed to + `post_create_instance_partition_with_metadata`. + """ + return response, metadata + + def pre_delete_instance( + self, + request: spanner_instance_admin.DeleteInstanceRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.DeleteInstanceRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for delete_instance + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. + """ + return request, metadata + + def pre_delete_instance_config( + self, + request: spanner_instance_admin.DeleteInstanceConfigRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.DeleteInstanceConfigRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for delete_instance_config + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. + """ + return request, metadata + + def pre_delete_instance_partition( + self, + request: spanner_instance_admin.DeleteInstancePartitionRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.DeleteInstancePartitionRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for delete_instance_partition + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. + """ + return request, metadata + + def pre_get_iam_policy( + self, + request: iam_policy_pb2.GetIamPolicyRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + iam_policy_pb2.GetIamPolicyRequest, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Pre-rpc interceptor for get_iam_policy + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. + """ + return request, metadata + + def post_get_iam_policy(self, response: policy_pb2.Policy) -> policy_pb2.Policy: + """Post-rpc interceptor for get_iam_policy + + DEPRECATED. Please use the `post_get_iam_policy_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. This `post_get_iam_policy` interceptor runs + before the `post_get_iam_policy_with_metadata` interceptor. + """ + return response + + def post_get_iam_policy_with_metadata( + self, + response: policy_pb2.Policy, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[policy_pb2.Policy, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for get_iam_policy + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the InstanceAdmin server but before it is returned to user code. + + We recommend only using this `post_get_iam_policy_with_metadata` + interceptor in new development instead of the `post_get_iam_policy` interceptor. 
+ When both interceptors are used, this `post_get_iam_policy_with_metadata` interceptor runs after the + `post_get_iam_policy` interceptor. The (possibly modified) response returned by + `post_get_iam_policy` will be passed to + `post_get_iam_policy_with_metadata`. + """ + return response, metadata + + def pre_get_instance( + self, + request: spanner_instance_admin.GetInstanceRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.GetInstanceRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for get_instance + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. + """ + return request, metadata + + def post_get_instance( + self, response: spanner_instance_admin.Instance + ) -> spanner_instance_admin.Instance: + """Post-rpc interceptor for get_instance + + DEPRECATED. Please use the `post_get_instance_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. This `post_get_instance` interceptor runs + before the `post_get_instance_with_metadata` interceptor. + """ + return response + + def post_get_instance_with_metadata( + self, + response: spanner_instance_admin.Instance, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.Instance, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Post-rpc interceptor for get_instance + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the InstanceAdmin server but before it is returned to user code. + + We recommend only using this `post_get_instance_with_metadata` + interceptor in new development instead of the `post_get_instance` interceptor. + When both interceptors are used, this `post_get_instance_with_metadata` interceptor runs after the + `post_get_instance` interceptor. The (possibly modified) response returned by + `post_get_instance` will be passed to + `post_get_instance_with_metadata`. + """ + return response, metadata + + def pre_get_instance_config( + self, + request: spanner_instance_admin.GetInstanceConfigRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.GetInstanceConfigRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for get_instance_config + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. + """ + return request, metadata + + def post_get_instance_config( + self, response: spanner_instance_admin.InstanceConfig + ) -> spanner_instance_admin.InstanceConfig: + """Post-rpc interceptor for get_instance_config + + DEPRECATED. Please use the `post_get_instance_config_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. This `post_get_instance_config` interceptor runs + before the `post_get_instance_config_with_metadata` interceptor. 
+ """ + return response + + def post_get_instance_config_with_metadata( + self, + response: spanner_instance_admin.InstanceConfig, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.InstanceConfig, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Post-rpc interceptor for get_instance_config + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the InstanceAdmin server but before it is returned to user code. + + We recommend only using this `post_get_instance_config_with_metadata` + interceptor in new development instead of the `post_get_instance_config` interceptor. + When both interceptors are used, this `post_get_instance_config_with_metadata` interceptor runs after the + `post_get_instance_config` interceptor. The (possibly modified) response returned by + `post_get_instance_config` will be passed to + `post_get_instance_config_with_metadata`. + """ + return response, metadata + + def pre_get_instance_partition( + self, + request: spanner_instance_admin.GetInstancePartitionRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.GetInstancePartitionRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for get_instance_partition + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. + """ + return request, metadata + + def post_get_instance_partition( + self, response: spanner_instance_admin.InstancePartition + ) -> spanner_instance_admin.InstancePartition: + """Post-rpc interceptor for get_instance_partition + + DEPRECATED. Please use the `post_get_instance_partition_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. This `post_get_instance_partition` interceptor runs + before the `post_get_instance_partition_with_metadata` interceptor. + """ + return response + + def post_get_instance_partition_with_metadata( + self, + response: spanner_instance_admin.InstancePartition, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.InstancePartition, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Post-rpc interceptor for get_instance_partition + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the InstanceAdmin server but before it is returned to user code. + + We recommend only using this `post_get_instance_partition_with_metadata` + interceptor in new development instead of the `post_get_instance_partition` interceptor. + When both interceptors are used, this `post_get_instance_partition_with_metadata` interceptor runs after the + `post_get_instance_partition` interceptor. The (possibly modified) response returned by + `post_get_instance_partition` will be passed to + `post_get_instance_partition_with_metadata`. + """ + return response, metadata + + def pre_list_instance_config_operations( + self, + request: spanner_instance_admin.ListInstanceConfigOperationsRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.ListInstanceConfigOperationsRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for list_instance_config_operations + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. 
+ """ + return request, metadata + + def post_list_instance_config_operations( + self, response: spanner_instance_admin.ListInstanceConfigOperationsResponse + ) -> spanner_instance_admin.ListInstanceConfigOperationsResponse: + """Post-rpc interceptor for list_instance_config_operations + + DEPRECATED. Please use the `post_list_instance_config_operations_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. This `post_list_instance_config_operations` interceptor runs + before the `post_list_instance_config_operations_with_metadata` interceptor. + """ + return response + + def post_list_instance_config_operations_with_metadata( + self, + response: spanner_instance_admin.ListInstanceConfigOperationsResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.ListInstanceConfigOperationsResponse, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Post-rpc interceptor for list_instance_config_operations + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the InstanceAdmin server but before it is returned to user code. + + We recommend only using this `post_list_instance_config_operations_with_metadata` + interceptor in new development instead of the `post_list_instance_config_operations` interceptor. + When both interceptors are used, this `post_list_instance_config_operations_with_metadata` interceptor runs after the + `post_list_instance_config_operations` interceptor. The (possibly modified) response returned by + `post_list_instance_config_operations` will be passed to + `post_list_instance_config_operations_with_metadata`. + """ + return response, metadata + + def pre_list_instance_configs( + self, + request: spanner_instance_admin.ListInstanceConfigsRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.ListInstanceConfigsRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for list_instance_configs + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. + """ + return request, metadata + + def post_list_instance_configs( + self, response: spanner_instance_admin.ListInstanceConfigsResponse + ) -> spanner_instance_admin.ListInstanceConfigsResponse: + """Post-rpc interceptor for list_instance_configs + + DEPRECATED. Please use the `post_list_instance_configs_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. This `post_list_instance_configs` interceptor runs + before the `post_list_instance_configs_with_metadata` interceptor. + """ + return response + + def post_list_instance_configs_with_metadata( + self, + response: spanner_instance_admin.ListInstanceConfigsResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.ListInstanceConfigsResponse, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Post-rpc interceptor for list_instance_configs + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the InstanceAdmin server but before it is returned to user code. 
+ + We recommend only using this `post_list_instance_configs_with_metadata` + interceptor in new development instead of the `post_list_instance_configs` interceptor. + When both interceptors are used, this `post_list_instance_configs_with_metadata` interceptor runs after the + `post_list_instance_configs` interceptor. The (possibly modified) response returned by + `post_list_instance_configs` will be passed to + `post_list_instance_configs_with_metadata`. + """ + return response, metadata + + def pre_list_instance_partition_operations( + self, + request: spanner_instance_admin.ListInstancePartitionOperationsRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.ListInstancePartitionOperationsRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for list_instance_partition_operations + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. + """ + return request, metadata + + def post_list_instance_partition_operations( + self, response: spanner_instance_admin.ListInstancePartitionOperationsResponse + ) -> spanner_instance_admin.ListInstancePartitionOperationsResponse: + """Post-rpc interceptor for list_instance_partition_operations + + DEPRECATED. Please use the `post_list_instance_partition_operations_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. This `post_list_instance_partition_operations` interceptor runs + before the `post_list_instance_partition_operations_with_metadata` interceptor. + """ + return response + + def post_list_instance_partition_operations_with_metadata( + self, + response: spanner_instance_admin.ListInstancePartitionOperationsResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.ListInstancePartitionOperationsResponse, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Post-rpc interceptor for list_instance_partition_operations + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the InstanceAdmin server but before it is returned to user code. + + We recommend only using this `post_list_instance_partition_operations_with_metadata` + interceptor in new development instead of the `post_list_instance_partition_operations` interceptor. + When both interceptors are used, this `post_list_instance_partition_operations_with_metadata` interceptor runs after the + `post_list_instance_partition_operations` interceptor. The (possibly modified) response returned by + `post_list_instance_partition_operations` will be passed to + `post_list_instance_partition_operations_with_metadata`. + """ + return response, metadata + + def pre_list_instance_partitions( + self, + request: spanner_instance_admin.ListInstancePartitionsRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.ListInstancePartitionsRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for list_instance_partitions + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. 
+ """ + return request, metadata + + def post_list_instance_partitions( + self, response: spanner_instance_admin.ListInstancePartitionsResponse + ) -> spanner_instance_admin.ListInstancePartitionsResponse: + """Post-rpc interceptor for list_instance_partitions + + DEPRECATED. Please use the `post_list_instance_partitions_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. This `post_list_instance_partitions` interceptor runs + before the `post_list_instance_partitions_with_metadata` interceptor. + """ + return response + + def post_list_instance_partitions_with_metadata( + self, + response: spanner_instance_admin.ListInstancePartitionsResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.ListInstancePartitionsResponse, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Post-rpc interceptor for list_instance_partitions + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the InstanceAdmin server but before it is returned to user code. + + We recommend only using this `post_list_instance_partitions_with_metadata` + interceptor in new development instead of the `post_list_instance_partitions` interceptor. + When both interceptors are used, this `post_list_instance_partitions_with_metadata` interceptor runs after the + `post_list_instance_partitions` interceptor. The (possibly modified) response returned by + `post_list_instance_partitions` will be passed to + `post_list_instance_partitions_with_metadata`. + """ + return response, metadata + + def pre_list_instances( + self, + request: spanner_instance_admin.ListInstancesRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.ListInstancesRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for list_instances + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. + """ + return request, metadata + + def post_list_instances( + self, response: spanner_instance_admin.ListInstancesResponse + ) -> spanner_instance_admin.ListInstancesResponse: + """Post-rpc interceptor for list_instances + + DEPRECATED. Please use the `post_list_instances_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. This `post_list_instances` interceptor runs + before the `post_list_instances_with_metadata` interceptor. + """ + return response + + def post_list_instances_with_metadata( + self, + response: spanner_instance_admin.ListInstancesResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.ListInstancesResponse, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Post-rpc interceptor for list_instances + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the InstanceAdmin server but before it is returned to user code. + + We recommend only using this `post_list_instances_with_metadata` + interceptor in new development instead of the `post_list_instances` interceptor. + When both interceptors are used, this `post_list_instances_with_metadata` interceptor runs after the + `post_list_instances` interceptor. 
The (possibly modified) response returned by + `post_list_instances` will be passed to + `post_list_instances_with_metadata`. + """ + return response, metadata + + def pre_move_instance( + self, + request: spanner_instance_admin.MoveInstanceRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.MoveInstanceRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for move_instance + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. + """ + return request, metadata + + def post_move_instance( + self, response: operations_pb2.Operation + ) -> operations_pb2.Operation: + """Post-rpc interceptor for move_instance + + DEPRECATED. Please use the `post_move_instance_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. This `post_move_instance` interceptor runs + before the `post_move_instance_with_metadata` interceptor. + """ + return response + + def post_move_instance_with_metadata( + self, + response: operations_pb2.Operation, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[operations_pb2.Operation, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for move_instance + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the InstanceAdmin server but before it is returned to user code. + + We recommend only using this `post_move_instance_with_metadata` + interceptor in new development instead of the `post_move_instance` interceptor. + When both interceptors are used, this `post_move_instance_with_metadata` interceptor runs after the + `post_move_instance` interceptor. The (possibly modified) response returned by + `post_move_instance` will be passed to + `post_move_instance_with_metadata`. + """ + return response, metadata + + def pre_set_iam_policy( + self, + request: iam_policy_pb2.SetIamPolicyRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + iam_policy_pb2.SetIamPolicyRequest, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Pre-rpc interceptor for set_iam_policy + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. + """ + return request, metadata + + def post_set_iam_policy(self, response: policy_pb2.Policy) -> policy_pb2.Policy: + """Post-rpc interceptor for set_iam_policy + + DEPRECATED. Please use the `post_set_iam_policy_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. This `post_set_iam_policy` interceptor runs + before the `post_set_iam_policy_with_metadata` interceptor. + """ + return response + + def post_set_iam_policy_with_metadata( + self, + response: policy_pb2.Policy, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[policy_pb2.Policy, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for set_iam_policy + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the InstanceAdmin server but before it is returned to user code. + + We recommend only using this `post_set_iam_policy_with_metadata` + interceptor in new development instead of the `post_set_iam_policy` interceptor. 
+ When both interceptors are used, this `post_set_iam_policy_with_metadata` interceptor runs after the + `post_set_iam_policy` interceptor. The (possibly modified) response returned by + `post_set_iam_policy` will be passed to + `post_set_iam_policy_with_metadata`. + """ + return response, metadata + + def pre_test_iam_permissions( + self, + request: iam_policy_pb2.TestIamPermissionsRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + iam_policy_pb2.TestIamPermissionsRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for test_iam_permissions + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. + """ + return request, metadata + + def post_test_iam_permissions( + self, response: iam_policy_pb2.TestIamPermissionsResponse + ) -> iam_policy_pb2.TestIamPermissionsResponse: + """Post-rpc interceptor for test_iam_permissions + + DEPRECATED. Please use the `post_test_iam_permissions_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. This `post_test_iam_permissions` interceptor runs + before the `post_test_iam_permissions_with_metadata` interceptor. + """ + return response + + def post_test_iam_permissions_with_metadata( + self, + response: iam_policy_pb2.TestIamPermissionsResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + iam_policy_pb2.TestIamPermissionsResponse, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Post-rpc interceptor for test_iam_permissions + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the InstanceAdmin server but before it is returned to user code. + + We recommend only using this `post_test_iam_permissions_with_metadata` + interceptor in new development instead of the `post_test_iam_permissions` interceptor. + When both interceptors are used, this `post_test_iam_permissions_with_metadata` interceptor runs after the + `post_test_iam_permissions` interceptor. The (possibly modified) response returned by + `post_test_iam_permissions` will be passed to + `post_test_iam_permissions_with_metadata`. + """ + return response, metadata + + def pre_update_instance( + self, + request: spanner_instance_admin.UpdateInstanceRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.UpdateInstanceRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for update_instance + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. + """ + return request, metadata + + def post_update_instance( + self, response: operations_pb2.Operation + ) -> operations_pb2.Operation: + """Post-rpc interceptor for update_instance + + DEPRECATED. Please use the `post_update_instance_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. This `post_update_instance` interceptor runs + before the `post_update_instance_with_metadata` interceptor. 
+ """ + return response + + def post_update_instance_with_metadata( + self, + response: operations_pb2.Operation, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[operations_pb2.Operation, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for update_instance + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the InstanceAdmin server but before it is returned to user code. + + We recommend only using this `post_update_instance_with_metadata` + interceptor in new development instead of the `post_update_instance` interceptor. + When both interceptors are used, this `post_update_instance_with_metadata` interceptor runs after the + `post_update_instance` interceptor. The (possibly modified) response returned by + `post_update_instance` will be passed to + `post_update_instance_with_metadata`. + """ + return response, metadata + + def pre_update_instance_config( + self, + request: spanner_instance_admin.UpdateInstanceConfigRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.UpdateInstanceConfigRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for update_instance_config + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. + """ + return request, metadata + + def post_update_instance_config( + self, response: operations_pb2.Operation + ) -> operations_pb2.Operation: + """Post-rpc interceptor for update_instance_config + + DEPRECATED. Please use the `post_update_instance_config_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. This `post_update_instance_config` interceptor runs + before the `post_update_instance_config_with_metadata` interceptor. + """ + return response + + def post_update_instance_config_with_metadata( + self, + response: operations_pb2.Operation, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[operations_pb2.Operation, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for update_instance_config + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the InstanceAdmin server but before it is returned to user code. + + We recommend only using this `post_update_instance_config_with_metadata` + interceptor in new development instead of the `post_update_instance_config` interceptor. + When both interceptors are used, this `post_update_instance_config_with_metadata` interceptor runs after the + `post_update_instance_config` interceptor. The (possibly modified) response returned by + `post_update_instance_config` will be passed to + `post_update_instance_config_with_metadata`. + """ + return response, metadata + + def pre_update_instance_partition( + self, + request: spanner_instance_admin.UpdateInstancePartitionRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner_instance_admin.UpdateInstancePartitionRequest, + Sequence[Tuple[str, Union[str, bytes]]], + ]: + """Pre-rpc interceptor for update_instance_partition + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. 
+ """ + return request, metadata + + def post_update_instance_partition( + self, response: operations_pb2.Operation + ) -> operations_pb2.Operation: + """Post-rpc interceptor for update_instance_partition + + DEPRECATED. Please use the `post_update_instance_partition_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. This `post_update_instance_partition` interceptor runs + before the `post_update_instance_partition_with_metadata` interceptor. + """ + return response + + def post_update_instance_partition_with_metadata( + self, + response: operations_pb2.Operation, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[operations_pb2.Operation, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for update_instance_partition + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the InstanceAdmin server but before it is returned to user code. + + We recommend only using this `post_update_instance_partition_with_metadata` + interceptor in new development instead of the `post_update_instance_partition` interceptor. + When both interceptors are used, this `post_update_instance_partition_with_metadata` interceptor runs after the + `post_update_instance_partition` interceptor. The (possibly modified) response returned by + `post_update_instance_partition` will be passed to + `post_update_instance_partition_with_metadata`. + """ + return response, metadata + + def pre_cancel_operation( + self, + request: operations_pb2.CancelOperationRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + operations_pb2.CancelOperationRequest, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Pre-rpc interceptor for cancel_operation + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. + """ + return request, metadata + + def post_cancel_operation(self, response: None) -> None: + """Post-rpc interceptor for cancel_operation + + Override in a subclass to manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. + """ + return response + + def pre_delete_operation( + self, + request: operations_pb2.DeleteOperationRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + operations_pb2.DeleteOperationRequest, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Pre-rpc interceptor for delete_operation + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. + """ + return request, metadata + + def post_delete_operation(self, response: None) -> None: + """Post-rpc interceptor for delete_operation + + Override in a subclass to manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. + """ + return response + + def pre_get_operation( + self, + request: operations_pb2.GetOperationRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + operations_pb2.GetOperationRequest, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Pre-rpc interceptor for get_operation + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. 
+ """ + return request, metadata + + def post_get_operation( + self, response: operations_pb2.Operation + ) -> operations_pb2.Operation: + """Post-rpc interceptor for get_operation + + Override in a subclass to manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. + """ + return response + + def pre_list_operations( + self, + request: operations_pb2.ListOperationsRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + operations_pb2.ListOperationsRequest, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Pre-rpc interceptor for list_operations + + Override in a subclass to manipulate the request or metadata + before they are sent to the InstanceAdmin server. + """ + return request, metadata + + def post_list_operations( + self, response: operations_pb2.ListOperationsResponse + ) -> operations_pb2.ListOperationsResponse: + """Post-rpc interceptor for list_operations + + Override in a subclass to manipulate the response + after it is returned by the InstanceAdmin server but before + it is returned to user code. + """ + return response + + +@dataclasses.dataclass +class InstanceAdminRestStub: + _session: AuthorizedSession + _host: str + _interceptor: InstanceAdminRestInterceptor + + +class InstanceAdminRestTransport(_BaseInstanceAdminRestTransport): + """REST backend synchronous transport for InstanceAdmin. + + Cloud Spanner Instance Admin API + + The Cloud Spanner Instance Admin API can be used to create, + delete, modify and list instances. Instances are dedicated Cloud + Spanner serving and storage resources to be used by Cloud + Spanner databases. + + Each instance has a "configuration", which dictates where the + serving resources for the Cloud Spanner instance are located + (e.g., US-central, Europe). Configurations are created by Google + based on resource availability. + + Cloud Spanner billing is based on the instances that exist and + their sizes. After an instance exists, there are no additional + per-database or per-operation charges for use of the instance + (though there may be additional network bandwidth charges). + Instances offer isolation: problems with databases in one + instance will not affect other instances. However, within an + instance databases can affect each other. For example, if one + database in an instance receives a lot of requests and consumes + most of the instance resources, fewer resources are available + for other databases in that instance, and their performance may + suffer. + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. + + It sends JSON representations of protocol buffers over HTTP/1.1 + """ + + def __init__( + self, + *, + host: str = "spanner.googleapis.com", + credentials: Optional[ga_credentials.Credentials] = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + client_cert_source_for_mtls: Optional[Callable[[], Tuple[bytes, bytes]]] = None, + quota_project_id: Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + always_use_jwt_access: Optional[bool] = False, + url_scheme: str = "https", + interceptor: Optional[InstanceAdminRestInterceptor] = None, + api_audience: Optional[str] = None, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): + The hostname to connect to (default: 'spanner.googleapis.com'). 
+            credentials (Optional[google.auth.credentials.Credentials]): The
+                authorization credentials to attach to requests. These
+                credentials identify the application to the service; if none
+                are specified, the client will attempt to ascertain the
+                credentials from the environment.
+
+            credentials_file (Optional[str]): A file with credentials that can
+                be loaded with :func:`google.auth.load_credentials_from_file`.
+                This argument is ignored if ``channel`` is provided.
+            scopes (Optional[Sequence[str]]): A list of scopes. This argument is
+                ignored if ``channel`` is provided.
+            client_cert_source_for_mtls (Callable[[], Tuple[bytes, bytes]]): Client
+                certificate to configure mutual TLS HTTP channel. It is ignored
+                if ``channel`` is provided.
+            quota_project_id (Optional[str]): An optional project to use for billing
+                and quota.
+            client_info (google.api_core.gapic_v1.client_info.ClientInfo):
+                The client info used to send a user-agent string along with
+                API requests. If ``None``, then default info will be used.
+                Generally, you only need to set this if you are developing
+                your own client library.
+            always_use_jwt_access (Optional[bool]): Whether a self-signed JWT should
+                be used for service account credentials.
+            url_scheme: the protocol scheme for the API endpoint. Normally
+                "https", but for testing or local servers,
+                "http" can be specified.
+        """
+        # Run the base constructor
+        # TODO(yon-mg): resolve other ctor params i.e. scopes, quota, etc.
+        # TODO: When custom host (api_endpoint) is set, `scopes` must *also* be set on the
+        # credentials object
+        super().__init__(
+            host=host,
+            credentials=credentials,
+            client_info=client_info,
+            always_use_jwt_access=always_use_jwt_access,
+            url_scheme=url_scheme,
+            api_audience=api_audience,
+        )
+        self._session = AuthorizedSession(
+            self._credentials, default_host=self.DEFAULT_HOST
+        )
+        self._operations_client: Optional[operations_v1.AbstractOperationsClient] = None
+        if client_cert_source_for_mtls:
+            self._session.configure_mtls_channel(client_cert_source_for_mtls)
+        self._interceptor = interceptor or InstanceAdminRestInterceptor()
+        self._prep_wrapped_messages(client_info)
+
+    @property
+    def operations_client(self) -> operations_v1.AbstractOperationsClient:
+        """Create the client designed to process long-running operations.
+
+        This property caches on the instance; repeated calls return the same
+        client.
+        """
+        # Only create a new client if we do not already have one.
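+        # The operations client reuses this transport's credentials and maps
+        # each google.longrunning.Operations RPC onto the Spanner-specific
+        # REST URIs declared in the http_options table below.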
+ if self._operations_client is None: + http_options: Dict[str, List[Dict[str, str]]] = { + "google.longrunning.Operations.CancelOperation": [ + { + "method": "post", + "uri": "/v1/{name=projects/*/instances/*/databases/*/operations/*}:cancel", + }, + { + "method": "post", + "uri": "/v1/{name=projects/*/instances/*/operations/*}:cancel", + }, + { + "method": "post", + "uri": "/v1/{name=projects/*/instances/*/backups/*/operations/*}:cancel", + }, + { + "method": "post", + "uri": "/v1/{name=projects/*/instances/*/instancePartitions/*/operations/*}:cancel", + }, + { + "method": "post", + "uri": "/v1/{name=projects/*/instanceConfigs/*/operations/*}:cancel", + }, + { + "method": "post", + "uri": "/v1/{name=projects/*/instanceConfigs/*/ssdCaches/*/operations/*}:cancel", + }, + ], + "google.longrunning.Operations.DeleteOperation": [ + { + "method": "delete", + "uri": "/v1/{name=projects/*/instances/*/databases/*/operations/*}", + }, + { + "method": "delete", + "uri": "/v1/{name=projects/*/instances/*/operations/*}", + }, + { + "method": "delete", + "uri": "/v1/{name=projects/*/instances/*/backups/*/operations/*}", + }, + { + "method": "delete", + "uri": "/v1/{name=projects/*/instances/*/instancePartitions/*/operations/*}", + }, + { + "method": "delete", + "uri": "/v1/{name=projects/*/instanceConfigs/*/operations/*}", + }, + { + "method": "delete", + "uri": "/v1/{name=projects/*/instanceConfigs/*/ssdCaches/*/operations/*}", + }, + ], + "google.longrunning.Operations.GetOperation": [ + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/databases/*/operations/*}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/operations/*}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/backups/*/operations/*}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/instancePartitions/*/operations/*}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instanceConfigs/*/operations/*}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instanceConfigs/*/ssdCaches/*/operations/*}", + }, + ], + "google.longrunning.Operations.ListOperations": [ + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/databases/*/operations}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/operations}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/backups/*/operations}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/instancePartitions/*/operations}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instanceConfigs/*/operations}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instanceConfigs/*/ssdCaches/*/operations}", + }, + ], + } + + rest_transport = operations_v1.OperationsRestTransport( + host=self._host, + # use the credentials which are saved + credentials=self._credentials, + scopes=self._scopes, + http_options=http_options, + path_prefix="v1", + ) + + self._operations_client = operations_v1.AbstractOperationsClient( + transport=rest_transport + ) + + # Return the client from cache. 
+ return self._operations_client + + class _CreateInstance( + _BaseInstanceAdminRestTransport._BaseCreateInstance, InstanceAdminRestStub + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.CreateInstance") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: spanner_instance_admin.CreateInstanceRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.Operation: + r"""Call the create instance method over HTTP. + + Args: + request (~.spanner_instance_admin.CreateInstanceRequest): + The request object. The request for + [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.operations_pb2.Operation: + This resource represents a + long-running operation that is the + result of a network API call. + + """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseCreateInstance._get_http_options() + ) + + request, metadata = self._interceptor.pre_create_instance(request, metadata) + transcoded_request = _BaseInstanceAdminRestTransport._BaseCreateInstance._get_transcoded_request( + http_options, request + ) + + body = _BaseInstanceAdminRestTransport._BaseCreateInstance._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseCreateInstance._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.CreateInstance", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "CreateInstance", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._CreateInstance._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
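+            # For example, an HTTP 404 response surfaces as
+            # google.api_core.exceptions.NotFound.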
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = operations_pb2.Operation() + json_format.Parse(response.content, resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_create_instance(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_create_instance_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.instance_v1.InstanceAdminClient.create_instance", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "CreateInstance", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _CreateInstanceConfig( + _BaseInstanceAdminRestTransport._BaseCreateInstanceConfig, InstanceAdminRestStub + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.CreateInstanceConfig") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: spanner_instance_admin.CreateInstanceConfigRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.Operation: + r"""Call the create instance config method over HTTP. + + Args: + request (~.spanner_instance_admin.CreateInstanceConfigRequest): + The request object. The request for + [CreateInstanceConfig][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstanceConfig]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.operations_pb2.Operation: + This resource represents a + long-running operation that is the + result of a network API call. 
+ + """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseCreateInstanceConfig._get_http_options() + ) + + request, metadata = self._interceptor.pre_create_instance_config( + request, metadata + ) + transcoded_request = _BaseInstanceAdminRestTransport._BaseCreateInstanceConfig._get_transcoded_request( + http_options, request + ) + + body = _BaseInstanceAdminRestTransport._BaseCreateInstanceConfig._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseCreateInstanceConfig._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.CreateInstanceConfig", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "CreateInstanceConfig", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._CreateInstanceConfig._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = operations_pb2.Operation() + json_format.Parse(response.content, resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_create_instance_config(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_create_instance_config_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.instance_v1.InstanceAdminClient.create_instance_config", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "CreateInstanceConfig", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _CreateInstancePartition( + _BaseInstanceAdminRestTransport._BaseCreateInstancePartition, + InstanceAdminRestStub, + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.CreateInstancePartition") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( 
+ self, + request: spanner_instance_admin.CreateInstancePartitionRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.Operation: + r"""Call the create instance partition method over HTTP. + + Args: + request (~.spanner_instance_admin.CreateInstancePartitionRequest): + The request object. The request for + [CreateInstancePartition][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstancePartition]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.operations_pb2.Operation: + This resource represents a + long-running operation that is the + result of a network API call. + + """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseCreateInstancePartition._get_http_options() + ) + + request, metadata = self._interceptor.pre_create_instance_partition( + request, metadata + ) + transcoded_request = _BaseInstanceAdminRestTransport._BaseCreateInstancePartition._get_transcoded_request( + http_options, request + ) + + body = _BaseInstanceAdminRestTransport._BaseCreateInstancePartition._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseCreateInstancePartition._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.CreateInstancePartition", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "CreateInstancePartition", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = ( + InstanceAdminRestTransport._CreateInstancePartition._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
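+ # from_http_response below maps the HTTP status onto a GoogleAPICallError
+ # subclass (for example 404 -> NotFound, 409 -> Conflict, 429 ->
+ # TooManyRequests). A hedged sketch of handling that at the call site;
+ # `client` and `request` are assumed from context:
+ #
+ #   from google.api_core import exceptions as core_exceptions
+ #
+ #   try:
+ #       operation = client.create_instance_partition(request=request)
+ #   except core_exceptions.Conflict:
+ #       pass  # the partition already exists; treat as success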
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = operations_pb2.Operation() + json_format.Parse(response.content, resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_create_instance_partition(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_create_instance_partition_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.instance_v1.InstanceAdminClient.create_instance_partition", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "CreateInstancePartition", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _DeleteInstance( + _BaseInstanceAdminRestTransport._BaseDeleteInstance, InstanceAdminRestStub + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.DeleteInstance") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: spanner_instance_admin.DeleteInstanceRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ): + r"""Call the delete instance method over HTTP. + + Args: + request (~.spanner_instance_admin.DeleteInstanceRequest): + The request object. The request for + [DeleteInstance][google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstance]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseDeleteInstance._get_http_options() + ) + + request, metadata = self._interceptor.pre_delete_instance(request, metadata) + transcoded_request = _BaseInstanceAdminRestTransport._BaseDeleteInstance._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseDeleteInstance._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.DeleteInstance", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "DeleteInstance", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._DeleteInstance._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + class _DeleteInstanceConfig( + _BaseInstanceAdminRestTransport._BaseDeleteInstanceConfig, InstanceAdminRestStub + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.DeleteInstanceConfig") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: spanner_instance_admin.DeleteInstanceConfigRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ): + r"""Call the delete instance config method over HTTP. + + Args: + request (~.spanner_instance_admin.DeleteInstanceConfigRequest): + The request object. The request for + [DeleteInstanceConfig][google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstanceConfig]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseDeleteInstanceConfig._get_http_options() + ) + + request, metadata = self._interceptor.pre_delete_instance_config( + request, metadata + ) + transcoded_request = _BaseInstanceAdminRestTransport._BaseDeleteInstanceConfig._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseDeleteInstanceConfig._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.DeleteInstanceConfig", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "DeleteInstanceConfig", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._DeleteInstanceConfig._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + class _DeleteInstancePartition( + _BaseInstanceAdminRestTransport._BaseDeleteInstancePartition, + InstanceAdminRestStub, + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.DeleteInstancePartition") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: spanner_instance_admin.DeleteInstancePartitionRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ): + r"""Call the delete instance partition method over HTTP. + + Args: + request (~.spanner_instance_admin.DeleteInstancePartitionRequest): + The request object. The request for + [DeleteInstancePartition][google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstancePartition]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseDeleteInstancePartition._get_http_options() + ) + + request, metadata = self._interceptor.pre_delete_instance_partition( + request, metadata + ) + transcoded_request = _BaseInstanceAdminRestTransport._BaseDeleteInstancePartition._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseDeleteInstancePartition._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.DeleteInstancePartition", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "DeleteInstancePartition", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = ( + InstanceAdminRestTransport._DeleteInstancePartition._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + class _GetIamPolicy( + _BaseInstanceAdminRestTransport._BaseGetIamPolicy, InstanceAdminRestStub + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.GetIamPolicy") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: iam_policy_pb2.GetIamPolicyRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> policy_pb2.Policy: + r"""Call the get iam policy method over HTTP. + + Args: + request (~.iam_policy_pb2.GetIamPolicyRequest): + The request object. Request message for ``GetIamPolicy`` method. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.policy_pb2.Policy: + An Identity and Access Management (IAM) policy, which + specifies access controls for Google Cloud resources. + + A ``Policy`` is a collection of ``bindings``. A + ``binding`` binds one or more ``members``, or + principals, to a single ``role``. 
Principals can be user + accounts, service accounts, Google groups, and domains + (such as G Suite). A ``role`` is a named list of + permissions; each ``role`` can be an IAM predefined role + or a user-created custom role. + + For some types of Google Cloud resources, a ``binding`` + can also specify a ``condition``, which is a logical + expression that allows access to a resource only if the + expression evaluates to ``true``. A condition can add + constraints based on attributes of the request, the + resource, or both. To learn which resources support + conditions in their IAM policies, see the `IAM + documentation `__. + + **JSON example:** + + :: + + { + "bindings": [ + { + "role": "roles/resourcemanager.organizationAdmin", + "members": [ + "user:mike@example.com", + "group:admins@example.com", + "domain:google.com", + "serviceAccount:my-project-id@appspot.gserviceaccount.com" + ] + }, + { + "role": "roles/resourcemanager.organizationViewer", + "members": [ + "user:eve@example.com" + ], + "condition": { + "title": "expirable access", + "description": "Does not grant access after Sep 2020", + "expression": "request.time < + timestamp('2020-10-01T00:00:00.000Z')", + } + } + ], + "etag": "BwWWja0YfJA=", + "version": 3 + } + + **YAML example:** + + :: + + bindings: + - members: + - user:mike@example.com + - group:admins@example.com + - domain:google.com + - serviceAccount:my-project-id@appspot.gserviceaccount.com + role: roles/resourcemanager.organizationAdmin + - members: + - user:eve@example.com + role: roles/resourcemanager.organizationViewer + condition: + title: expirable access + description: Does not grant access after Sep 2020 + expression: request.time < timestamp('2020-10-01T00:00:00.000Z') + etag: BwWWja0YfJA= + version: 3 + + For a description of IAM and its features, see the `IAM + documentation `__. + + """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseGetIamPolicy._get_http_options() + ) + + request, metadata = self._interceptor.pre_get_iam_policy(request, metadata) + transcoded_request = _BaseInstanceAdminRestTransport._BaseGetIamPolicy._get_transcoded_request( + http_options, request + ) + + body = _BaseInstanceAdminRestTransport._BaseGetIamPolicy._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseGetIamPolicy._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.GetIamPolicy", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "GetIamPolicy", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._GetIamPolicy._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
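+ # A hedged sketch of fetching and inspecting a policy through the public
+ # client; the instance name is illustrative:
+ #
+ #   policy = client.get_iam_policy(
+ #       resource="projects/my-project/instances/my-instance"
+ #   )
+ #   for binding in policy.bindings:
+ #       print(binding.role, list(binding.members))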
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = policy_pb2.Policy() + pb_resp = resp + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_get_iam_policy(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_get_iam_policy_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.instance_v1.InstanceAdminClient.get_iam_policy", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "GetIamPolicy", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _GetInstance( + _BaseInstanceAdminRestTransport._BaseGetInstance, InstanceAdminRestStub + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.GetInstance") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: spanner_instance_admin.GetInstanceRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_instance_admin.Instance: + r"""Call the get instance method over HTTP. + + Args: + request (~.spanner_instance_admin.GetInstanceRequest): + The request object. The request for + [GetInstance][google.spanner.admin.instance.v1.InstanceAdmin.GetInstance]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner_instance_admin.Instance: + An isolated set of Cloud Spanner + resources on which databases can be + hosted. 
+ + """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseGetInstance._get_http_options() + ) + + request, metadata = self._interceptor.pre_get_instance(request, metadata) + transcoded_request = _BaseInstanceAdminRestTransport._BaseGetInstance._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = ( + _BaseInstanceAdminRestTransport._BaseGetInstance._get_query_params_json( + transcoded_request + ) + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.GetInstance", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "GetInstance", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._GetInstance._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner_instance_admin.Instance() + pb_resp = spanner_instance_admin.Instance.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_get_instance(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_get_instance_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = spanner_instance_admin.Instance.to_json(response) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.instance_v1.InstanceAdminClient.get_instance", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "GetInstance", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _GetInstanceConfig( + _BaseInstanceAdminRestTransport._BaseGetInstanceConfig, InstanceAdminRestStub + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.GetInstanceConfig") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: spanner_instance_admin.GetInstanceConfigRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: 
Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_instance_admin.InstanceConfig: + r"""Call the get instance config method over HTTP. + + Args: + request (~.spanner_instance_admin.GetInstanceConfigRequest): + The request object. The request for + [GetInstanceConfigRequest][google.spanner.admin.instance.v1.InstanceAdmin.GetInstanceConfig]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner_instance_admin.InstanceConfig: + A possible configuration for a Cloud + Spanner instance. Configurations define + the geographic placement of nodes and + their replication. + + """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseGetInstanceConfig._get_http_options() + ) + + request, metadata = self._interceptor.pre_get_instance_config( + request, metadata + ) + transcoded_request = _BaseInstanceAdminRestTransport._BaseGetInstanceConfig._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseGetInstanceConfig._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.GetInstanceConfig", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "GetInstanceConfig", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._GetInstanceConfig._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
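+ # For this RPC the transcoding above produces a plain GET on the resource
+ # path, i.e. /v1/projects/*/instanceConfigs/*. A hedged sketch via the
+ # public client; the config name is illustrative:
+ #
+ #   config = client.get_instance_config(
+ #       name="projects/my-project/instanceConfigs/regional-us-central1"
+ #   )
+ #   print(config.display_name, [r.location for r in config.replicas])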
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner_instance_admin.InstanceConfig() + pb_resp = spanner_instance_admin.InstanceConfig.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_get_instance_config(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_get_instance_config_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = spanner_instance_admin.InstanceConfig.to_json( + response + ) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.instance_v1.InstanceAdminClient.get_instance_config", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "GetInstanceConfig", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _GetInstancePartition( + _BaseInstanceAdminRestTransport._BaseGetInstancePartition, InstanceAdminRestStub + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.GetInstancePartition") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: spanner_instance_admin.GetInstancePartitionRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_instance_admin.InstancePartition: + r"""Call the get instance partition method over HTTP. + + Args: + request (~.spanner_instance_admin.GetInstancePartitionRequest): + The request object. The request for + [GetInstancePartition][google.spanner.admin.instance.v1.InstanceAdmin.GetInstancePartition]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner_instance_admin.InstancePartition: + An isolated set of Cloud Spanner + resources that databases can define + placements on. 
+ + """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseGetInstancePartition._get_http_options() + ) + + request, metadata = self._interceptor.pre_get_instance_partition( + request, metadata + ) + transcoded_request = _BaseInstanceAdminRestTransport._BaseGetInstancePartition._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseGetInstancePartition._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.GetInstancePartition", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "GetInstancePartition", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._GetInstancePartition._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner_instance_admin.InstancePartition() + pb_resp = spanner_instance_admin.InstancePartition.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_get_instance_partition(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_get_instance_partition_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = spanner_instance_admin.InstancePartition.to_json( + response + ) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.instance_v1.InstanceAdminClient.get_instance_partition", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "GetInstancePartition", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _ListInstanceConfigOperations( + _BaseInstanceAdminRestTransport._BaseListInstanceConfigOperations, + InstanceAdminRestStub, + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.ListInstanceConfigOperations") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + 
request: spanner_instance_admin.ListInstanceConfigOperationsRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_instance_admin.ListInstanceConfigOperationsResponse: + r"""Call the list instance config + operations method over HTTP. + + Args: + request (~.spanner_instance_admin.ListInstanceConfigOperationsRequest): + The request object. The request for + [ListInstanceConfigOperations][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigOperations]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner_instance_admin.ListInstanceConfigOperationsResponse: + The response for + [ListInstanceConfigOperations][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigOperations]. + + """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseListInstanceConfigOperations._get_http_options() + ) + + request, metadata = self._interceptor.pre_list_instance_config_operations( + request, metadata + ) + transcoded_request = _BaseInstanceAdminRestTransport._BaseListInstanceConfigOperations._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseListInstanceConfigOperations._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.ListInstanceConfigOperations", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "ListInstanceConfigOperations", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = ( + InstanceAdminRestTransport._ListInstanceConfigOperations._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
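+ # The public client wraps this response in a pager that follows
+ # next_page_token transparently, so callers rarely invoke this stub
+ # directly. A hedged sketch; the parent is illustrative:
+ #
+ #   for op in client.list_instance_config_operations(parent="projects/my-project"):
+ #       print(op.name, op.done)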
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner_instance_admin.ListInstanceConfigOperationsResponse() + pb_resp = spanner_instance_admin.ListInstanceConfigOperationsResponse.pb( + resp + ) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_list_instance_config_operations(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + ( + resp, + _, + ) = self._interceptor.post_list_instance_config_operations_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = spanner_instance_admin.ListInstanceConfigOperationsResponse.to_json( + response + ) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.instance_v1.InstanceAdminClient.list_instance_config_operations", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "ListInstanceConfigOperations", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _ListInstanceConfigs( + _BaseInstanceAdminRestTransport._BaseListInstanceConfigs, InstanceAdminRestStub + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.ListInstanceConfigs") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: spanner_instance_admin.ListInstanceConfigsRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_instance_admin.ListInstanceConfigsResponse: + r"""Call the list instance configs method over HTTP. + + Args: + request (~.spanner_instance_admin.ListInstanceConfigsRequest): + The request object. The request for + [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner_instance_admin.ListInstanceConfigsResponse: + The response for + [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs]. 
+ + """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseListInstanceConfigs._get_http_options() + ) + + request, metadata = self._interceptor.pre_list_instance_configs( + request, metadata + ) + transcoded_request = _BaseInstanceAdminRestTransport._BaseListInstanceConfigs._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseListInstanceConfigs._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.ListInstanceConfigs", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "ListInstanceConfigs", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._ListInstanceConfigs._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner_instance_admin.ListInstanceConfigsResponse() + pb_resp = spanner_instance_admin.ListInstanceConfigsResponse.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_list_instance_configs(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_list_instance_configs_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = ( + spanner_instance_admin.ListInstanceConfigsResponse.to_json( + response + ) + ) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.instance_v1.InstanceAdminClient.list_instance_configs", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "ListInstanceConfigs", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _ListInstancePartitionOperations( + _BaseInstanceAdminRestTransport._BaseListInstancePartitionOperations, + InstanceAdminRestStub, + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.ListInstancePartitionOperations") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return 
response + + def __call__( + self, + request: spanner_instance_admin.ListInstancePartitionOperationsRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_instance_admin.ListInstancePartitionOperationsResponse: + r"""Call the list instance partition + operations method over HTTP. + + Args: + request (~.spanner_instance_admin.ListInstancePartitionOperationsRequest): + The request object. The request for + [ListInstancePartitionOperations][google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitionOperations]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner_instance_admin.ListInstancePartitionOperationsResponse: + The response for + [ListInstancePartitionOperations][google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitionOperations]. + + """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseListInstancePartitionOperations._get_http_options() + ) + + ( + request, + metadata, + ) = self._interceptor.pre_list_instance_partition_operations( + request, metadata + ) + transcoded_request = _BaseInstanceAdminRestTransport._BaseListInstancePartitionOperations._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseListInstancePartitionOperations._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.ListInstancePartitionOperations", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "ListInstancePartitionOperations", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._ListInstancePartitionOperations._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
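+ # The request accepts a `filter` expression (see the
+ # ListInstancePartitionOperationsRequest proto docs). A hedged sketch
+ # restricting the listing to unfinished operations; names are illustrative:
+ #
+ #   request = spanner_instance_admin.ListInstancePartitionOperationsRequest(
+ #       parent="projects/my-project/instances/my-instance",
+ #       filter="done:false",
+ #   )
+ #   for op in client.list_instance_partition_operations(request=request):
+ #       print(op.name)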
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner_instance_admin.ListInstancePartitionOperationsResponse() + pb_resp = spanner_instance_admin.ListInstancePartitionOperationsResponse.pb( + resp + ) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_list_instance_partition_operations(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + ( + resp, + _, + ) = self._interceptor.post_list_instance_partition_operations_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = spanner_instance_admin.ListInstancePartitionOperationsResponse.to_json( + response + ) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.instance_v1.InstanceAdminClient.list_instance_partition_operations", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "ListInstancePartitionOperations", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _ListInstancePartitions( + _BaseInstanceAdminRestTransport._BaseListInstancePartitions, + InstanceAdminRestStub, + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.ListInstancePartitions") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: spanner_instance_admin.ListInstancePartitionsRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_instance_admin.ListInstancePartitionsResponse: + r"""Call the list instance partitions method over HTTP. + + Args: + request (~.spanner_instance_admin.ListInstancePartitionsRequest): + The request object. The request for + [ListInstancePartitions][google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitions]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner_instance_admin.ListInstancePartitionsResponse: + The response for + [ListInstancePartitions][google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitions]. 
+ + """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseListInstancePartitions._get_http_options() + ) + + request, metadata = self._interceptor.pre_list_instance_partitions( + request, metadata + ) + transcoded_request = _BaseInstanceAdminRestTransport._BaseListInstancePartitions._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseListInstancePartitions._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.ListInstancePartitions", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "ListInstancePartitions", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._ListInstancePartitions._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner_instance_admin.ListInstancePartitionsResponse() + pb_resp = spanner_instance_admin.ListInstancePartitionsResponse.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_list_instance_partitions(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_list_instance_partitions_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = ( + spanner_instance_admin.ListInstancePartitionsResponse.to_json( + response + ) + ) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.instance_v1.InstanceAdminClient.list_instance_partitions", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "ListInstancePartitions", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _ListInstances( + _BaseInstanceAdminRestTransport._BaseListInstances, InstanceAdminRestStub + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.ListInstances") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def 
__call__( + self, + request: spanner_instance_admin.ListInstancesRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner_instance_admin.ListInstancesResponse: + r"""Call the list instances method over HTTP. + + Args: + request (~.spanner_instance_admin.ListInstancesRequest): + The request object. The request for + [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner_instance_admin.ListInstancesResponse: + The response for + [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances]. + + """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseListInstances._get_http_options() + ) + + request, metadata = self._interceptor.pre_list_instances(request, metadata) + transcoded_request = _BaseInstanceAdminRestTransport._BaseListInstances._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseListInstances._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.ListInstances", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "ListInstances", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._ListInstances._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
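+ # (Added note, not in the original patch: core_exceptions.from_http_response maps the
+ # status code and JSON error payload to the matching exception subclass, e.g. NotFound
+ # for HTTP 404.)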
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner_instance_admin.ListInstancesResponse() + pb_resp = spanner_instance_admin.ListInstancesResponse.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_list_instances(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_list_instances_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = ( + spanner_instance_admin.ListInstancesResponse.to_json(response) + ) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.instance_v1.InstanceAdminClient.list_instances", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "ListInstances", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _MoveInstance( + _BaseInstanceAdminRestTransport._BaseMoveInstance, InstanceAdminRestStub + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.MoveInstance") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: spanner_instance_admin.MoveInstanceRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.Operation: + r"""Call the move instance method over HTTP. + + Args: + request (~.spanner_instance_admin.MoveInstanceRequest): + The request object. The request for + [MoveInstance][google.spanner.admin.instance.v1.InstanceAdmin.MoveInstance]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.operations_pb2.Operation: + This resource represents a + long-running operation that is the + result of a network API call. 
+ + """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseMoveInstance._get_http_options() + ) + + request, metadata = self._interceptor.pre_move_instance(request, metadata) + transcoded_request = _BaseInstanceAdminRestTransport._BaseMoveInstance._get_transcoded_request( + http_options, request + ) + + body = _BaseInstanceAdminRestTransport._BaseMoveInstance._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseMoveInstance._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.MoveInstance", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "MoveInstance", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._MoveInstance._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = operations_pb2.Operation() + json_format.Parse(response.content, resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_move_instance(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_move_instance_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.instance_v1.InstanceAdminClient.move_instance", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "MoveInstance", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _SetIamPolicy( + _BaseInstanceAdminRestTransport._BaseSetIamPolicy, InstanceAdminRestStub + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.SetIamPolicy") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: iam_policy_pb2.SetIamPolicyRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = 
None,
+ metadata: Sequence[Tuple[str, Union[str, bytes]]] = (),
+ ) -> policy_pb2.Policy:
+ r"""Call the set iam policy method over HTTP.
+
+ Args:
+ request (~.iam_policy_pb2.SetIamPolicyRequest):
+ The request object. Request message for ``SetIamPolicy`` method.
+ retry (google.api_core.retry.Retry): Designation of what errors, if any,
+ should be retried.
+ timeout (float): The timeout for this request.
+ metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be
+ sent along with the request as metadata. Normally, each value must be of type `str`,
+ but for metadata keys ending with the suffix `-bin`, the corresponding values must
+ be of type `bytes`.
+
+ Returns:
+ ~.policy_pb2.Policy:
+ An Identity and Access Management (IAM) policy, which
+ specifies access controls for Google Cloud resources.
+
+ A ``Policy`` is a collection of ``bindings``. A
+ ``binding`` binds one or more ``members``, or
+ principals, to a single ``role``. Principals can be user
+ accounts, service accounts, Google groups, and domains
+ (such as G Suite). A ``role`` is a named list of
+ permissions; each ``role`` can be an IAM predefined role
+ or a user-created custom role.
+
+ For some types of Google Cloud resources, a ``binding``
+ can also specify a ``condition``, which is a logical
+ expression that allows access to a resource only if the
+ expression evaluates to ``true``. A condition can add
+ constraints based on attributes of the request, the
+ resource, or both. To learn which resources support
+ conditions in their IAM policies, see the `IAM
+ documentation <https://cloud.google.com/iam/help/conditions/resource-policies>`__.
+
+ **JSON example:**
+
+ ::
+
+ {
+ "bindings": [
+ {
+ "role": "roles/resourcemanager.organizationAdmin",
+ "members": [
+ "user:mike@example.com",
+ "group:admins@example.com",
+ "domain:google.com",
+ "serviceAccount:my-project-id@appspot.gserviceaccount.com"
+ ]
+ },
+ {
+ "role": "roles/resourcemanager.organizationViewer",
+ "members": [
+ "user:eve@example.com"
+ ],
+ "condition": {
+ "title": "expirable access",
+ "description": "Does not grant access after Sep 2020",
+ "expression": "request.time <
+ timestamp('2020-10-01T00:00:00.000Z')",
+ }
+ }
+ ],
+ "etag": "BwWWja0YfJA=",
+ "version": 3
+ }
+
+ **YAML example:**
+
+ ::
+
+ bindings:
+ - members:
+ - user:mike@example.com
+ - group:admins@example.com
+ - domain:google.com
+ - serviceAccount:my-project-id@appspot.gserviceaccount.com
+ role: roles/resourcemanager.organizationAdmin
+ - members:
+ - user:eve@example.com
+ role: roles/resourcemanager.organizationViewer
+ condition:
+ title: expirable access
+ description: Does not grant access after Sep 2020
+ expression: request.time < timestamp('2020-10-01T00:00:00.000Z')
+ etag: BwWWja0YfJA=
+ version: 3
+
+ For a description of IAM and its features, see the `IAM
+ documentation <https://cloud.google.com/iam/docs/>`__.
+ + """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseSetIamPolicy._get_http_options() + ) + + request, metadata = self._interceptor.pre_set_iam_policy(request, metadata) + transcoded_request = _BaseInstanceAdminRestTransport._BaseSetIamPolicy._get_transcoded_request( + http_options, request + ) + + body = _BaseInstanceAdminRestTransport._BaseSetIamPolicy._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseSetIamPolicy._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.SetIamPolicy", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "SetIamPolicy", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._SetIamPolicy._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = policy_pb2.Policy() + pb_resp = resp + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_set_iam_policy(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_set_iam_policy_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.instance_v1.InstanceAdminClient.set_iam_policy", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "SetIamPolicy", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _TestIamPermissions( + _BaseInstanceAdminRestTransport._BaseTestIamPermissions, InstanceAdminRestStub + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.TestIamPermissions") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: iam_policy_pb2.TestIamPermissionsRequest, + *, + retry: OptionalRetry = 
gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> iam_policy_pb2.TestIamPermissionsResponse: + r"""Call the test iam permissions method over HTTP. + + Args: + request (~.iam_policy_pb2.TestIamPermissionsRequest): + The request object. Request message for ``TestIamPermissions`` method. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.iam_policy_pb2.TestIamPermissionsResponse: + Response message for ``TestIamPermissions`` method. + """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseTestIamPermissions._get_http_options() + ) + + request, metadata = self._interceptor.pre_test_iam_permissions( + request, metadata + ) + transcoded_request = _BaseInstanceAdminRestTransport._BaseTestIamPermissions._get_transcoded_request( + http_options, request + ) + + body = _BaseInstanceAdminRestTransport._BaseTestIamPermissions._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseTestIamPermissions._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.TestIamPermissions", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "TestIamPermissions", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._TestIamPermissions._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = iam_policy_pb2.TestIamPermissionsResponse() + pb_resp = resp + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_test_iam_permissions(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_test_iam_permissions_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.instance_v1.InstanceAdminClient.test_iam_permissions", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "TestIamPermissions", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _UpdateInstance( + _BaseInstanceAdminRestTransport._BaseUpdateInstance, InstanceAdminRestStub + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.UpdateInstance") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: spanner_instance_admin.UpdateInstanceRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.Operation: + r"""Call the update instance method over HTTP. + + Args: + request (~.spanner_instance_admin.UpdateInstanceRequest): + The request object. The request for + [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.operations_pb2.Operation: + This resource represents a + long-running operation that is the + result of a network API call. 
+ + """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseUpdateInstance._get_http_options() + ) + + request, metadata = self._interceptor.pre_update_instance(request, metadata) + transcoded_request = _BaseInstanceAdminRestTransport._BaseUpdateInstance._get_transcoded_request( + http_options, request + ) + + body = _BaseInstanceAdminRestTransport._BaseUpdateInstance._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseUpdateInstance._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.UpdateInstance", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "UpdateInstance", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._UpdateInstance._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = operations_pb2.Operation() + json_format.Parse(response.content, resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_update_instance(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_update_instance_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.instance_v1.InstanceAdminClient.update_instance", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "UpdateInstance", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _UpdateInstanceConfig( + _BaseInstanceAdminRestTransport._BaseUpdateInstanceConfig, InstanceAdminRestStub + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.UpdateInstanceConfig") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: spanner_instance_admin.UpdateInstanceConfigRequest, + *, + retry: 
OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.Operation: + r"""Call the update instance config method over HTTP. + + Args: + request (~.spanner_instance_admin.UpdateInstanceConfigRequest): + The request object. The request for + [UpdateInstanceConfig][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstanceConfig]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.operations_pb2.Operation: + This resource represents a + long-running operation that is the + result of a network API call. + + """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseUpdateInstanceConfig._get_http_options() + ) + + request, metadata = self._interceptor.pre_update_instance_config( + request, metadata + ) + transcoded_request = _BaseInstanceAdminRestTransport._BaseUpdateInstanceConfig._get_transcoded_request( + http_options, request + ) + + body = _BaseInstanceAdminRestTransport._BaseUpdateInstanceConfig._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseUpdateInstanceConfig._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.UpdateInstanceConfig", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "UpdateInstanceConfig", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._UpdateInstanceConfig._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = operations_pb2.Operation() + json_format.Parse(response.content, resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_update_instance_config(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_update_instance_config_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.instance_v1.InstanceAdminClient.update_instance_config", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "UpdateInstanceConfig", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _UpdateInstancePartition( + _BaseInstanceAdminRestTransport._BaseUpdateInstancePartition, + InstanceAdminRestStub, + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.UpdateInstancePartition") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: spanner_instance_admin.UpdateInstancePartitionRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.Operation: + r"""Call the update instance partition method over HTTP. + + Args: + request (~.spanner_instance_admin.UpdateInstancePartitionRequest): + The request object. The request for + [UpdateInstancePartition][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstancePartition]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.operations_pb2.Operation: + This resource represents a + long-running operation that is the + result of a network API call. 
+ + """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseUpdateInstancePartition._get_http_options() + ) + + request, metadata = self._interceptor.pre_update_instance_partition( + request, metadata + ) + transcoded_request = _BaseInstanceAdminRestTransport._BaseUpdateInstancePartition._get_transcoded_request( + http_options, request + ) + + body = _BaseInstanceAdminRestTransport._BaseUpdateInstancePartition._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseUpdateInstancePartition._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.UpdateInstancePartition", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "UpdateInstancePartition", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = ( + InstanceAdminRestTransport._UpdateInstancePartition._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = operations_pb2.Operation() + json_format.Parse(response.content, resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_update_instance_partition(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_update_instance_partition_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.instance_v1.InstanceAdminClient.update_instance_partition", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "UpdateInstancePartition", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + @property + def create_instance( + self, + ) -> Callable[ + [spanner_instance_admin.CreateInstanceRequest], operations_pb2.Operation + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._CreateInstance(self._session, self._host, self._interceptor) # type: ignore + + @property + def create_instance_config( + self, + ) -> Callable[ + [spanner_instance_admin.CreateInstanceConfigRequest], operations_pb2.Operation + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. 
+ # In C++ this would require a dynamic_cast + return self._CreateInstanceConfig(self._session, self._host, self._interceptor) # type: ignore + + @property + def create_instance_partition( + self, + ) -> Callable[ + [spanner_instance_admin.CreateInstancePartitionRequest], + operations_pb2.Operation, + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._CreateInstancePartition(self._session, self._host, self._interceptor) # type: ignore + + @property + def delete_instance( + self, + ) -> Callable[[spanner_instance_admin.DeleteInstanceRequest], empty_pb2.Empty]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._DeleteInstance(self._session, self._host, self._interceptor) # type: ignore + + @property + def delete_instance_config( + self, + ) -> Callable[ + [spanner_instance_admin.DeleteInstanceConfigRequest], empty_pb2.Empty + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._DeleteInstanceConfig(self._session, self._host, self._interceptor) # type: ignore + + @property + def delete_instance_partition( + self, + ) -> Callable[ + [spanner_instance_admin.DeleteInstancePartitionRequest], empty_pb2.Empty + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._DeleteInstancePartition(self._session, self._host, self._interceptor) # type: ignore + + @property + def get_iam_policy( + self, + ) -> Callable[[iam_policy_pb2.GetIamPolicyRequest], policy_pb2.Policy]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._GetIamPolicy(self._session, self._host, self._interceptor) # type: ignore + + @property + def get_instance( + self, + ) -> Callable[ + [spanner_instance_admin.GetInstanceRequest], spanner_instance_admin.Instance + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._GetInstance(self._session, self._host, self._interceptor) # type: ignore + + @property + def get_instance_config( + self, + ) -> Callable[ + [spanner_instance_admin.GetInstanceConfigRequest], + spanner_instance_admin.InstanceConfig, + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._GetInstanceConfig(self._session, self._host, self._interceptor) # type: ignore + + @property + def get_instance_partition( + self, + ) -> Callable[ + [spanner_instance_admin.GetInstancePartitionRequest], + spanner_instance_admin.InstancePartition, + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. 
+ # In C++ this would require a dynamic_cast + return self._GetInstancePartition(self._session, self._host, self._interceptor) # type: ignore + + @property + def list_instance_config_operations( + self, + ) -> Callable[ + [spanner_instance_admin.ListInstanceConfigOperationsRequest], + spanner_instance_admin.ListInstanceConfigOperationsResponse, + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._ListInstanceConfigOperations(self._session, self._host, self._interceptor) # type: ignore + + @property + def list_instance_configs( + self, + ) -> Callable[ + [spanner_instance_admin.ListInstanceConfigsRequest], + spanner_instance_admin.ListInstanceConfigsResponse, + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._ListInstanceConfigs(self._session, self._host, self._interceptor) # type: ignore + + @property + def list_instance_partition_operations( + self, + ) -> Callable[ + [spanner_instance_admin.ListInstancePartitionOperationsRequest], + spanner_instance_admin.ListInstancePartitionOperationsResponse, + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._ListInstancePartitionOperations(self._session, self._host, self._interceptor) # type: ignore + + @property + def list_instance_partitions( + self, + ) -> Callable[ + [spanner_instance_admin.ListInstancePartitionsRequest], + spanner_instance_admin.ListInstancePartitionsResponse, + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._ListInstancePartitions(self._session, self._host, self._interceptor) # type: ignore + + @property + def list_instances( + self, + ) -> Callable[ + [spanner_instance_admin.ListInstancesRequest], + spanner_instance_admin.ListInstancesResponse, + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._ListInstances(self._session, self._host, self._interceptor) # type: ignore + + @property + def move_instance( + self, + ) -> Callable[ + [spanner_instance_admin.MoveInstanceRequest], operations_pb2.Operation + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._MoveInstance(self._session, self._host, self._interceptor) # type: ignore + + @property + def set_iam_policy( + self, + ) -> Callable[[iam_policy_pb2.SetIamPolicyRequest], policy_pb2.Policy]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._SetIamPolicy(self._session, self._host, self._interceptor) # type: ignore + + @property + def test_iam_permissions( + self, + ) -> Callable[ + [iam_policy_pb2.TestIamPermissionsRequest], + iam_policy_pb2.TestIamPermissionsResponse, + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. 
+ # In C++ this would require a dynamic_cast + return self._TestIamPermissions(self._session, self._host, self._interceptor) # type: ignore + + @property + def update_instance( + self, + ) -> Callable[ + [spanner_instance_admin.UpdateInstanceRequest], operations_pb2.Operation + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._UpdateInstance(self._session, self._host, self._interceptor) # type: ignore + + @property + def update_instance_config( + self, + ) -> Callable[ + [spanner_instance_admin.UpdateInstanceConfigRequest], operations_pb2.Operation + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._UpdateInstanceConfig(self._session, self._host, self._interceptor) # type: ignore + + @property + def update_instance_partition( + self, + ) -> Callable[ + [spanner_instance_admin.UpdateInstancePartitionRequest], + operations_pb2.Operation, + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._UpdateInstancePartition(self._session, self._host, self._interceptor) # type: ignore + + @property + def cancel_operation(self): + return self._CancelOperation(self._session, self._host, self._interceptor) # type: ignore + + class _CancelOperation( + _BaseInstanceAdminRestTransport._BaseCancelOperation, InstanceAdminRestStub + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.CancelOperation") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: operations_pb2.CancelOperationRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Call the cancel operation method over HTTP. + + Args: + request (operations_pb2.CancelOperationRequest): + The request object for CancelOperation method. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseCancelOperation._get_http_options() + ) + + request, metadata = self._interceptor.pre_cancel_operation( + request, metadata + ) + transcoded_request = _BaseInstanceAdminRestTransport._BaseCancelOperation._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseCancelOperation._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.CancelOperation", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "CancelOperation", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._CancelOperation._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + return self._interceptor.post_cancel_operation(None) + + @property + def delete_operation(self): + return self._DeleteOperation(self._session, self._host, self._interceptor) # type: ignore + + class _DeleteOperation( + _BaseInstanceAdminRestTransport._BaseDeleteOperation, InstanceAdminRestStub + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.DeleteOperation") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: operations_pb2.DeleteOperationRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Call the delete operation method over HTTP. + + Args: + request (operations_pb2.DeleteOperationRequest): + The request object for DeleteOperation method. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseDeleteOperation._get_http_options() + ) + + request, metadata = self._interceptor.pre_delete_operation( + request, metadata + ) + transcoded_request = _BaseInstanceAdminRestTransport._BaseDeleteOperation._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseDeleteOperation._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.DeleteOperation", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "DeleteOperation", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._DeleteOperation._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + return self._interceptor.post_delete_operation(None) + + @property + def get_operation(self): + return self._GetOperation(self._session, self._host, self._interceptor) # type: ignore + + class _GetOperation( + _BaseInstanceAdminRestTransport._BaseGetOperation, InstanceAdminRestStub + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.GetOperation") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: operations_pb2.GetOperationRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.Operation: + r"""Call the get operation method over HTTP. + + Args: + request (operations_pb2.GetOperationRequest): + The request object for GetOperation method. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + operations_pb2.Operation: Response from GetOperation method. 
+ """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseGetOperation._get_http_options() + ) + + request, metadata = self._interceptor.pre_get_operation(request, metadata) + transcoded_request = _BaseInstanceAdminRestTransport._BaseGetOperation._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseGetOperation._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.GetOperation", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "GetOperation", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._GetOperation._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + content = response.content.decode("utf-8") + resp = operations_pb2.Operation() + resp = json_format.Parse(content, resp) + resp = self._interceptor.post_get_operation(resp) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.instance_v1.InstanceAdminAsyncClient.GetOperation", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "GetOperation", + "httpResponse": http_response, + "metadata": http_response["headers"], + }, + ) + return resp + + @property + def list_operations(self): + return self._ListOperations(self._session, self._host, self._interceptor) # type: ignore + + class _ListOperations( + _BaseInstanceAdminRestTransport._BaseListOperations, InstanceAdminRestStub + ): + def __hash__(self): + return hash("InstanceAdminRestTransport.ListOperations") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: operations_pb2.ListOperationsRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> operations_pb2.ListOperationsResponse: + r"""Call the list operations method over HTTP. 
+ + Args: + request (operations_pb2.ListOperationsRequest): + The request object for ListOperations method. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + operations_pb2.ListOperationsResponse: Response from ListOperations method. + """ + + http_options = ( + _BaseInstanceAdminRestTransport._BaseListOperations._get_http_options() + ) + + request, metadata = self._interceptor.pre_list_operations(request, metadata) + transcoded_request = _BaseInstanceAdminRestTransport._BaseListOperations._get_transcoded_request( + http_options, request + ) + + # Jsonify the query params + query_params = _BaseInstanceAdminRestTransport._BaseListOperations._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner.admin.instance_v1.InstanceAdminClient.ListOperations", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "ListOperations", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = InstanceAdminRestTransport._ListOperations._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + content = response.content.decode("utf-8") + resp = operations_pb2.ListOperationsResponse() + resp = json_format.Parse(content, resp) + resp = self._interceptor.post_list_operations(resp) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = json_format.MessageToJson(resp) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner.admin.instance_v1.InstanceAdminAsyncClient.ListOperations", + extra={ + "serviceName": "google.spanner.admin.instance.v1.InstanceAdmin", + "rpcName": "ListOperations", + "httpResponse": http_response, + "metadata": http_response["headers"], + }, + ) + return resp + + @property + def kind(self) -> str: + return "rest" + + def close(self): + self._session.close() + + +__all__ = ("InstanceAdminRestTransport",) diff --git a/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/rest_base.py b/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/rest_base.py new file mode 100644 index 0000000000..bf41644213 --- /dev/null +++ b/google/cloud/spanner_admin_instance_v1/services/instance_admin/transports/rest_base.py @@ -0,0 +1,1378 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +import json # type: ignore +from google.api_core import path_template +from google.api_core import gapic_v1 + +from google.protobuf import json_format +from .base import InstanceAdminTransport, DEFAULT_CLIENT_INFO + +import re +from typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Union + + +from google.cloud.spanner_admin_instance_v1.types import spanner_instance_admin +from google.iam.v1 import iam_policy_pb2 # type: ignore +from google.iam.v1 import policy_pb2 # type: ignore +from google.protobuf import empty_pb2 # type: ignore +from google.longrunning import operations_pb2 # type: ignore + + +class _BaseInstanceAdminRestTransport(InstanceAdminTransport): + """Base REST backend transport for InstanceAdmin. + + Note: This class is not meant to be used directly. Use its sync and + async sub-classes instead. + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. + + It sends JSON representations of protocol buffers over HTTP/1.1 + """ + + def __init__( + self, + *, + host: str = "spanner.googleapis.com", + credentials: Optional[Any] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + always_use_jwt_access: Optional[bool] = False, + url_scheme: str = "https", + api_audience: Optional[str] = None, + ) -> None: + """Instantiate the transport. + Args: + host (Optional[str]): + The hostname to connect to (default: 'spanner.googleapis.com'). 
+            credentials (Optional[Any]): The
+                authorization credentials to attach to requests. These
+                credentials identify the application to the service; if none
+                are specified, the client will attempt to ascertain the
+                credentials from the environment.
+            client_info (google.api_core.gapic_v1.client_info.ClientInfo):
+                The client info used to send a user-agent string along with
+                API requests. If ``None``, then default info will be used.
+                Generally, you only need to set this if you are developing
+                your own client library.
+            always_use_jwt_access (Optional[bool]): Whether self signed JWT should
+                be used for service account credentials.
+            url_scheme: the protocol scheme for the API endpoint. Normally
+                "https", but for testing or local servers,
+                "http" can be specified.
+        """
+        # Run the base constructor
+        maybe_url_match = re.match("^(?P<scheme>http(?:s)?://)?(?P<host>.*)$", host)
+        if maybe_url_match is None:
+            raise ValueError(
+                f"Unexpected hostname structure: {host}"
+            )  # pragma: NO COVER
+
+        url_match_items = maybe_url_match.groupdict()
+
+        host = f"{url_scheme}://{host}" if not url_match_items["scheme"] else host
+
+        super().__init__(
+            host=host,
+            credentials=credentials,
+            client_info=client_info,
+            always_use_jwt_access=always_use_jwt_access,
+            api_audience=api_audience,
+        )
+
+    class _BaseCreateInstance:
+        def __hash__(self):  # pragma: NO COVER
+            return NotImplementedError("__hash__ must be implemented.")
+
+        __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {}
+
+        @classmethod
+        def _get_unset_required_fields(cls, message_dict):
+            return {
+                k: v
+                for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items()
+                if k not in message_dict
+            }
+
+        @staticmethod
+        def _get_http_options():
+            http_options: List[Dict[str, str]] = [
+                {
+                    "method": "post",
+                    "uri": "/v1/{parent=projects/*}/instances",
+                    "body": "*",
+                },
+            ]
+            return http_options
+
+        @staticmethod
+        def _get_transcoded_request(http_options, request):
+            pb_request = spanner_instance_admin.CreateInstanceRequest.pb(request)
+            transcoded_request = path_template.transcode(http_options, pb_request)
+            return transcoded_request
+
+        @staticmethod
+        def _get_request_body_json(transcoded_request):
+            # Jsonify the request body
+
+            body = json_format.MessageToJson(
+                transcoded_request["body"], use_integers_for_enums=True
+            )
+            return body
+
+        @staticmethod
+        def _get_query_params_json(transcoded_request):
+            query_params = json.loads(
+                json_format.MessageToJson(
+                    transcoded_request["query_params"],
+                    use_integers_for_enums=True,
+                )
+            )
+            query_params.update(
+                _BaseInstanceAdminRestTransport._BaseCreateInstance._get_unset_required_fields(
+                    query_params
+                )
+            )
+
+            query_params["$alt"] = "json;enum-encoding=int"
+            return query_params
+
+    class _BaseCreateInstanceConfig:
+        def __hash__(self):  # pragma: NO COVER
+            return NotImplementedError("__hash__ must be implemented.")
+
+        __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {}
+
+        @classmethod
+        def _get_unset_required_fields(cls, message_dict):
+            return {
+                k: v
+                for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items()
+                if k not in message_dict
+            }
+
+        @staticmethod
+        def _get_http_options():
+            http_options: List[Dict[str, str]] = [
+                {
+                    "method": "post",
+                    "uri": "/v1/{parent=projects/*}/instanceConfigs",
+                    "body": "*",
+                },
+            ]
+            return http_options
+
+        @staticmethod
+        def _get_transcoded_request(http_options, request):
+            pb_request = spanner_instance_admin.CreateInstanceConfigRequest.pb(request)
+            transcoded_request = path_template.transcode(http_options, pb_request)
+            return
transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseInstanceAdminRestTransport._BaseCreateInstanceConfig._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseCreateInstancePartition: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{parent=projects/*/instances/*}/instancePartitions", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_instance_admin.CreateInstancePartitionRequest.pb( + request + ) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseInstanceAdminRestTransport._BaseCreateInstancePartition._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseDeleteInstance: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "delete", + "uri": "/v1/{name=projects/*/instances/*}", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_instance_admin.DeleteInstanceRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseInstanceAdminRestTransport._BaseDeleteInstance._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseDeleteInstanceConfig: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + 
def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "delete", + "uri": "/v1/{name=projects/*/instanceConfigs/*}", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_instance_admin.DeleteInstanceConfigRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseInstanceAdminRestTransport._BaseDeleteInstanceConfig._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseDeleteInstancePartition: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "delete", + "uri": "/v1/{name=projects/*/instances/*/instancePartitions/*}", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_instance_admin.DeleteInstancePartitionRequest.pb( + request + ) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseInstanceAdminRestTransport._BaseDeleteInstancePartition._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseGetIamPolicy: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{resource=projects/*/instances/*}:getIamPolicy", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = request + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + 
_BaseInstanceAdminRestTransport._BaseGetIamPolicy._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseGetInstance: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*}", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_instance_admin.GetInstanceRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseInstanceAdminRestTransport._BaseGetInstance._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseGetInstanceConfig: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{name=projects/*/instanceConfigs/*}", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_instance_admin.GetInstanceConfigRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseInstanceAdminRestTransport._BaseGetInstanceConfig._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseGetInstancePartition: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/instancePartitions/*}", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_instance_admin.GetInstancePartitionRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + 
transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseInstanceAdminRestTransport._BaseGetInstancePartition._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseListInstanceConfigOperations: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{parent=projects/*}/instanceConfigOperations", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_instance_admin.ListInstanceConfigOperationsRequest.pb( + request + ) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseInstanceAdminRestTransport._BaseListInstanceConfigOperations._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseListInstanceConfigs: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{parent=projects/*}/instanceConfigs", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_instance_admin.ListInstanceConfigsRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseInstanceAdminRestTransport._BaseListInstanceConfigs._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseListInstancePartitionOperations: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{parent=projects/*/instances/*}/instancePartitionOperations", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = ( + spanner_instance_admin.ListInstancePartitionOperationsRequest.pb( + request + ) + 
) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseInstanceAdminRestTransport._BaseListInstancePartitionOperations._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseListInstancePartitions: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{parent=projects/*/instances/*}/instancePartitions", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_instance_admin.ListInstancePartitionsRequest.pb( + request + ) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseInstanceAdminRestTransport._BaseListInstancePartitions._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseListInstances: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{parent=projects/*}/instances", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_instance_admin.ListInstancesRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseInstanceAdminRestTransport._BaseListInstances._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseMoveInstance: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{name=projects/*/instances/*}:move", + "body": "*", + }, + ] + return 
http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_instance_admin.MoveInstanceRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseInstanceAdminRestTransport._BaseMoveInstance._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseSetIamPolicy: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{resource=projects/*/instances/*}:setIamPolicy", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = request + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseInstanceAdminRestTransport._BaseSetIamPolicy._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseTestIamPermissions: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{resource=projects/*/instances/*}:testIamPermissions", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = request + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + 
_BaseInstanceAdminRestTransport._BaseTestIamPermissions._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseUpdateInstance: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "patch", + "uri": "/v1/{instance.name=projects/*/instances/*}", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_instance_admin.UpdateInstanceRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseInstanceAdminRestTransport._BaseUpdateInstance._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseUpdateInstanceConfig: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "patch", + "uri": "/v1/{instance_config.name=projects/*/instanceConfigs/*}", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_instance_admin.UpdateInstanceConfigRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseInstanceAdminRestTransport._BaseUpdateInstanceConfig._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseUpdateInstancePartition: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, 
str]] = [ + { + "method": "patch", + "uri": "/v1/{instance_partition.name=projects/*/instances/*/instancePartitions/*}", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner_instance_admin.UpdateInstancePartitionRequest.pb( + request + ) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseInstanceAdminRestTransport._BaseUpdateInstancePartition._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseCancelOperation: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{name=projects/*/instances/*/databases/*/operations/*}:cancel", + }, + { + "method": "post", + "uri": "/v1/{name=projects/*/instances/*/operations/*}:cancel", + }, + { + "method": "post", + "uri": "/v1/{name=projects/*/instances/*/backups/*/operations/*}:cancel", + }, + { + "method": "post", + "uri": "/v1/{name=projects/*/instances/*/instancePartitions/*/operations/*}:cancel", + }, + { + "method": "post", + "uri": "/v1/{name=projects/*/instanceConfigs/*/operations/*}:cancel", + }, + { + "method": "post", + "uri": "/v1/{name=projects/*/instanceConfigs/*/ssdCaches/*/operations/*}:cancel", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + request_kwargs = json_format.MessageToDict(request) + transcoded_request = path_template.transcode(http_options, **request_kwargs) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads(json.dumps(transcoded_request["query_params"])) + return query_params + + class _BaseDeleteOperation: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "delete", + "uri": "/v1/{name=projects/*/instances/*/databases/*/operations/*}", + }, + { + "method": "delete", + "uri": "/v1/{name=projects/*/instances/*/operations/*}", + }, + { + "method": "delete", + "uri": "/v1/{name=projects/*/instances/*/backups/*/operations/*}", + }, + { + "method": "delete", + "uri": "/v1/{name=projects/*/instances/*/instancePartitions/*/operations/*}", + }, + { + "method": "delete", + "uri": "/v1/{name=projects/*/instanceConfigs/*/operations/*}", + }, + { + "method": "delete", + "uri": "/v1/{name=projects/*/instanceConfigs/*/ssdCaches/*/operations/*}", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + request_kwargs = json_format.MessageToDict(request) + transcoded_request = path_template.transcode(http_options, **request_kwargs) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = 
json.loads(json.dumps(transcoded_request["query_params"])) + return query_params + + class _BaseGetOperation: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/databases/*/operations/*}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/operations/*}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/backups/*/operations/*}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/instancePartitions/*/operations/*}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instanceConfigs/*/operations/*}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instanceConfigs/*/ssdCaches/*/operations/*}", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + request_kwargs = json_format.MessageToDict(request) + transcoded_request = path_template.transcode(http_options, **request_kwargs) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads(json.dumps(transcoded_request["query_params"])) + return query_params + + class _BaseListOperations: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/databases/*/operations}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/operations}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/backups/*/operations}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/instancePartitions/*/operations}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instanceConfigs/*/operations}", + }, + { + "method": "get", + "uri": "/v1/{name=projects/*/instanceConfigs/*/ssdCaches/*/operations}", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + request_kwargs = json_format.MessageToDict(request) + transcoded_request = path_template.transcode(http_options, **request_kwargs) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads(json.dumps(transcoded_request["query_params"])) + return query_params + + +__all__ = ("_BaseInstanceAdminRestTransport",) diff --git a/google/cloud/spanner_admin_instance_v1/types.py b/google/cloud/spanner_admin_instance_v1/types.py deleted file mode 100644 index a20b479bf0..0000000000 --- a/google/cloud/spanner_admin_instance_v1/types.py +++ /dev/null @@ -1,66 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2018 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -from __future__ import absolute_import -import sys - - -from google.api import http_pb2 -from google.iam.v1 import iam_policy_pb2 -from google.iam.v1 import policy_pb2 -from google.iam.v1.logging import audit_data_pb2 -from google.longrunning import operations_pb2 -from google.protobuf import any_pb2 -from google.protobuf import descriptor_pb2 -from google.protobuf import empty_pb2 -from google.protobuf import field_mask_pb2 -from google.protobuf import timestamp_pb2 -from google.rpc import status_pb2 - -from google.api_core.protobuf_helpers import get_messages -from google.cloud.spanner_admin_instance_v1.proto import spanner_instance_admin_pb2 - - -_shared_modules = [ - http_pb2, - iam_policy_pb2, - policy_pb2, - audit_data_pb2, - operations_pb2, - any_pb2, - descriptor_pb2, - empty_pb2, - field_mask_pb2, - timestamp_pb2, - status_pb2, -] - -_local_modules = [spanner_instance_admin_pb2] - -names = [] - -for module in _shared_modules: - for name, message in get_messages(module).items(): - setattr(sys.modules[__name__], name, message) - names.append(name) - -for module in _local_modules: - for name, message in get_messages(module).items(): - message.__module__ = "google.cloud.spanner_admin_instance_v1.types" - setattr(sys.modules[__name__], name, message) - names.append(name) - -__all__ = tuple(sorted(names)) diff --git a/google/cloud/spanner_admin_instance_v1/types/__init__.py b/google/cloud/spanner_admin_instance_v1/types/__init__.py new file mode 100644 index 0000000000..9bd2de3e47 --- /dev/null +++ b/google/cloud/spanner_admin_instance_v1/types/__init__.py @@ -0,0 +1,104 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +from .common import ( + OperationProgress, + ReplicaSelection, + FulfillmentPeriod, +) +from .spanner_instance_admin import ( + AutoscalingConfig, + CreateInstanceConfigMetadata, + CreateInstanceConfigRequest, + CreateInstanceMetadata, + CreateInstancePartitionMetadata, + CreateInstancePartitionRequest, + CreateInstanceRequest, + DeleteInstanceConfigRequest, + DeleteInstancePartitionRequest, + DeleteInstanceRequest, + FreeInstanceMetadata, + GetInstanceConfigRequest, + GetInstancePartitionRequest, + GetInstanceRequest, + Instance, + InstanceConfig, + InstancePartition, + ListInstanceConfigOperationsRequest, + ListInstanceConfigOperationsResponse, + ListInstanceConfigsRequest, + ListInstanceConfigsResponse, + ListInstancePartitionOperationsRequest, + ListInstancePartitionOperationsResponse, + ListInstancePartitionsRequest, + ListInstancePartitionsResponse, + ListInstancesRequest, + ListInstancesResponse, + MoveInstanceMetadata, + MoveInstanceRequest, + MoveInstanceResponse, + ReplicaComputeCapacity, + ReplicaInfo, + UpdateInstanceConfigMetadata, + UpdateInstanceConfigRequest, + UpdateInstanceMetadata, + UpdateInstancePartitionMetadata, + UpdateInstancePartitionRequest, + UpdateInstanceRequest, +) + +__all__ = ( + "OperationProgress", + "ReplicaSelection", + "FulfillmentPeriod", + "AutoscalingConfig", + "CreateInstanceConfigMetadata", + "CreateInstanceConfigRequest", + "CreateInstanceMetadata", + "CreateInstancePartitionMetadata", + "CreateInstancePartitionRequest", + "CreateInstanceRequest", + "DeleteInstanceConfigRequest", + "DeleteInstancePartitionRequest", + "DeleteInstanceRequest", + "FreeInstanceMetadata", + "GetInstanceConfigRequest", + "GetInstancePartitionRequest", + "GetInstanceRequest", + "Instance", + "InstanceConfig", + "InstancePartition", + "ListInstanceConfigOperationsRequest", + "ListInstanceConfigOperationsResponse", + "ListInstanceConfigsRequest", + "ListInstanceConfigsResponse", + "ListInstancePartitionOperationsRequest", + "ListInstancePartitionOperationsResponse", + "ListInstancePartitionsRequest", + "ListInstancePartitionsResponse", + "ListInstancesRequest", + "ListInstancesResponse", + "MoveInstanceMetadata", + "MoveInstanceRequest", + "MoveInstanceResponse", + "ReplicaComputeCapacity", + "ReplicaInfo", + "UpdateInstanceConfigMetadata", + "UpdateInstanceConfigRequest", + "UpdateInstanceMetadata", + "UpdateInstancePartitionMetadata", + "UpdateInstancePartitionRequest", + "UpdateInstanceRequest", +) diff --git a/google/cloud/spanner_admin_instance_v1/types/common.py b/google/cloud/spanner_admin_instance_v1/types/common.py new file mode 100644 index 0000000000..548e61c38e --- /dev/null +++ b/google/cloud/spanner_admin_instance_v1/types/common.py @@ -0,0 +1,99 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
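+#
+# Usage sketch (illustrative): the types defined below are proto-plus
+# messages and enums, so they can be constructed with keyword arguments,
+# for example OperationProgress(progress_percent=50) or
+# ReplicaSelection(location="us-central1").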
+#
+from __future__ import annotations
+
+from typing import MutableMapping, MutableSequence
+
+import proto  # type: ignore
+
+from google.protobuf import timestamp_pb2  # type: ignore
+
+
+__protobuf__ = proto.module(
+    package="google.spanner.admin.instance.v1",
+    manifest={
+        "FulfillmentPeriod",
+        "OperationProgress",
+        "ReplicaSelection",
+    },
+)
+
+
+class FulfillmentPeriod(proto.Enum):
+    r"""Indicates the expected fulfillment period of an operation.
+
+    Values:
+        FULFILLMENT_PERIOD_UNSPECIFIED (0):
+            Not specified.
+        FULFILLMENT_PERIOD_NORMAL (1):
+            Normal fulfillment period. The operation is
+            expected to complete within minutes.
+        FULFILLMENT_PERIOD_EXTENDED (2):
+            Extended fulfillment period. It can take up
+            to an hour for the operation to complete.
+    """
+    FULFILLMENT_PERIOD_UNSPECIFIED = 0
+    FULFILLMENT_PERIOD_NORMAL = 1
+    FULFILLMENT_PERIOD_EXTENDED = 2
+
+
+class OperationProgress(proto.Message):
+    r"""Encapsulates progress-related information for a Cloud Spanner
+    long-running instance operation.
+
+    Attributes:
+        progress_percent (int):
+            Percent completion of the operation.
+            Values are between 0 and 100 inclusive.
+        start_time (google.protobuf.timestamp_pb2.Timestamp):
+            Time the request was received.
+        end_time (google.protobuf.timestamp_pb2.Timestamp):
+            If set, the time at which this operation
+            failed or was completed successfully.
+    """
+
+    progress_percent: int = proto.Field(
+        proto.INT32,
+        number=1,
+    )
+    start_time: timestamp_pb2.Timestamp = proto.Field(
+        proto.MESSAGE,
+        number=2,
+        message=timestamp_pb2.Timestamp,
+    )
+    end_time: timestamp_pb2.Timestamp = proto.Field(
+        proto.MESSAGE,
+        number=3,
+        message=timestamp_pb2.Timestamp,
+    )
+
+
+class ReplicaSelection(proto.Message):
+    r"""ReplicaSelection identifies replicas with common properties.
+
+    Attributes:
+        location (str):
+            Required. Name of the location of the
+            replicas (e.g., "us-central1").
+    """
+
+    location: str = proto.Field(
+        proto.STRING,
+        number=1,
+    )
+
+
+__all__ = tuple(sorted(__protobuf__.manifest))
diff --git a/google/cloud/spanner_admin_instance_v1/types/spanner_instance_admin.py b/google/cloud/spanner_admin_instance_v1/types/spanner_instance_admin.py
new file mode 100644
index 0000000000..1e1509d1c4
--- /dev/null
+++ b/google/cloud/spanner_admin_instance_v1/types/spanner_instance_admin.py
@@ -0,0 +1,2375 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
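+#
+# Usage sketch (illustrative): the request messages defined below are what
+# the REST transport's _Base* classes above serialize. For example,
+#     request = spanner_instance_admin.ListInstancesRequest(parent="projects/my-project")
+#     pb_request = spanner_instance_admin.ListInstancesRequest.pb(request)
+# produces the raw protobuf that path_template.transcode() matches against
+# the per-method HTTP options (e.g. "/v1/{parent=projects/*}/instances").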
+# +from __future__ import annotations + +from typing import MutableMapping, MutableSequence + +import proto # type: ignore + +from google.cloud.spanner_admin_instance_v1.types import common +from google.longrunning import operations_pb2 # type: ignore +from google.protobuf import field_mask_pb2 # type: ignore +from google.protobuf import timestamp_pb2 # type: ignore + + +__protobuf__ = proto.module( + package="google.spanner.admin.instance.v1", + manifest={ + "ReplicaInfo", + "InstanceConfig", + "ReplicaComputeCapacity", + "AutoscalingConfig", + "Instance", + "ListInstanceConfigsRequest", + "ListInstanceConfigsResponse", + "GetInstanceConfigRequest", + "CreateInstanceConfigRequest", + "UpdateInstanceConfigRequest", + "DeleteInstanceConfigRequest", + "ListInstanceConfigOperationsRequest", + "ListInstanceConfigOperationsResponse", + "GetInstanceRequest", + "CreateInstanceRequest", + "ListInstancesRequest", + "ListInstancesResponse", + "UpdateInstanceRequest", + "DeleteInstanceRequest", + "CreateInstanceMetadata", + "UpdateInstanceMetadata", + "FreeInstanceMetadata", + "CreateInstanceConfigMetadata", + "UpdateInstanceConfigMetadata", + "InstancePartition", + "CreateInstancePartitionMetadata", + "CreateInstancePartitionRequest", + "DeleteInstancePartitionRequest", + "GetInstancePartitionRequest", + "UpdateInstancePartitionRequest", + "UpdateInstancePartitionMetadata", + "ListInstancePartitionsRequest", + "ListInstancePartitionsResponse", + "ListInstancePartitionOperationsRequest", + "ListInstancePartitionOperationsResponse", + "MoveInstanceRequest", + "MoveInstanceResponse", + "MoveInstanceMetadata", + }, +) + + +class ReplicaInfo(proto.Message): + r""" + + Attributes: + location (str): + The location of the serving resources, e.g., + "us-central1". + type_ (google.cloud.spanner_admin_instance_v1.types.ReplicaInfo.ReplicaType): + The type of replica. + default_leader_location (bool): + If true, this location is designated as the default leader + location where leader replicas are placed. See the `region + types + documentation `__ + for more details. + """ + + class ReplicaType(proto.Enum): + r"""Indicates the type of replica. See the `replica types + documentation `__ + for more details. + + Values: + TYPE_UNSPECIFIED (0): + Not specified. + READ_WRITE (1): + Read-write replicas support both reads and writes. These + replicas: + + - Maintain a full copy of your data. + - Serve reads. + - Can vote whether to commit a write. + - Participate in leadership election. + - Are eligible to become a leader. + READ_ONLY (2): + Read-only replicas only support reads (not writes). + Read-only replicas: + + - Maintain a full copy of your data. + - Serve reads. + - Do not participate in voting to commit writes. + - Are not eligible to become a leader. + WITNESS (3): + Witness replicas don't support reads but do participate in + voting to commit writes. Witness replicas: + + - Do not maintain a full copy of data. + - Do not serve reads. + - Vote whether to commit writes. + - Participate in leader election but are not eligible to + become leader. + """ + TYPE_UNSPECIFIED = 0 + READ_WRITE = 1 + READ_ONLY = 2 + WITNESS = 3 + + location: str = proto.Field( + proto.STRING, + number=1, + ) + type_: ReplicaType = proto.Field( + proto.ENUM, + number=2, + enum=ReplicaType, + ) + default_leader_location: bool = proto.Field( + proto.BOOL, + number=3, + ) + + +class InstanceConfig(proto.Message): + r"""A possible configuration for a Cloud Spanner instance. 
+    Configurations define the geographic placement of nodes and
+    their replication.
+
+    Attributes:
+        name (str):
+            A unique identifier for the instance configuration. Values
+            are of the form
+            ``projects/<project>/instanceConfigs/[a-z][-a-z0-9]*``.
+
+            User instance configurations must start with ``custom-``.
+        display_name (str):
+            The name of this instance configuration as it
+            appears in UIs.
+        config_type (google.cloud.spanner_admin_instance_v1.types.InstanceConfig.Type):
+            Output only. Whether this instance
+            configuration is a Google-managed or
+            user-managed configuration.
+        replicas (MutableSequence[google.cloud.spanner_admin_instance_v1.types.ReplicaInfo]):
+            The geographic placement of nodes in this instance
+            configuration and their replication properties.
+
+            To create user-managed configurations, input ``replicas``
+            must include all replicas in ``replicas`` of the
+            ``base_config`` and include one or more replicas in the
+            ``optional_replicas`` of the ``base_config``.
+        optional_replicas (MutableSequence[google.cloud.spanner_admin_instance_v1.types.ReplicaInfo]):
+            Output only. The available optional replicas
+            to choose from for user-managed configurations.
+            Populated for Google-managed configurations.
+        base_config (str):
+            Base configuration name, e.g.
+            projects/<project>/instanceConfigs/nam3, based on which
+            this configuration is created. Only set for user-managed
+            configurations. ``base_config`` must refer to a
+            configuration of type ``GOOGLE_MANAGED`` in the same project
+            as this configuration.
+        labels (MutableMapping[str, str]):
+            Cloud Labels are a flexible and lightweight mechanism for
+            organizing cloud resources into groups that reflect a
+            customer's organizational needs and deployment strategies.
+            Cloud Labels can be used to filter collections of resources.
+            They can be used to control how resource metrics are
+            aggregated. And they can be used as arguments to policy
+            management rules (e.g. route, firewall, load balancing,
+            etc.).
+
+            - Label keys must be between 1 and 63 characters long and
+              must conform to the following regular expression:
+              ``[a-z][a-z0-9_-]{0,62}``.
+            - Label values must be between 0 and 63 characters long and
+              must conform to the regular expression
+              ``[a-z0-9_-]{0,63}``.
+            - No more than 64 labels can be associated with a given
+              resource.
+
+            See https://goo.gl/xmQnxf for more information on and
+            examples of labels.
+
+            If you plan to use labels in your own code, please note that
+            additional characters may be allowed in the future.
+            Therefore, you are advised to use an internal label
+            representation, such as JSON, which doesn't rely upon
+            specific characters being disallowed. For example,
+            representing labels as the string: name + "*" + value would
+            prove problematic if we were to allow "*" in a future
+            release.
+        etag (str):
+            etag is used for optimistic concurrency
+            control as a way to help prevent simultaneous
+            updates of an instance configuration from
+            overwriting each other. It is strongly suggested
+            that systems make use of the etag in the
+            read-modify-write cycle to perform instance
+            configuration updates in order to avoid race
+            conditions: An etag is returned in the response
+            which contains instance configurations, and
+            systems are expected to put that etag in the
+            request to update instance configuration to
+            ensure that their change is applied to the same
+            version of the instance configuration.
If no + etag is provided in the call to update the + instance configuration, then the existing + instance configuration is overwritten blindly. + leader_options (MutableSequence[str]): + Allowed values of the "default_leader" schema option for + databases in instances that use this instance configuration. + reconciling (bool): + Output only. If true, the instance + configuration is being created or updated. If + false, there are no ongoing operations for the + instance configuration. + state (google.cloud.spanner_admin_instance_v1.types.InstanceConfig.State): + Output only. The current instance configuration state. + Applicable only for ``USER_MANAGED`` configurations. + free_instance_availability (google.cloud.spanner_admin_instance_v1.types.InstanceConfig.FreeInstanceAvailability): + Output only. Describes whether free instances + are available to be created in this instance + configuration. + quorum_type (google.cloud.spanner_admin_instance_v1.types.InstanceConfig.QuorumType): + Output only. The ``QuorumType`` of the instance + configuration. + storage_limit_per_processing_unit (int): + Output only. The storage limit in bytes per + processing unit. + """ + + class Type(proto.Enum): + r"""The type of this configuration. + + Values: + TYPE_UNSPECIFIED (0): + Unspecified. + GOOGLE_MANAGED (1): + Google-managed configuration. + USER_MANAGED (2): + User-managed configuration. + """ + TYPE_UNSPECIFIED = 0 + GOOGLE_MANAGED = 1 + USER_MANAGED = 2 + + class State(proto.Enum): + r"""Indicates the current state of the instance configuration. + + Values: + STATE_UNSPECIFIED (0): + Not specified. + CREATING (1): + The instance configuration is still being + created. + READY (2): + The instance configuration is fully created + and ready to be used to create instances. + """ + STATE_UNSPECIFIED = 0 + CREATING = 1 + READY = 2 + + class FreeInstanceAvailability(proto.Enum): + r"""Describes the availability for free instances to be created + in an instance configuration. + + Values: + FREE_INSTANCE_AVAILABILITY_UNSPECIFIED (0): + Not specified. + AVAILABLE (1): + Indicates that free instances are available + to be created in this instance configuration. + UNSUPPORTED (2): + Indicates that free instances are not + supported in this instance configuration. + DISABLED (3): + Indicates that free instances are currently + not available to be created in this instance + configuration. + QUOTA_EXCEEDED (4): + Indicates that additional free instances + cannot be created in this instance configuration + because the project has reached its limit of + free instances. + """ + FREE_INSTANCE_AVAILABILITY_UNSPECIFIED = 0 + AVAILABLE = 1 + UNSUPPORTED = 2 + DISABLED = 3 + QUOTA_EXCEEDED = 4 + + class QuorumType(proto.Enum): + r"""Indicates the quorum type of this instance configuration. + + Values: + QUORUM_TYPE_UNSPECIFIED (0): + Quorum type not specified. + REGION (1): + An instance configuration tagged with ``REGION`` quorum type + forms a write quorum in a single region. + DUAL_REGION (2): + An instance configuration tagged with the ``DUAL_REGION`` + quorum type forms a write quorum with exactly two read-write + regions in a multi-region configuration. + + This instance configuration requires failover in the event + of regional failures. + MULTI_REGION (3): + An instance configuration tagged with the ``MULTI_REGION`` + quorum type forms a write quorum from replicas that are + spread across more than one region in a multi-region + configuration. 
+ """ + QUORUM_TYPE_UNSPECIFIED = 0 + REGION = 1 + DUAL_REGION = 2 + MULTI_REGION = 3 + + name: str = proto.Field( + proto.STRING, + number=1, + ) + display_name: str = proto.Field( + proto.STRING, + number=2, + ) + config_type: Type = proto.Field( + proto.ENUM, + number=5, + enum=Type, + ) + replicas: MutableSequence["ReplicaInfo"] = proto.RepeatedField( + proto.MESSAGE, + number=3, + message="ReplicaInfo", + ) + optional_replicas: MutableSequence["ReplicaInfo"] = proto.RepeatedField( + proto.MESSAGE, + number=6, + message="ReplicaInfo", + ) + base_config: str = proto.Field( + proto.STRING, + number=7, + ) + labels: MutableMapping[str, str] = proto.MapField( + proto.STRING, + proto.STRING, + number=8, + ) + etag: str = proto.Field( + proto.STRING, + number=9, + ) + leader_options: MutableSequence[str] = proto.RepeatedField( + proto.STRING, + number=4, + ) + reconciling: bool = proto.Field( + proto.BOOL, + number=10, + ) + state: State = proto.Field( + proto.ENUM, + number=11, + enum=State, + ) + free_instance_availability: FreeInstanceAvailability = proto.Field( + proto.ENUM, + number=12, + enum=FreeInstanceAvailability, + ) + quorum_type: QuorumType = proto.Field( + proto.ENUM, + number=18, + enum=QuorumType, + ) + storage_limit_per_processing_unit: int = proto.Field( + proto.INT64, + number=19, + ) + + +class ReplicaComputeCapacity(proto.Message): + r"""ReplicaComputeCapacity describes the amount of server + resources that are allocated to each replica identified by the + replica selection. + + This message has `oneof`_ fields (mutually exclusive fields). + For each oneof, at most one member field can be set at the same time. + Setting any member of the oneof automatically clears all other + members. + + .. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields + + Attributes: + replica_selection (google.cloud.spanner_admin_instance_v1.types.ReplicaSelection): + Required. Identifies replicas by specified + properties. All replicas in the selection have + the same amount of compute capacity. + node_count (int): + The number of nodes allocated to each replica. + + This may be zero in API responses for instances that are not + yet in state ``READY``. + + This field is a member of `oneof`_ ``compute_capacity``. + processing_units (int): + The number of processing units allocated to each replica. + + This may be zero in API responses for instances that are not + yet in state ``READY``. + + This field is a member of `oneof`_ ``compute_capacity``. + """ + + replica_selection: common.ReplicaSelection = proto.Field( + proto.MESSAGE, + number=1, + message=common.ReplicaSelection, + ) + node_count: int = proto.Field( + proto.INT32, + number=2, + oneof="compute_capacity", + ) + processing_units: int = proto.Field( + proto.INT32, + number=3, + oneof="compute_capacity", + ) + + +class AutoscalingConfig(proto.Message): + r"""Autoscaling configuration for an instance. + + Attributes: + autoscaling_limits (google.cloud.spanner_admin_instance_v1.types.AutoscalingConfig.AutoscalingLimits): + Required. Autoscaling limits for an instance. + autoscaling_targets (google.cloud.spanner_admin_instance_v1.types.AutoscalingConfig.AutoscalingTargets): + Required. The autoscaling targets for an + instance. + asymmetric_autoscaling_options (MutableSequence[google.cloud.spanner_admin_instance_v1.types.AutoscalingConfig.AsymmetricAutoscalingOption]): + Optional. Optional asymmetric autoscaling + options. 
Replicas matching the replica selection + criteria will be autoscaled independently from + other replicas. The autoscaler will scale the + replicas based on the utilization of replicas + identified by the replica selection. Replica + selections should not overlap with each other. + + Other replicas (those do not match any replica + selection) will be autoscaled together and will + have the same compute capacity allocated to + them. + """ + + class AutoscalingLimits(proto.Message): + r"""The autoscaling limits for the instance. Users can define the + minimum and maximum compute capacity allocated to the instance, and + the autoscaler will only scale within that range. Users can either + use nodes or processing units to specify the limits, but should use + the same unit to set both the min_limit and max_limit. + + This message has `oneof`_ fields (mutually exclusive fields). + For each oneof, at most one member field can be set at the same time. + Setting any member of the oneof automatically clears all other + members. + + .. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields + + Attributes: + min_nodes (int): + Minimum number of nodes allocated to the + instance. If set, this number should be greater + than or equal to 1. + + This field is a member of `oneof`_ ``min_limit``. + min_processing_units (int): + Minimum number of processing units allocated + to the instance. If set, this number should be + multiples of 1000. + + This field is a member of `oneof`_ ``min_limit``. + max_nodes (int): + Maximum number of nodes allocated to the instance. If set, + this number should be greater than or equal to min_nodes. + + This field is a member of `oneof`_ ``max_limit``. + max_processing_units (int): + Maximum number of processing units allocated to the + instance. If set, this number should be multiples of 1000 + and be greater than or equal to min_processing_units. + + This field is a member of `oneof`_ ``max_limit``. + """ + + min_nodes: int = proto.Field( + proto.INT32, + number=1, + oneof="min_limit", + ) + min_processing_units: int = proto.Field( + proto.INT32, + number=2, + oneof="min_limit", + ) + max_nodes: int = proto.Field( + proto.INT32, + number=3, + oneof="max_limit", + ) + max_processing_units: int = proto.Field( + proto.INT32, + number=4, + oneof="max_limit", + ) + + class AutoscalingTargets(proto.Message): + r"""The autoscaling targets for an instance. + + Attributes: + high_priority_cpu_utilization_percent (int): + Required. The target high priority cpu utilization + percentage that the autoscaler should be trying to achieve + for the instance. This number is on a scale from 0 (no + utilization) to 100 (full utilization). The valid range is + [10, 90] inclusive. + storage_utilization_percent (int): + Required. The target storage utilization percentage that the + autoscaler should be trying to achieve for the instance. + This number is on a scale from 0 (no utilization) to 100 + (full utilization). The valid range is [10, 99] inclusive. + """ + + high_priority_cpu_utilization_percent: int = proto.Field( + proto.INT32, + number=1, + ) + storage_utilization_percent: int = proto.Field( + proto.INT32, + number=2, + ) + + class AsymmetricAutoscalingOption(proto.Message): + r"""AsymmetricAutoscalingOption specifies the scaling of replicas + identified by the given selection. + + Attributes: + replica_selection (google.cloud.spanner_admin_instance_v1.types.ReplicaSelection): + Required. 
Selects the replicas to which this + AsymmetricAutoscalingOption applies. Only + read-only replicas are supported. + overrides (google.cloud.spanner_admin_instance_v1.types.AutoscalingConfig.AsymmetricAutoscalingOption.AutoscalingConfigOverrides): + Optional. Overrides applied to the top-level + autoscaling configuration for the selected + replicas. + """ + + class AutoscalingConfigOverrides(proto.Message): + r"""Overrides the top-level autoscaling configuration for the replicas + identified by ``replica_selection``. All fields in this message are + optional. Any unspecified fields will use the corresponding values + from the top-level autoscaling configuration. + + Attributes: + autoscaling_limits (google.cloud.spanner_admin_instance_v1.types.AutoscalingConfig.AutoscalingLimits): + Optional. If specified, overrides the min/max + limit in the top-level autoscaling configuration + for the selected replicas. + autoscaling_target_high_priority_cpu_utilization_percent (int): + Optional. If specified, overrides the autoscaling target + high_priority_cpu_utilization_percent in the top-level + autoscaling configuration for the selected replicas. + """ + + autoscaling_limits: "AutoscalingConfig.AutoscalingLimits" = proto.Field( + proto.MESSAGE, + number=1, + message="AutoscalingConfig.AutoscalingLimits", + ) + autoscaling_target_high_priority_cpu_utilization_percent: int = proto.Field( + proto.INT32, + number=2, + ) + + replica_selection: common.ReplicaSelection = proto.Field( + proto.MESSAGE, + number=1, + message=common.ReplicaSelection, + ) + overrides: "AutoscalingConfig.AsymmetricAutoscalingOption.AutoscalingConfigOverrides" = proto.Field( + proto.MESSAGE, + number=2, + message="AutoscalingConfig.AsymmetricAutoscalingOption.AutoscalingConfigOverrides", + ) + + autoscaling_limits: AutoscalingLimits = proto.Field( + proto.MESSAGE, + number=1, + message=AutoscalingLimits, + ) + autoscaling_targets: AutoscalingTargets = proto.Field( + proto.MESSAGE, + number=2, + message=AutoscalingTargets, + ) + asymmetric_autoscaling_options: MutableSequence[ + AsymmetricAutoscalingOption + ] = proto.RepeatedField( + proto.MESSAGE, + number=3, + message=AsymmetricAutoscalingOption, + ) + + +class Instance(proto.Message): + r"""An isolated set of Cloud Spanner resources on which databases + can be hosted. + + Attributes: + name (str): + Required. A unique identifier for the instance, which cannot + be changed after the instance is created. Values are of the + form + ``projects//instances/[a-z][-a-z0-9]*[a-z0-9]``. + The final segment of the name must be between 2 and 64 + characters in length. + config (str): + Required. The name of the instance's configuration. Values + are of the form + ``projects//instanceConfigs/``. See + also + [InstanceConfig][google.spanner.admin.instance.v1.InstanceConfig] + and + [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs]. + display_name (str): + Required. The descriptive name for this + instance as it appears in UIs. Must be unique + per project and between 4 and 30 characters in + length. + node_count (int): + The number of nodes allocated to this instance. At most, one + of either ``node_count`` or ``processing_units`` should be + present in the message. + + Users can set the ``node_count`` field to specify the target + number of nodes allocated to the instance. + + If autoscaling is enabled, ``node_count`` is treated as an + ``OUTPUT_ONLY`` field and reflects the current number of + nodes allocated to the instance. 
+ + This might be zero in API responses for instances that are + not yet in the ``READY`` state. + + For more information, see `Compute capacity, nodes, and + processing + units `__. + processing_units (int): + The number of processing units allocated to this instance. + At most, one of either ``processing_units`` or + ``node_count`` should be present in the message. + + Users can set the ``processing_units`` field to specify the + target number of processing units allocated to the instance. + + If autoscaling is enabled, ``processing_units`` is treated + as an ``OUTPUT_ONLY`` field and reflects the current number + of processing units allocated to the instance. + + This might be zero in API responses for instances that are + not yet in the ``READY`` state. + + For more information, see `Compute capacity, nodes and + processing + units `__. + replica_compute_capacity (MutableSequence[google.cloud.spanner_admin_instance_v1.types.ReplicaComputeCapacity]): + Output only. Lists the compute capacity per + ReplicaSelection. A replica selection identifies + a set of replicas with common properties. + Replicas identified by a ReplicaSelection are + scaled with the same compute capacity. + autoscaling_config (google.cloud.spanner_admin_instance_v1.types.AutoscalingConfig): + Optional. The autoscaling configuration. Autoscaling is + enabled if this field is set. When autoscaling is enabled, + node_count and processing_units are treated as OUTPUT_ONLY + fields and reflect the current compute capacity allocated to + the instance. + state (google.cloud.spanner_admin_instance_v1.types.Instance.State): + Output only. The current instance state. For + [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance], + the state must be either omitted or set to ``CREATING``. For + [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance], + the state must be either omitted or set to ``READY``. + labels (MutableMapping[str, str]): + Cloud Labels are a flexible and lightweight mechanism for + organizing cloud resources into groups that reflect a + customer's organizational needs and deployment strategies. + Cloud Labels can be used to filter collections of resources. + They can be used to control how resource metrics are + aggregated. And they can be used as arguments to policy + management rules (e.g. route, firewall, load balancing, + etc.). + + - Label keys must be between 1 and 63 characters long and + must conform to the following regular expression: + ``[a-z][a-z0-9_-]{0,62}``. + - Label values must be between 0 and 63 characters long and + must conform to the regular expression + ``[a-z0-9_-]{0,63}``. + - No more than 64 labels can be associated with a given + resource. + + See https://goo.gl/xmQnxf for more information on and + examples of labels. + + If you plan to use labels in your own code, please note that + additional characters may be allowed in the future. And so + you are advised to use an internal label representation, + such as JSON, which doesn't rely upon specific characters + being disallowed. For example, representing labels as the + string: name + "*" + value would prove problematic if we + were to allow "*" in a future release. + instance_type (google.cloud.spanner_admin_instance_v1.types.Instance.InstanceType): + The ``InstanceType`` of the current instance. + endpoint_uris (MutableSequence[str]): + Deprecated. This field is not populated. + create_time (google.protobuf.timestamp_pb2.Timestamp): + Output only. 
The time at which the instance + was created. + update_time (google.protobuf.timestamp_pb2.Timestamp): + Output only. The time at which the instance + was most recently updated. + free_instance_metadata (google.cloud.spanner_admin_instance_v1.types.FreeInstanceMetadata): + Free instance metadata. Only populated for + free instances. + edition (google.cloud.spanner_admin_instance_v1.types.Instance.Edition): + Optional. The ``Edition`` of the current instance. + default_backup_schedule_type (google.cloud.spanner_admin_instance_v1.types.Instance.DefaultBackupScheduleType): + Optional. Controls the default backup schedule behavior for + new databases within the instance. By default, a backup + schedule is created automatically when a new database is + created in a new instance. + + Note that the ``AUTOMATIC`` value isn't permitted for free + instances, as backups and backup schedules aren't supported + for free instances. + + In the ``GetInstance`` or ``ListInstances`` response, if the + value of ``default_backup_schedule_type`` isn't set, or set + to ``NONE``, Spanner doesn't create a default backup + schedule for new databases in the instance. + """ + + class State(proto.Enum): + r"""Indicates the current state of the instance. + + Values: + STATE_UNSPECIFIED (0): + Not specified. + CREATING (1): + The instance is still being created. + Resources may not be available yet, and + operations such as database creation may not + work. + READY (2): + The instance is fully created and ready to do + work such as creating databases. + """ + STATE_UNSPECIFIED = 0 + CREATING = 1 + READY = 2 + + class InstanceType(proto.Enum): + r"""The type of this instance. The type can be used to distinguish + product variants, that can affect aspects like: usage restrictions, + quotas and billing. Currently this is used to distinguish + FREE_INSTANCE vs PROVISIONED instances. + + Values: + INSTANCE_TYPE_UNSPECIFIED (0): + Not specified. + PROVISIONED (1): + Provisioned instances have dedicated + resources, standard usage limits and support. + FREE_INSTANCE (2): + Free instances provide no guarantee for dedicated resources, + [node_count, processing_units] should be 0. They come with + stricter usage limits and limited support. + """ + INSTANCE_TYPE_UNSPECIFIED = 0 + PROVISIONED = 1 + FREE_INSTANCE = 2 + + class Edition(proto.Enum): + r"""The edition selected for this instance. Different editions + provide different capabilities at different price points. + + Values: + EDITION_UNSPECIFIED (0): + Edition not specified. + STANDARD (1): + Standard edition. + ENTERPRISE (2): + Enterprise edition. + ENTERPRISE_PLUS (3): + Enterprise Plus edition. + """ + EDITION_UNSPECIFIED = 0 + STANDARD = 1 + ENTERPRISE = 2 + ENTERPRISE_PLUS = 3 + + class DefaultBackupScheduleType(proto.Enum): + r"""Indicates the `default backup + schedule `__ + behavior for new databases within the instance. + + Values: + DEFAULT_BACKUP_SCHEDULE_TYPE_UNSPECIFIED (0): + Not specified. + NONE (1): + A default backup schedule isn't created + automatically when a new database is created in + the instance. + AUTOMATIC (2): + A default backup schedule is created + automatically when a new database is created in + the instance. The default backup schedule + creates a full backup every 24 hours. These full + backups are retained for 7 days. You can edit or + delete the default backup schedule once it's + created. 
+ """ + DEFAULT_BACKUP_SCHEDULE_TYPE_UNSPECIFIED = 0 + NONE = 1 + AUTOMATIC = 2 + + name: str = proto.Field( + proto.STRING, + number=1, + ) + config: str = proto.Field( + proto.STRING, + number=2, + ) + display_name: str = proto.Field( + proto.STRING, + number=3, + ) + node_count: int = proto.Field( + proto.INT32, + number=5, + ) + processing_units: int = proto.Field( + proto.INT32, + number=9, + ) + replica_compute_capacity: MutableSequence[ + "ReplicaComputeCapacity" + ] = proto.RepeatedField( + proto.MESSAGE, + number=19, + message="ReplicaComputeCapacity", + ) + autoscaling_config: "AutoscalingConfig" = proto.Field( + proto.MESSAGE, + number=17, + message="AutoscalingConfig", + ) + state: State = proto.Field( + proto.ENUM, + number=6, + enum=State, + ) + labels: MutableMapping[str, str] = proto.MapField( + proto.STRING, + proto.STRING, + number=7, + ) + instance_type: InstanceType = proto.Field( + proto.ENUM, + number=10, + enum=InstanceType, + ) + endpoint_uris: MutableSequence[str] = proto.RepeatedField( + proto.STRING, + number=8, + ) + create_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=11, + message=timestamp_pb2.Timestamp, + ) + update_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=12, + message=timestamp_pb2.Timestamp, + ) + free_instance_metadata: "FreeInstanceMetadata" = proto.Field( + proto.MESSAGE, + number=13, + message="FreeInstanceMetadata", + ) + edition: Edition = proto.Field( + proto.ENUM, + number=20, + enum=Edition, + ) + default_backup_schedule_type: DefaultBackupScheduleType = proto.Field( + proto.ENUM, + number=23, + enum=DefaultBackupScheduleType, + ) + + +class ListInstanceConfigsRequest(proto.Message): + r"""The request for + [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs]. + + Attributes: + parent (str): + Required. The name of the project for which a list of + supported instance configurations is requested. Values are + of the form ``projects/``. + page_size (int): + Number of instance configurations to be + returned in the response. If 0 or less, defaults + to the server's maximum allowed page size. + page_token (str): + If non-empty, ``page_token`` should contain a + [next_page_token][google.spanner.admin.instance.v1.ListInstanceConfigsResponse.next_page_token] + from a previous + [ListInstanceConfigsResponse][google.spanner.admin.instance.v1.ListInstanceConfigsResponse]. + """ + + parent: str = proto.Field( + proto.STRING, + number=1, + ) + page_size: int = proto.Field( + proto.INT32, + number=2, + ) + page_token: str = proto.Field( + proto.STRING, + number=3, + ) + + +class ListInstanceConfigsResponse(proto.Message): + r"""The response for + [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs]. + + Attributes: + instance_configs (MutableSequence[google.cloud.spanner_admin_instance_v1.types.InstanceConfig]): + The list of requested instance + configurations. + next_page_token (str): + ``next_page_token`` can be sent in a subsequent + [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs] + call to fetch more of the matching instance configurations. 
+ """ + + @property + def raw_page(self): + return self + + instance_configs: MutableSequence["InstanceConfig"] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message="InstanceConfig", + ) + next_page_token: str = proto.Field( + proto.STRING, + number=2, + ) + + +class GetInstanceConfigRequest(proto.Message): + r"""The request for + [GetInstanceConfigRequest][google.spanner.admin.instance.v1.InstanceAdmin.GetInstanceConfig]. + + Attributes: + name (str): + Required. The name of the requested instance configuration. + Values are of the form + ``projects//instanceConfigs/``. + """ + + name: str = proto.Field( + proto.STRING, + number=1, + ) + + +class CreateInstanceConfigRequest(proto.Message): + r"""The request for + [CreateInstanceConfig][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstanceConfig]. + + Attributes: + parent (str): + Required. The name of the project in which to create the + instance configuration. Values are of the form + ``projects/``. + instance_config_id (str): + Required. The ID of the instance configuration to create. + Valid identifiers are of the form + ``custom-[-a-z0-9]*[a-z0-9]`` and must be between 2 and 64 + characters in length. The ``custom-`` prefix is required to + avoid name conflicts with Google-managed configurations. + instance_config (google.cloud.spanner_admin_instance_v1.types.InstanceConfig): + Required. The ``InstanceConfig`` proto of the configuration + to create. ``instance_config.name`` must be + ``/instanceConfigs/``. + ``instance_config.base_config`` must be a Google-managed + configuration name, e.g. /instanceConfigs/us-east1, + /instanceConfigs/nam3. + validate_only (bool): + An option to validate, but not actually + execute, a request, and provide the same + response. + """ + + parent: str = proto.Field( + proto.STRING, + number=1, + ) + instance_config_id: str = proto.Field( + proto.STRING, + number=2, + ) + instance_config: "InstanceConfig" = proto.Field( + proto.MESSAGE, + number=3, + message="InstanceConfig", + ) + validate_only: bool = proto.Field( + proto.BOOL, + number=4, + ) + + +class UpdateInstanceConfigRequest(proto.Message): + r"""The request for + [UpdateInstanceConfig][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstanceConfig]. + + Attributes: + instance_config (google.cloud.spanner_admin_instance_v1.types.InstanceConfig): + Required. The user instance configuration to update, which + must always include the instance configuration name. + Otherwise, only fields mentioned in + [update_mask][google.spanner.admin.instance.v1.UpdateInstanceConfigRequest.update_mask] + need be included. To prevent conflicts of concurrent + updates, + [etag][google.spanner.admin.instance.v1.InstanceConfig.reconciling] + can be used. + update_mask (google.protobuf.field_mask_pb2.FieldMask): + Required. A mask specifying which fields in + [InstanceConfig][google.spanner.admin.instance.v1.InstanceConfig] + should be updated. The field mask must always be specified; + this prevents any future fields in + [InstanceConfig][google.spanner.admin.instance.v1.InstanceConfig] + from being erased accidentally by clients that do not know + about them. Only display_name and labels can be updated. + validate_only (bool): + An option to validate, but not actually + execute, a request, and provide the same + response. 
+ """ + + instance_config: "InstanceConfig" = proto.Field( + proto.MESSAGE, + number=1, + message="InstanceConfig", + ) + update_mask: field_mask_pb2.FieldMask = proto.Field( + proto.MESSAGE, + number=2, + message=field_mask_pb2.FieldMask, + ) + validate_only: bool = proto.Field( + proto.BOOL, + number=3, + ) + + +class DeleteInstanceConfigRequest(proto.Message): + r"""The request for + [DeleteInstanceConfig][google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstanceConfig]. + + Attributes: + name (str): + Required. The name of the instance configuration to be + deleted. Values are of the form + ``projects//instanceConfigs/`` + etag (str): + Used for optimistic concurrency control as a + way to help prevent simultaneous deletes of an + instance configuration from overwriting each + other. If not empty, the API + only deletes the instance configuration when the + etag provided matches the current status of the + requested instance configuration. Otherwise, + deletes the instance configuration without + checking the current status of the requested + instance configuration. + validate_only (bool): + An option to validate, but not actually + execute, a request, and provide the same + response. + """ + + name: str = proto.Field( + proto.STRING, + number=1, + ) + etag: str = proto.Field( + proto.STRING, + number=2, + ) + validate_only: bool = proto.Field( + proto.BOOL, + number=3, + ) + + +class ListInstanceConfigOperationsRequest(proto.Message): + r"""The request for + [ListInstanceConfigOperations][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigOperations]. + + Attributes: + parent (str): + Required. The project of the instance configuration + operations. Values are of the form ``projects/``. + filter (str): + An expression that filters the list of returned operations. + + A filter expression consists of a field name, a comparison + operator, and a value for filtering. The value must be a + string, a number, or a boolean. The comparison operator must + be one of: ``<``, ``>``, ``<=``, ``>=``, ``!=``, ``=``, or + ``:``. Colon ``:`` is the contains operator. Filter rules + are not case sensitive. + + The following fields in the Operation are eligible for + filtering: + + - ``name`` - The name of the long-running operation + - ``done`` - False if the operation is in progress, else + true. + - ``metadata.@type`` - the type of metadata. For example, + the type string for + [CreateInstanceConfigMetadata][google.spanner.admin.instance.v1.CreateInstanceConfigMetadata] + is + ``type.googleapis.com/google.spanner.admin.instance.v1.CreateInstanceConfigMetadata``. + - ``metadata.`` - any field in metadata.value. + ``metadata.@type`` must be specified first, if filtering + on metadata fields. + - ``error`` - Error associated with the long-running + operation. + - ``response.@type`` - the type of response. + - ``response.`` - any field in response.value. + + You can combine multiple expressions by enclosing each + expression in parentheses. By default, expressions are + combined with AND logic. However, you can specify AND, OR, + and NOT logic explicitly. + + Here are a few examples: + + - ``done:true`` - The operation is complete. 
+ - ``(metadata.@type=`` + ``type.googleapis.com/google.spanner.admin.instance.v1.CreateInstanceConfigMetadata) AND`` + ``(metadata.instance_config.name:custom-config) AND`` + ``(metadata.progress.start_time < \"2021-03-28T14:50:00Z\") AND`` + ``(error:*)`` - Return operations where: + + - The operation's metadata type is + [CreateInstanceConfigMetadata][google.spanner.admin.instance.v1.CreateInstanceConfigMetadata]. + - The instance configuration name contains + "custom-config". + - The operation started before 2021-03-28T14:50:00Z. + - The operation resulted in an error. + page_size (int): + Number of operations to be returned in the + response. If 0 or less, defaults to the server's + maximum allowed page size. + page_token (str): + If non-empty, ``page_token`` should contain a + [next_page_token][google.spanner.admin.instance.v1.ListInstanceConfigOperationsResponse.next_page_token] + from a previous + [ListInstanceConfigOperationsResponse][google.spanner.admin.instance.v1.ListInstanceConfigOperationsResponse] + to the same ``parent`` and with the same ``filter``. + """ + + parent: str = proto.Field( + proto.STRING, + number=1, + ) + filter: str = proto.Field( + proto.STRING, + number=2, + ) + page_size: int = proto.Field( + proto.INT32, + number=3, + ) + page_token: str = proto.Field( + proto.STRING, + number=4, + ) + + +class ListInstanceConfigOperationsResponse(proto.Message): + r"""The response for + [ListInstanceConfigOperations][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigOperations]. + + Attributes: + operations (MutableSequence[google.longrunning.operations_pb2.Operation]): + The list of matching instance configuration long-running + operations. Each operation's name will be prefixed by the + name of the instance configuration. The operation's metadata + field type ``metadata.type_url`` describes the type of the + metadata. + next_page_token (str): + ``next_page_token`` can be sent in a subsequent + [ListInstanceConfigOperations][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigOperations] + call to fetch more of the matching metadata. + """ + + @property + def raw_page(self): + return self + + operations: MutableSequence[operations_pb2.Operation] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message=operations_pb2.Operation, + ) + next_page_token: str = proto.Field( + proto.STRING, + number=2, + ) + + +class GetInstanceRequest(proto.Message): + r"""The request for + [GetInstance][google.spanner.admin.instance.v1.InstanceAdmin.GetInstance]. + + Attributes: + name (str): + Required. The name of the requested instance. Values are of + the form ``projects//instances/``. + field_mask (google.protobuf.field_mask_pb2.FieldMask): + If field_mask is present, specifies the subset of + [Instance][google.spanner.admin.instance.v1.Instance] fields + that should be returned. If absent, all + [Instance][google.spanner.admin.instance.v1.Instance] fields + are returned. + """ + + name: str = proto.Field( + proto.STRING, + number=1, + ) + field_mask: field_mask_pb2.FieldMask = proto.Field( + proto.MESSAGE, + number=2, + message=field_mask_pb2.FieldMask, + ) + + +class CreateInstanceRequest(proto.Message): + r"""The request for + [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance]. + + Attributes: + parent (str): + Required. The name of the project in which to create the + instance. Values are of the form ``projects/``. + instance_id (str): + Required. The ID of the instance to create. 
Valid + identifiers are of the form ``[a-z][-a-z0-9]*[a-z0-9]`` and + must be between 2 and 64 characters in length. + instance (google.cloud.spanner_admin_instance_v1.types.Instance): + Required. The instance to create. The name may be omitted, + but if specified must be + ``/instances/``. + """ + + parent: str = proto.Field( + proto.STRING, + number=1, + ) + instance_id: str = proto.Field( + proto.STRING, + number=2, + ) + instance: "Instance" = proto.Field( + proto.MESSAGE, + number=3, + message="Instance", + ) + + +class ListInstancesRequest(proto.Message): + r"""The request for + [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances]. + + Attributes: + parent (str): + Required. The name of the project for which a list of + instances is requested. Values are of the form + ``projects/``. + page_size (int): + Number of instances to be returned in the + response. If 0 or less, defaults to the server's + maximum allowed page size. + page_token (str): + If non-empty, ``page_token`` should contain a + [next_page_token][google.spanner.admin.instance.v1.ListInstancesResponse.next_page_token] + from a previous + [ListInstancesResponse][google.spanner.admin.instance.v1.ListInstancesResponse]. + filter (str): + An expression for filtering the results of the request. + Filter rules are case insensitive. The fields eligible for + filtering are: + + - ``name`` + - ``display_name`` + - ``labels.key`` where key is the name of a label + + Some examples of using filters are: + + - ``name:*`` --> The instance has a name. + - ``name:Howl`` --> The instance's name contains the string + "howl". + - ``name:HOWL`` --> Equivalent to above. + - ``NAME:howl`` --> Equivalent to above. + - ``labels.env:*`` --> The instance has the label "env". + - ``labels.env:dev`` --> The instance has the label "env" + and the value of the label contains the string "dev". + - ``name:howl labels.env:dev`` --> The instance's name + contains "howl" and it has the label "env" with its value + containing "dev". + instance_deadline (google.protobuf.timestamp_pb2.Timestamp): + Deadline used while retrieving metadata for instances. + Instances whose metadata cannot be retrieved within this + deadline will be added to + [unreachable][google.spanner.admin.instance.v1.ListInstancesResponse.unreachable] + in + [ListInstancesResponse][google.spanner.admin.instance.v1.ListInstancesResponse]. + """ + + parent: str = proto.Field( + proto.STRING, + number=1, + ) + page_size: int = proto.Field( + proto.INT32, + number=2, + ) + page_token: str = proto.Field( + proto.STRING, + number=3, + ) + filter: str = proto.Field( + proto.STRING, + number=4, + ) + instance_deadline: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=5, + message=timestamp_pb2.Timestamp, + ) + + +class ListInstancesResponse(proto.Message): + r"""The response for + [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances]. + + Attributes: + instances (MutableSequence[google.cloud.spanner_admin_instance_v1.types.Instance]): + The list of requested instances. + next_page_token (str): + ``next_page_token`` can be sent in a subsequent + [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances] + call to fetch more of the matching instances. + unreachable (MutableSequence[str]): + The list of unreachable instances. It includes the names of + instances whose metadata could not be retrieved within + [instance_deadline][google.spanner.admin.instance.v1.ListInstancesRequest.instance_deadline]. 
+ """ + + @property + def raw_page(self): + return self + + instances: MutableSequence["Instance"] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message="Instance", + ) + next_page_token: str = proto.Field( + proto.STRING, + number=2, + ) + unreachable: MutableSequence[str] = proto.RepeatedField( + proto.STRING, + number=3, + ) + + +class UpdateInstanceRequest(proto.Message): + r"""The request for + [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance]. + + Attributes: + instance (google.cloud.spanner_admin_instance_v1.types.Instance): + Required. The instance to update, which must always include + the instance name. Otherwise, only fields mentioned in + [field_mask][google.spanner.admin.instance.v1.UpdateInstanceRequest.field_mask] + need be included. + field_mask (google.protobuf.field_mask_pb2.FieldMask): + Required. A mask specifying which fields in + [Instance][google.spanner.admin.instance.v1.Instance] should + be updated. The field mask must always be specified; this + prevents any future fields in + [Instance][google.spanner.admin.instance.v1.Instance] from + being erased accidentally by clients that do not know about + them. + """ + + instance: "Instance" = proto.Field( + proto.MESSAGE, + number=1, + message="Instance", + ) + field_mask: field_mask_pb2.FieldMask = proto.Field( + proto.MESSAGE, + number=2, + message=field_mask_pb2.FieldMask, + ) + + +class DeleteInstanceRequest(proto.Message): + r"""The request for + [DeleteInstance][google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstance]. + + Attributes: + name (str): + Required. The name of the instance to be deleted. Values are + of the form ``projects//instances/`` + """ + + name: str = proto.Field( + proto.STRING, + number=1, + ) + + +class CreateInstanceMetadata(proto.Message): + r"""Metadata type for the operation returned by + [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance]. + + Attributes: + instance (google.cloud.spanner_admin_instance_v1.types.Instance): + The instance being created. + start_time (google.protobuf.timestamp_pb2.Timestamp): + The time at which the + [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance] + request was received. + cancel_time (google.protobuf.timestamp_pb2.Timestamp): + The time at which this operation was + cancelled. If set, this operation is in the + process of undoing itself (which is guaranteed + to succeed) and cannot be cancelled again. + end_time (google.protobuf.timestamp_pb2.Timestamp): + The time at which this operation failed or + was completed successfully. + expected_fulfillment_period (google.cloud.spanner_admin_instance_v1.types.FulfillmentPeriod): + The expected fulfillment period of this + create operation. + """ + + instance: "Instance" = proto.Field( + proto.MESSAGE, + number=1, + message="Instance", + ) + start_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=2, + message=timestamp_pb2.Timestamp, + ) + cancel_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=3, + message=timestamp_pb2.Timestamp, + ) + end_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=4, + message=timestamp_pb2.Timestamp, + ) + expected_fulfillment_period: common.FulfillmentPeriod = proto.Field( + proto.ENUM, + number=5, + enum=common.FulfillmentPeriod, + ) + + +class UpdateInstanceMetadata(proto.Message): + r"""Metadata type for the operation returned by + [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance]. 
+ + Attributes: + instance (google.cloud.spanner_admin_instance_v1.types.Instance): + The desired end state of the update. + start_time (google.protobuf.timestamp_pb2.Timestamp): + The time at which + [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance] + request was received. + cancel_time (google.protobuf.timestamp_pb2.Timestamp): + The time at which this operation was + cancelled. If set, this operation is in the + process of undoing itself (which is guaranteed + to succeed) and cannot be cancelled again. + end_time (google.protobuf.timestamp_pb2.Timestamp): + The time at which this operation failed or + was completed successfully. + expected_fulfillment_period (google.cloud.spanner_admin_instance_v1.types.FulfillmentPeriod): + The expected fulfillment period of this + update operation. + """ + + instance: "Instance" = proto.Field( + proto.MESSAGE, + number=1, + message="Instance", + ) + start_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=2, + message=timestamp_pb2.Timestamp, + ) + cancel_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=3, + message=timestamp_pb2.Timestamp, + ) + end_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=4, + message=timestamp_pb2.Timestamp, + ) + expected_fulfillment_period: common.FulfillmentPeriod = proto.Field( + proto.ENUM, + number=5, + enum=common.FulfillmentPeriod, + ) + + +class FreeInstanceMetadata(proto.Message): + r"""Free instance specific metadata that is kept even after an + instance has been upgraded for tracking purposes. + + Attributes: + expire_time (google.protobuf.timestamp_pb2.Timestamp): + Output only. Timestamp after which the + instance will either be upgraded or scheduled + for deletion after a grace period. + ExpireBehavior is used to choose between + upgrading or scheduling the free instance for + deletion. This timestamp is set during the + creation of a free instance. + upgrade_time (google.protobuf.timestamp_pb2.Timestamp): + Output only. If present, the timestamp at + which the free instance was upgraded to a + provisioned instance. + expire_behavior (google.cloud.spanner_admin_instance_v1.types.FreeInstanceMetadata.ExpireBehavior): + Specifies the expiration behavior of a free instance. The + default of ExpireBehavior is ``REMOVE_AFTER_GRACE_PERIOD``. + This can be modified during or after creation, and before + expiration. + """ + + class ExpireBehavior(proto.Enum): + r"""Allows users to change behavior when a free instance expires. + + Values: + EXPIRE_BEHAVIOR_UNSPECIFIED (0): + Not specified. + FREE_TO_PROVISIONED (1): + When the free instance expires, upgrade the + instance to a provisioned instance. + REMOVE_AFTER_GRACE_PERIOD (2): + When the free instance expires, disable the + instance, and delete it after the grace period + passes if it has not been upgraded. + """ + EXPIRE_BEHAVIOR_UNSPECIFIED = 0 + FREE_TO_PROVISIONED = 1 + REMOVE_AFTER_GRACE_PERIOD = 2 + + expire_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=1, + message=timestamp_pb2.Timestamp, + ) + upgrade_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=2, + message=timestamp_pb2.Timestamp, + ) + expire_behavior: ExpireBehavior = proto.Field( + proto.ENUM, + number=3, + enum=ExpireBehavior, + ) + + +class CreateInstanceConfigMetadata(proto.Message): + r"""Metadata type for the operation returned by + [CreateInstanceConfig][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstanceConfig]. 
+ + Attributes: + instance_config (google.cloud.spanner_admin_instance_v1.types.InstanceConfig): + The target instance configuration end state. + progress (google.cloud.spanner_admin_instance_v1.types.OperationProgress): + The progress of the + [CreateInstanceConfig][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstanceConfig] + operation. + cancel_time (google.protobuf.timestamp_pb2.Timestamp): + The time at which this operation was + cancelled. + """ + + instance_config: "InstanceConfig" = proto.Field( + proto.MESSAGE, + number=1, + message="InstanceConfig", + ) + progress: common.OperationProgress = proto.Field( + proto.MESSAGE, + number=2, + message=common.OperationProgress, + ) + cancel_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=3, + message=timestamp_pb2.Timestamp, + ) + + +class UpdateInstanceConfigMetadata(proto.Message): + r"""Metadata type for the operation returned by + [UpdateInstanceConfig][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstanceConfig]. + + Attributes: + instance_config (google.cloud.spanner_admin_instance_v1.types.InstanceConfig): + The desired instance configuration after + updating. + progress (google.cloud.spanner_admin_instance_v1.types.OperationProgress): + The progress of the + [UpdateInstanceConfig][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstanceConfig] + operation. + cancel_time (google.protobuf.timestamp_pb2.Timestamp): + The time at which this operation was + cancelled. + """ + + instance_config: "InstanceConfig" = proto.Field( + proto.MESSAGE, + number=1, + message="InstanceConfig", + ) + progress: common.OperationProgress = proto.Field( + proto.MESSAGE, + number=2, + message=common.OperationProgress, + ) + cancel_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=3, + message=timestamp_pb2.Timestamp, + ) + + +class InstancePartition(proto.Message): + r"""An isolated set of Cloud Spanner resources that databases can + define placements on. + + This message has `oneof`_ fields (mutually exclusive fields). + For each oneof, at most one member field can be set at the same time. + Setting any member of the oneof automatically clears all other + members. + + .. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields + + Attributes: + name (str): + Required. A unique identifier for the instance partition. + Values are of the form + ``projects//instances//instancePartitions/[a-z][-a-z0-9]*[a-z0-9]``. + The final segment of the name must be between 2 and 64 + characters in length. An instance partition's name cannot be + changed after the instance partition is created. + config (str): + Required. The name of the instance partition's + configuration. Values are of the form + ``projects//instanceConfigs/``. See + also + [InstanceConfig][google.spanner.admin.instance.v1.InstanceConfig] + and + [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs]. + display_name (str): + Required. The descriptive name for this + instance partition as it appears in UIs. Must be + unique per project and between 4 and 30 + characters in length. + node_count (int): + The number of nodes allocated to this instance partition. + + Users can set the ``node_count`` field to specify the target + number of nodes allocated to the instance partition. + + This may be zero in API responses for instance partitions + that are not yet in state ``READY``. + + This field is a member of `oneof`_ ``compute_capacity``. 
+    processing_units (int):
+        The number of processing units allocated to this instance
+        partition.
+
+        Users can set the ``processing_units`` field to specify the
+        target number of processing units allocated to the instance
+        partition.
+
+        This might be zero in API responses for instance partitions
+        that are not yet in the ``READY`` state.
+
+        This field is a member of `oneof`_ ``compute_capacity``.
+    state (google.cloud.spanner_admin_instance_v1.types.InstancePartition.State):
+        Output only. The current instance partition
+        state.
+    create_time (google.protobuf.timestamp_pb2.Timestamp):
+        Output only. The time at which the instance
+        partition was created.
+    update_time (google.protobuf.timestamp_pb2.Timestamp):
+        Output only. The time at which the instance
+        partition was most recently updated.
+    referencing_databases (MutableSequence[str]):
+        Output only. The names of the databases that
+        reference this instance partition. Referencing
+        databases should share the parent instance. The
+        existence of any referencing database prevents
+        the instance partition from being deleted.
+    referencing_backups (MutableSequence[str]):
+        Output only. Deprecated: This field is not
+        populated. The names of the backups that
+        reference this instance partition.
+        Referencing backups should share the parent
+        instance. The existence of any referencing
+        backup prevents the instance partition from
+        being deleted.
+    etag (str):
+        Used for optimistic concurrency control as a
+        way to help prevent simultaneous updates of an
+        instance partition from overwriting each other.
+        It is strongly suggested that systems make use
+        of the etag in the read-modify-write cycle to
+        perform instance partition updates in order to
+        avoid race conditions: An etag is returned in
+        the response which contains instance partitions,
+        and systems are expected to put that etag in the
+        request to update instance partitions to ensure
+        that their change will be applied to the same
+        version of the instance partition. If no etag is
+        provided in the call to update the instance
+        partition, then the existing instance partition
+        is overwritten blindly.
+    """
+
+    class State(proto.Enum):
+        r"""Indicates the current state of the instance partition.
+
+        Values:
+            STATE_UNSPECIFIED (0):
+                Not specified.
+            CREATING (1):
+                The instance partition is still being
+                created. Resources may not be available yet, and
+                operations such as creating placements using
+                this instance partition may not work.
+            READY (2):
+                The instance partition is fully created and
+                ready to do work such as creating placements and
+                being used in databases.
+ """ + STATE_UNSPECIFIED = 0 + CREATING = 1 + READY = 2 + + name: str = proto.Field( + proto.STRING, + number=1, + ) + config: str = proto.Field( + proto.STRING, + number=2, + ) + display_name: str = proto.Field( + proto.STRING, + number=3, + ) + node_count: int = proto.Field( + proto.INT32, + number=5, + oneof="compute_capacity", + ) + processing_units: int = proto.Field( + proto.INT32, + number=6, + oneof="compute_capacity", + ) + state: State = proto.Field( + proto.ENUM, + number=7, + enum=State, + ) + create_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=8, + message=timestamp_pb2.Timestamp, + ) + update_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=9, + message=timestamp_pb2.Timestamp, + ) + referencing_databases: MutableSequence[str] = proto.RepeatedField( + proto.STRING, + number=10, + ) + referencing_backups: MutableSequence[str] = proto.RepeatedField( + proto.STRING, + number=11, + ) + etag: str = proto.Field( + proto.STRING, + number=12, + ) + + +class CreateInstancePartitionMetadata(proto.Message): + r"""Metadata type for the operation returned by + [CreateInstancePartition][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstancePartition]. + + Attributes: + instance_partition (google.cloud.spanner_admin_instance_v1.types.InstancePartition): + The instance partition being created. + start_time (google.protobuf.timestamp_pb2.Timestamp): + The time at which the + [CreateInstancePartition][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstancePartition] + request was received. + cancel_time (google.protobuf.timestamp_pb2.Timestamp): + The time at which this operation was + cancelled. If set, this operation is in the + process of undoing itself (which is guaranteed + to succeed) and cannot be cancelled again. + end_time (google.protobuf.timestamp_pb2.Timestamp): + The time at which this operation failed or + was completed successfully. + """ + + instance_partition: "InstancePartition" = proto.Field( + proto.MESSAGE, + number=1, + message="InstancePartition", + ) + start_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=2, + message=timestamp_pb2.Timestamp, + ) + cancel_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=3, + message=timestamp_pb2.Timestamp, + ) + end_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=4, + message=timestamp_pb2.Timestamp, + ) + + +class CreateInstancePartitionRequest(proto.Message): + r"""The request for + [CreateInstancePartition][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstancePartition]. + + Attributes: + parent (str): + Required. The name of the instance in which to create the + instance partition. Values are of the form + ``projects//instances/``. + instance_partition_id (str): + Required. The ID of the instance partition to create. Valid + identifiers are of the form ``[a-z][-a-z0-9]*[a-z0-9]`` and + must be between 2 and 64 characters in length. + instance_partition (google.cloud.spanner_admin_instance_v1.types.InstancePartition): + Required. The instance partition to create. The + instance_partition.name may be omitted, but if specified + must be + ``/instancePartitions/``. 
+ """ + + parent: str = proto.Field( + proto.STRING, + number=1, + ) + instance_partition_id: str = proto.Field( + proto.STRING, + number=2, + ) + instance_partition: "InstancePartition" = proto.Field( + proto.MESSAGE, + number=3, + message="InstancePartition", + ) + + +class DeleteInstancePartitionRequest(proto.Message): + r"""The request for + [DeleteInstancePartition][google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstancePartition]. + + Attributes: + name (str): + Required. The name of the instance partition to be deleted. + Values are of the form + ``projects/{project}/instances/{instance}/instancePartitions/{instance_partition}`` + etag (str): + Optional. If not empty, the API only deletes + the instance partition when the etag provided + matches the current status of the requested + instance partition. Otherwise, deletes the + instance partition without checking the current + status of the requested instance partition. + """ + + name: str = proto.Field( + proto.STRING, + number=1, + ) + etag: str = proto.Field( + proto.STRING, + number=2, + ) + + +class GetInstancePartitionRequest(proto.Message): + r"""The request for + [GetInstancePartition][google.spanner.admin.instance.v1.InstanceAdmin.GetInstancePartition]. + + Attributes: + name (str): + Required. The name of the requested instance partition. + Values are of the form + ``projects/{project}/instances/{instance}/instancePartitions/{instance_partition}``. + """ + + name: str = proto.Field( + proto.STRING, + number=1, + ) + + +class UpdateInstancePartitionRequest(proto.Message): + r"""The request for + [UpdateInstancePartition][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstancePartition]. + + Attributes: + instance_partition (google.cloud.spanner_admin_instance_v1.types.InstancePartition): + Required. The instance partition to update, which must + always include the instance partition name. Otherwise, only + fields mentioned in + [field_mask][google.spanner.admin.instance.v1.UpdateInstancePartitionRequest.field_mask] + need be included. + field_mask (google.protobuf.field_mask_pb2.FieldMask): + Required. A mask specifying which fields in + [InstancePartition][google.spanner.admin.instance.v1.InstancePartition] + should be updated. The field mask must always be specified; + this prevents any future fields in + [InstancePartition][google.spanner.admin.instance.v1.InstancePartition] + from being erased accidentally by clients that do not know + about them. + """ + + instance_partition: "InstancePartition" = proto.Field( + proto.MESSAGE, + number=1, + message="InstancePartition", + ) + field_mask: field_mask_pb2.FieldMask = proto.Field( + proto.MESSAGE, + number=2, + message=field_mask_pb2.FieldMask, + ) + + +class UpdateInstancePartitionMetadata(proto.Message): + r"""Metadata type for the operation returned by + [UpdateInstancePartition][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstancePartition]. + + Attributes: + instance_partition (google.cloud.spanner_admin_instance_v1.types.InstancePartition): + The desired end state of the update. + start_time (google.protobuf.timestamp_pb2.Timestamp): + The time at which + [UpdateInstancePartition][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstancePartition] + request was received. + cancel_time (google.protobuf.timestamp_pb2.Timestamp): + The time at which this operation was + cancelled. If set, this operation is in the + process of undoing itself (which is guaranteed + to succeed) and cannot be cancelled again. 
+ end_time (google.protobuf.timestamp_pb2.Timestamp): + The time at which this operation failed or + was completed successfully. + """ + + instance_partition: "InstancePartition" = proto.Field( + proto.MESSAGE, + number=1, + message="InstancePartition", + ) + start_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=2, + message=timestamp_pb2.Timestamp, + ) + cancel_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=3, + message=timestamp_pb2.Timestamp, + ) + end_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=4, + message=timestamp_pb2.Timestamp, + ) + + +class ListInstancePartitionsRequest(proto.Message): + r"""The request for + [ListInstancePartitions][google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitions]. + + Attributes: + parent (str): + Required. The instance whose instance partitions should be + listed. Values are of the form + ``projects//instances/``. Use + ``{instance} = '-'`` to list instance partitions for all + Instances in a project, e.g., + ``projects/myproject/instances/-``. + page_size (int): + Number of instance partitions to be returned + in the response. If 0 or less, defaults to the + server's maximum allowed page size. + page_token (str): + If non-empty, ``page_token`` should contain a + [next_page_token][google.spanner.admin.instance.v1.ListInstancePartitionsResponse.next_page_token] + from a previous + [ListInstancePartitionsResponse][google.spanner.admin.instance.v1.ListInstancePartitionsResponse]. + instance_partition_deadline (google.protobuf.timestamp_pb2.Timestamp): + Optional. Deadline used while retrieving metadata for + instance partitions. Instance partitions whose metadata + cannot be retrieved within this deadline will be added to + [unreachable][google.spanner.admin.instance.v1.ListInstancePartitionsResponse.unreachable] + in + [ListInstancePartitionsResponse][google.spanner.admin.instance.v1.ListInstancePartitionsResponse]. + """ + + parent: str = proto.Field( + proto.STRING, + number=1, + ) + page_size: int = proto.Field( + proto.INT32, + number=2, + ) + page_token: str = proto.Field( + proto.STRING, + number=3, + ) + instance_partition_deadline: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=4, + message=timestamp_pb2.Timestamp, + ) + + +class ListInstancePartitionsResponse(proto.Message): + r"""The response for + [ListInstancePartitions][google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitions]. + + Attributes: + instance_partitions (MutableSequence[google.cloud.spanner_admin_instance_v1.types.InstancePartition]): + The list of requested instancePartitions. + next_page_token (str): + ``next_page_token`` can be sent in a subsequent + [ListInstancePartitions][google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitions] + call to fetch more of the matching instance partitions. + unreachable (MutableSequence[str]): + The list of unreachable instances or instance partitions. It + includes the names of instances or instance partitions whose + metadata could not be retrieved within + [instance_partition_deadline][google.spanner.admin.instance.v1.ListInstancePartitionsRequest.instance_partition_deadline]. 
+ """ + + @property + def raw_page(self): + return self + + instance_partitions: MutableSequence["InstancePartition"] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message="InstancePartition", + ) + next_page_token: str = proto.Field( + proto.STRING, + number=2, + ) + unreachable: MutableSequence[str] = proto.RepeatedField( + proto.STRING, + number=3, + ) + + +class ListInstancePartitionOperationsRequest(proto.Message): + r"""The request for + [ListInstancePartitionOperations][google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitionOperations]. + + Attributes: + parent (str): + Required. The parent instance of the instance partition + operations. Values are of the form + ``projects//instances/``. + filter (str): + Optional. An expression that filters the list of returned + operations. + + A filter expression consists of a field name, a comparison + operator, and a value for filtering. The value must be a + string, a number, or a boolean. The comparison operator must + be one of: ``<``, ``>``, ``<=``, ``>=``, ``!=``, ``=``, or + ``:``. Colon ``:`` is the contains operator. Filter rules + are not case sensitive. + + The following fields in the Operation are eligible for + filtering: + + - ``name`` - The name of the long-running operation + - ``done`` - False if the operation is in progress, else + true. + - ``metadata.@type`` - the type of metadata. For example, + the type string for + [CreateInstancePartitionMetadata][google.spanner.admin.instance.v1.CreateInstancePartitionMetadata] + is + ``type.googleapis.com/google.spanner.admin.instance.v1.CreateInstancePartitionMetadata``. + - ``metadata.`` - any field in metadata.value. + ``metadata.@type`` must be specified first, if filtering + on metadata fields. + - ``error`` - Error associated with the long-running + operation. + - ``response.@type`` - the type of response. + - ``response.`` - any field in response.value. + + You can combine multiple expressions by enclosing each + expression in parentheses. By default, expressions are + combined with AND logic. However, you can specify AND, OR, + and NOT logic explicitly. + + Here are a few examples: + + - ``done:true`` - The operation is complete. + - ``(metadata.@type=`` + ``type.googleapis.com/google.spanner.admin.instance.v1.CreateInstancePartitionMetadata) AND`` + ``(metadata.instance_partition.name:custom-instance-partition) AND`` + ``(metadata.start_time < \"2021-03-28T14:50:00Z\") AND`` + ``(error:*)`` - Return operations where: + + - The operation's metadata type is + [CreateInstancePartitionMetadata][google.spanner.admin.instance.v1.CreateInstancePartitionMetadata]. + - The instance partition name contains + "custom-instance-partition". + - The operation started before 2021-03-28T14:50:00Z. + - The operation resulted in an error. + page_size (int): + Optional. Number of operations to be returned + in the response. If 0 or less, defaults to the + server's maximum allowed page size. + page_token (str): + Optional. If non-empty, ``page_token`` should contain a + [next_page_token][google.spanner.admin.instance.v1.ListInstancePartitionOperationsResponse.next_page_token] + from a previous + [ListInstancePartitionOperationsResponse][google.spanner.admin.instance.v1.ListInstancePartitionOperationsResponse] + to the same ``parent`` and with the same ``filter``. + instance_partition_deadline (google.protobuf.timestamp_pb2.Timestamp): + Optional. Deadline used while retrieving metadata for + instance partition operations. 
Instance partitions whose + operation metadata cannot be retrieved within this deadline + will be added to + [unreachable_instance_partitions][google.spanner.admin.instance.v1.ListInstancePartitionOperationsResponse.unreachable_instance_partitions] + in + [ListInstancePartitionOperationsResponse][google.spanner.admin.instance.v1.ListInstancePartitionOperationsResponse]. + """ + + parent: str = proto.Field( + proto.STRING, + number=1, + ) + filter: str = proto.Field( + proto.STRING, + number=2, + ) + page_size: int = proto.Field( + proto.INT32, + number=3, + ) + page_token: str = proto.Field( + proto.STRING, + number=4, + ) + instance_partition_deadline: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=5, + message=timestamp_pb2.Timestamp, + ) + + +class ListInstancePartitionOperationsResponse(proto.Message): + r"""The response for + [ListInstancePartitionOperations][google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitionOperations]. + + Attributes: + operations (MutableSequence[google.longrunning.operations_pb2.Operation]): + The list of matching instance partition long-running + operations. Each operation's name will be prefixed by the + instance partition's name. The operation's metadata field + type ``metadata.type_url`` describes the type of the + metadata. + next_page_token (str): + ``next_page_token`` can be sent in a subsequent + [ListInstancePartitionOperations][google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitionOperations] + call to fetch more of the matching metadata. + unreachable_instance_partitions (MutableSequence[str]): + The list of unreachable instance partitions. It includes the + names of instance partitions whose operation metadata could + not be retrieved within + [instance_partition_deadline][google.spanner.admin.instance.v1.ListInstancePartitionOperationsRequest.instance_partition_deadline]. + """ + + @property + def raw_page(self): + return self + + operations: MutableSequence[operations_pb2.Operation] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message=operations_pb2.Operation, + ) + next_page_token: str = proto.Field( + proto.STRING, + number=2, + ) + unreachable_instance_partitions: MutableSequence[str] = proto.RepeatedField( + proto.STRING, + number=3, + ) + + +class MoveInstanceRequest(proto.Message): + r"""The request for + [MoveInstance][google.spanner.admin.instance.v1.InstanceAdmin.MoveInstance]. + + Attributes: + name (str): + Required. The instance to move. Values are of the form + ``projects//instances/``. + target_config (str): + Required. The target instance configuration where to move + the instance. Values are of the form + ``projects//instanceConfigs/``. + """ + + name: str = proto.Field( + proto.STRING, + number=1, + ) + target_config: str = proto.Field( + proto.STRING, + number=2, + ) + + +class MoveInstanceResponse(proto.Message): + r"""The response for + [MoveInstance][google.spanner.admin.instance.v1.InstanceAdmin.MoveInstance]. + + """ + + +class MoveInstanceMetadata(proto.Message): + r"""Metadata type for the operation returned by + [MoveInstance][google.spanner.admin.instance.v1.InstanceAdmin.MoveInstance]. + + Attributes: + target_config (str): + The target instance configuration where to move the + instance. Values are of the form + ``projects//instanceConfigs/``. + progress (google.cloud.spanner_admin_instance_v1.types.OperationProgress): + The progress of the + [MoveInstance][google.spanner.admin.instance.v1.InstanceAdmin.MoveInstance] + operation. 
+            [progress_percent][google.spanner.admin.instance.v1.OperationProgress.progress_percent]
+            is reset when cancellation is requested.
+        cancel_time (google.protobuf.timestamp_pb2.Timestamp):
+            The time at which this operation was
+            cancelled.
+    """
+
+    target_config: str = proto.Field(
+        proto.STRING,
+        number=1,
+    )
+    progress: common.OperationProgress = proto.Field(
+        proto.MESSAGE,
+        number=2,
+        message=common.OperationProgress,
+    )
+    cancel_time: timestamp_pb2.Timestamp = proto.Field(
+        proto.MESSAGE,
+        number=3,
+        message=timestamp_pb2.Timestamp,
+    )
+
+
+__all__ = tuple(sorted(__protobuf__.manifest))
diff --git a/google/cloud/spanner_dbapi/__init__.py b/google/cloud/spanner_dbapi/__init__.py
new file mode 100644
index 0000000000..e94ecdc0ed
--- /dev/null
+++ b/google/cloud/spanner_dbapi/__init__.py
@@ -0,0 +1,93 @@
+# Copyright 2020 Google LLC All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Connection-based DB API for Cloud Spanner."""
+
+from google.cloud.spanner_dbapi.connection import Connection
+from google.cloud.spanner_dbapi.connection import connect
+
+from google.cloud.spanner_dbapi.cursor import Cursor
+
+from google.cloud.spanner_dbapi.exceptions import DatabaseError
+from google.cloud.spanner_dbapi.exceptions import DataError
+from google.cloud.spanner_dbapi.exceptions import Error
+from google.cloud.spanner_dbapi.exceptions import IntegrityError
+from google.cloud.spanner_dbapi.exceptions import InterfaceError
+from google.cloud.spanner_dbapi.exceptions import InternalError
+from google.cloud.spanner_dbapi.exceptions import NotSupportedError
+from google.cloud.spanner_dbapi.exceptions import OperationalError
+from google.cloud.spanner_dbapi.exceptions import ProgrammingError
+from google.cloud.spanner_dbapi.exceptions import Warning
+
+from google.cloud.spanner_dbapi.parse_utils import get_param_types
+
+from google.cloud.spanner_dbapi.types import BINARY
+from google.cloud.spanner_dbapi.types import DATETIME
+from google.cloud.spanner_dbapi.types import NUMBER
+from google.cloud.spanner_dbapi.types import ROWID
+from google.cloud.spanner_dbapi.types import STRING
+from google.cloud.spanner_dbapi.types import Binary
+from google.cloud.spanner_dbapi.types import Date
+from google.cloud.spanner_dbapi.types import DateFromTicks
+from google.cloud.spanner_dbapi.types import Time
+from google.cloud.spanner_dbapi.types import TimeFromTicks
+from google.cloud.spanner_dbapi.types import Timestamp
+from google.cloud.spanner_dbapi.types import TimestampStr
+from google.cloud.spanner_dbapi.types import TimestampFromTicks
+
+from google.cloud.spanner_dbapi.version import DEFAULT_USER_AGENT
+
+apilevel = "2.0"  # supports DB-API 2.0 level.
+paramstyle = "format"  # ANSI C printf format codes, e.g. ...WHERE name=%s.
+
+# Threads may share the module, but not connections. This is a paranoid threadsafety
+# level, but it is a safe starting point to use when debugging failures.
+# Eventually once transactions are working properly, we'll update the +# threadsafety level. +threadsafety = 1 + + +__all__ = [ + "Connection", + "connect", + "Cursor", + "DatabaseError", + "DataError", + "Error", + "IntegrityError", + "InterfaceError", + "InternalError", + "NotSupportedError", + "OperationalError", + "ProgrammingError", + "Warning", + "DEFAULT_USER_AGENT", + "apilevel", + "paramstyle", + "threadsafety", + "get_param_types", + "Binary", + "Date", + "DateFromTicks", + "Time", + "TimeFromTicks", + "Timestamp", + "TimestampFromTicks", + "BINARY", + "STRING", + "NUMBER", + "DATETIME", + "ROWID", + "TimestampStr", +] diff --git a/google/cloud/spanner_dbapi/_helpers.py b/google/cloud/spanner_dbapi/_helpers.py new file mode 100644 index 0000000000..3f88eda4dd --- /dev/null +++ b/google/cloud/spanner_dbapi/_helpers.py @@ -0,0 +1,113 @@ +# Copyright 2020 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from google.cloud.spanner_v1 import param_types + + +SQL_LIST_TABLES = """ +SELECT table_name +FROM information_schema.tables +WHERE table_catalog = '' +AND table_schema = @table_schema +AND table_type = 'BASE TABLE' +""" + +SQL_LIST_TABLES_AND_VIEWS = """ +SELECT table_name +FROM information_schema.tables +WHERE table_catalog = '' AND table_schema = @table_schema +""" + +SQL_GET_TABLE_COLUMN_SCHEMA = """ +SELECT COLUMN_NAME, IS_NULLABLE, SPANNER_TYPE +FROM INFORMATION_SCHEMA.COLUMNS +WHERE TABLE_SCHEMA = @schema_name AND TABLE_NAME = @table_name +""" + +# This table maps spanner_types to Spanner's data type sizes as per +# https://cloud.google.com/spanner/docs/data-types#allowable-types +# It is used to map `display_size` to a known type for Cursor.description +# after a row fetch. +# Since ResultMetadata +# https://cloud.google.com/spanner/docs/reference/rest/v1/ResultSetMetadata +# does not send back the actual size, we have to lookup the respective size. +# Some fields' sizes are dependent upon the dynamic data hence aren't sent back +# by Cloud Spanner. 
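+# For example (illustrative): an INT64 column is reported with display_size=8
+# in Cursor.description, while a STRING column has no fixed size, has no entry
+# in this map, and is therefore reported with display_size=None.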
+CODE_TO_DISPLAY_SIZE = {
+    param_types.BOOL.code: 1,
+    param_types.DATE.code: 4,
+    param_types.FLOAT64.code: 8,
+    param_types.FLOAT32.code: 4,
+    param_types.INT64.code: 8,
+    param_types.TIMESTAMP.code: 12,
+}
+
+
+class ColumnInfo:
+    """Row column description object."""
+
+    def __init__(
+        self,
+        name,
+        type_code,
+        display_size=None,
+        internal_size=None,
+        precision=None,
+        scale=None,
+        null_ok=False,
+    ):
+        self.name = name
+        self.type_code = type_code
+        self.display_size = display_size
+        self.internal_size = internal_size
+        self.precision = precision
+        self.scale = scale
+        self.null_ok = null_ok
+
+        self.fields = (
+            self.name,
+            self.type_code,
+            self.display_size,
+            self.internal_size,
+            self.precision,
+            self.scale,
+            self.null_ok,
+        )
+
+    def __repr__(self):
+        return self.__str__()
+
+    def __getitem__(self, index):
+        return self.fields[index]
+
+    def __str__(self):
+        str_repr = ", ".join(
+            filter(
+                lambda part: part is not None,
+                [
+                    "name='%s'" % self.name,
+                    "type_code=%d" % self.type_code,
+                    "display_size=%d" % self.display_size
+                    if self.display_size
+                    else None,
+                    "internal_size=%d" % self.internal_size
+                    if self.internal_size
+                    else None,
+                    "precision='%s'" % self.precision if self.precision else None,
+                    "scale='%s'" % self.scale if self.scale else None,
+                    "null_ok='%s'" % self.null_ok if self.null_ok else None,
+                ],
+            )
+        )
+        return "ColumnInfo(%s)" % str_repr
diff --git a/google/cloud/spanner_dbapi/batch_dml_executor.py b/google/cloud/spanner_dbapi/batch_dml_executor.py
new file mode 100644
index 0000000000..a3ff606295
--- /dev/null
+++ b/google/cloud/spanner_dbapi/batch_dml_executor.py
@@ -0,0 +1,135 @@
+# Copyright 2023 Google LLC All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import annotations
+
+from enum import Enum
+from typing import TYPE_CHECKING, List
+from google.cloud.spanner_dbapi.parsed_statement import (
+    ParsedStatement,
+    StatementType,
+    Statement,
+)
+from google.rpc.code_pb2 import ABORTED, OK
+from google.api_core.exceptions import Aborted
+
+from google.cloud.spanner_dbapi.utils import StreamedManyResultSets
+
+if TYPE_CHECKING:
+    from google.cloud.spanner_dbapi.cursor import Cursor
+
+
+class BatchDmlExecutor:
+    """Executor that is used when a DML batch is started. These batches only
+    accept DML statements. All DML statements are buffered locally and sent to
+    Spanner when run_batch() is called on the connection.
+
+    :type cursor: :class:`~google.cloud.spanner_dbapi.cursor.Cursor`
+    :param cursor: The cursor on which the DML batch was started.
+    """
+
+    def __init__(self, cursor: "Cursor"):
+        self._cursor = cursor
+        self._connection = cursor.connection
+        self._statements: List[Statement] = []
+
+    def execute_statement(self, parsed_statement: ParsedStatement):
+        """Executes the statement when a DML batch is active by buffering the
+        statement in-memory.
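+
+        For example (illustrative): after ``START BATCH DML`` has been
+        executed, each subsequent ``cursor.execute("INSERT ...")`` call is
+        routed here and buffered until ``RUN BATCH`` sends the batch to Spanner.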
+ + :type parsed_statement: ParsedStatement + :param parsed_statement: parsed statement containing sql query and query + params + """ + from google.cloud.spanner_dbapi import ProgrammingError + + # Note: Let the server handle it if the client-side parser did not + # recognize the type of statement. + if ( + parsed_statement.statement_type != StatementType.UPDATE + and parsed_statement.statement_type != StatementType.INSERT + and parsed_statement.statement_type != StatementType.UNKNOWN + ): + raise ProgrammingError("Only DML statements are allowed in batch DML mode.") + self._statements.append(parsed_statement.statement) + + def run_batch_dml(self): + """Executes all the buffered statements on the active dml batch by + making a call to Spanner. + """ + return run_batch_dml(self._cursor, self._statements) + + +def run_batch_dml(cursor: "Cursor", statements: List[Statement]): + """Executes all the dml statements by making a batch call to Spanner. + + :type cursor: Cursor + :param cursor: Database Cursor object + + :type statements: List[Statement] + :param statements: list of statements to execute in batch + """ + from google.cloud.spanner_dbapi import OperationalError + + many_result_set = StreamedManyResultSets() + if not statements: + return many_result_set + connection = cursor.connection + statements_tuple = [] + for statement in statements: + statements_tuple.append(statement.get_tuple()) + if not connection._client_transaction_started: + res = connection.database.run_in_transaction( + _do_batch_update_autocommit, statements_tuple + ) + many_result_set.add_iter(res) + cursor._row_count = sum([max(val, 0) for val in res]) + else: + while True: + try: + transaction = connection.transaction_checkout() + status, res = transaction.batch_update(statements_tuple) + if status.code == ABORTED: + connection._transaction = None + raise Aborted(status.message) + elif status.code != OK: + raise OperationalError(status.message) + + cursor._batch_dml_rows_count = res + many_result_set.add_iter(res) + cursor._row_count = sum([max(val, 0) for val in res]) + return many_result_set + except Aborted: + # We are raising it so it could be handled in transaction_helper.py and is retried + if cursor._in_retry_mode: + raise + else: + connection._transaction_helper.retry_transaction() + + +def _do_batch_update_autocommit(transaction, statements): + from google.cloud.spanner_dbapi import OperationalError + + status, res = transaction.batch_update(statements, last_statement=True) + if status.code == ABORTED: + raise Aborted(status.message) + elif status.code != OK: + raise OperationalError(status.message) + return res + + +class BatchMode(Enum): + DML = 1 + DDL = 2 + NONE = 3 diff --git a/google/cloud/spanner_dbapi/checksum.py b/google/cloud/spanner_dbapi/checksum.py new file mode 100644 index 0000000000..b2b3297db2 --- /dev/null +++ b/google/cloud/spanner_dbapi/checksum.py @@ -0,0 +1,80 @@ +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +"""API to calculate checksums of SQL statements results.""" + +import hashlib +import pickle + +from google.cloud.spanner_dbapi.exceptions import RetryAborted + + +class ResultsChecksum: + """Cumulative checksum. + + Used to calculate a total checksum of all the results + returned by operations executed within transaction. + Includes methods for checksums comparison. + These checksums are used while retrying an aborted + transaction to check if the results of a retried transaction + are equal to the results of the original transaction. + """ + + def __init__(self): + self.checksum = hashlib.sha256() + self.count = 0 # counter of consumed results + + def __len__(self): + """Return the number of consumed results. + + :rtype: :class:`int` + :returns: The number of results. + """ + return self.count + + def __eq__(self, other): + """Check if checksums are equal. + + :type other: :class:`google.cloud.spanner_dbapi.checksum.ResultsChecksum` + :param other: Another checksum to compare with this one. + """ + return self.checksum.digest() == other.checksum.digest() + + def consume_result(self, result): + """Add the given result into the checksum. + + :type result: Union[int, list] + :param result: Streamed row or row count from an UPDATE operation. + """ + self.checksum.update(pickle.dumps(result)) + self.count += 1 + + +def _compare_checksums(original, retried): + from google.cloud.spanner_dbapi.transaction_helper import RETRY_ABORTED_ERROR + + """Compare the given checksums. + + Raise an error if the given checksums are not equal. + + :type original: :class:`~google.cloud.spanner_dbapi.checksum.ResultsChecksum` + :param original: results checksum of the original transaction. + + :type retried: :class:`~google.cloud.spanner_dbapi.checksum.ResultsChecksum` + :param retried: results checksum of the retried transaction. + + :raises: :exc:`google.cloud.spanner_dbapi.exceptions.RetryAborted` in case if checksums are not equal. + """ + if retried != original: + raise RetryAborted(RETRY_ABORTED_ERROR) diff --git a/google/cloud/spanner_dbapi/client_side_statement_executor.py b/google/cloud/spanner_dbapi/client_side_statement_executor.py new file mode 100644 index 0000000000..ffda11f8b8 --- /dev/null +++ b/google/cloud/spanner_dbapi/client_side_statement_executor.py @@ -0,0 +1,140 @@ +# Copyright 2023 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+from typing import TYPE_CHECKING, Union
+from google.cloud.spanner_v1 import TransactionOptions
+
+# ProgrammingError is raised at runtime below, so it must be imported
+# unconditionally; importing it only under TYPE_CHECKING would cause a
+# NameError when a closed connection is used.
+from google.cloud.spanner_dbapi.exceptions import ProgrammingError
+
+if TYPE_CHECKING:
+    from google.cloud.spanner_dbapi.cursor import Cursor
+
+from google.cloud.spanner_dbapi.parsed_statement import (
+    ParsedStatement,
+    ClientSideStatementType,
+)
+from google.cloud.spanner_v1 import (
+    Type,
+    StructType,
+    TypeCode,
+    ResultSetMetadata,
+    PartialResultSet,
+)
+
+from google.cloud.spanner_v1._helpers import _make_value_pb
+from google.cloud.spanner_v1.streamed import StreamedResultSet
+
+CONNECTION_CLOSED_ERROR = "This connection is closed"
+TRANSACTION_NOT_STARTED_WARNING = (
+    "This method is non-operational as a transaction has not been started."
+)
+
+
+def execute(cursor: "Cursor", parsed_statement: ParsedStatement):
+    """Executes the client side statements by calling the relevant method.
+
+    It is an internal method that can make backwards-incompatible changes.
+
+    :type cursor: Cursor
+    :param cursor: Cursor object of the DB-API
+
+    :type parsed_statement: ParsedStatement
+    :param parsed_statement: parsed statement based on the SQL query
+    """
+    connection = cursor.connection
+    column_values = []
+    if connection.is_closed:
+        raise ProgrammingError(CONNECTION_CLOSED_ERROR)
+    statement_type = parsed_statement.client_side_statement_type
+    if statement_type == ClientSideStatementType.COMMIT:
+        connection.commit()
+        return None
+    if statement_type == ClientSideStatementType.BEGIN:
+        connection.begin(isolation_level=_get_isolation_level(parsed_statement))
+        return None
+    if statement_type == ClientSideStatementType.ROLLBACK:
+        connection.rollback()
+        return None
+    if statement_type == ClientSideStatementType.SHOW_COMMIT_TIMESTAMP:
+        if (
+            connection._transaction is not None
+            and connection._transaction.committed is not None
+        ):
+            column_values.append(connection._transaction.committed)
+        return _get_streamed_result_set(
+            ClientSideStatementType.SHOW_COMMIT_TIMESTAMP.name,
+            TypeCode.TIMESTAMP,
+            column_values,
+        )
+    if statement_type == ClientSideStatementType.SHOW_READ_TIMESTAMP:
+        if (
+            connection._snapshot is not None
+            and connection._snapshot._transaction_read_timestamp is not None
+        ):
+            column_values.append(connection._snapshot._transaction_read_timestamp)
+        return _get_streamed_result_set(
+            ClientSideStatementType.SHOW_READ_TIMESTAMP.name,
+            TypeCode.TIMESTAMP,
+            column_values,
+        )
+    if statement_type == ClientSideStatementType.START_BATCH_DML:
+        connection.start_batch_dml(cursor)
+        return None
+    if statement_type == ClientSideStatementType.RUN_BATCH:
+        return connection.run_batch()
+    if statement_type == ClientSideStatementType.ABORT_BATCH:
+        return connection.abort_batch()
+    if statement_type == ClientSideStatementType.PARTITION_QUERY:
+        partition_ids = connection.partition_query(parsed_statement)
+        return _get_streamed_result_set(
+            "PARTITION",
+            TypeCode.STRING,
+            partition_ids,
+        )
+    if statement_type == ClientSideStatementType.RUN_PARTITION:
+        return connection.run_partition(
+            parsed_statement.client_side_statement_params[0]
+        )
+    if statement_type == ClientSideStatementType.RUN_PARTITIONED_QUERY:
+        return connection.run_partitioned_query(parsed_statement)
+    if statement_type == ClientSideStatementType.SET_AUTOCOMMIT_DML_MODE:
+        return connection._set_autocommit_dml_mode(parsed_statement)
+
+
+def _get_streamed_result_set(column_name, type_code, column_values):
+    struct_type_pb = StructType(
+        fields=[StructType.Field(name=column_name, type_=Type(code=type_code))]
+    )
+
+
result_set = PartialResultSet(metadata=ResultSetMetadata(row_type=struct_type_pb)) + if len(column_values) > 0: + column_values_pb = [] + for column_value in column_values: + column_values_pb.append(_make_value_pb(column_value)) + result_set.values.extend(column_values_pb) + return StreamedResultSet(iter([result_set])) + + +def _get_isolation_level( + statement: ParsedStatement, +) -> Union[TransactionOptions.IsolationLevel, None]: + if ( + statement.client_side_statement_params is None + or len(statement.client_side_statement_params) == 0 + ): + return None + level = statement.client_side_statement_params[0] + if not isinstance(level, str) or level == "": + return None + # Replace (duplicate) whitespaces in the string with an underscore. + level = "_".join(level.split()).upper() + return TransactionOptions.IsolationLevel[level] diff --git a/google/cloud/spanner_dbapi/client_side_statement_parser.py b/google/cloud/spanner_dbapi/client_side_statement_parser.py new file mode 100644 index 0000000000..7c26c2a98d --- /dev/null +++ b/google/cloud/spanner_dbapi/client_side_statement_parser.py @@ -0,0 +1,106 @@ +# Copyright 2023 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import re + +from google.cloud.spanner_dbapi.parsed_statement import ( + ParsedStatement, + StatementType, + ClientSideStatementType, + Statement, +) + +RE_BEGIN = re.compile( + r"^\s*(?:BEGIN|START)(?:\s+TRANSACTION)?(?:\s+ISOLATION\s+LEVEL\s+(REPEATABLE\s+READ|SERIALIZABLE))?\s*$", + re.IGNORECASE, +) +RE_COMMIT = re.compile(r"^\s*(COMMIT)(\s+TRANSACTION)?\s*$", re.IGNORECASE) +RE_ROLLBACK = re.compile(r"^\s*(ROLLBACK)(\s+TRANSACTION)?\s*$", re.IGNORECASE) +RE_SHOW_COMMIT_TIMESTAMP = re.compile( + r"^\s*(SHOW)\s+(VARIABLE)\s+(COMMIT_TIMESTAMP)\s*$", re.IGNORECASE +) +RE_SHOW_READ_TIMESTAMP = re.compile( + r"^\s*(SHOW)\s+(VARIABLE)\s+(READ_TIMESTAMP)\s*$", re.IGNORECASE +) +RE_START_BATCH_DML = re.compile(r"^\s*(START)\s+(BATCH)\s+(DML)\s*$", re.IGNORECASE) +RE_RUN_BATCH = re.compile(r"^\s*(RUN)\s+(BATCH)\s*$", re.IGNORECASE) +RE_ABORT_BATCH = re.compile(r"^\s*(ABORT)\s+(BATCH)\s*$", re.IGNORECASE) +RE_PARTITION_QUERY = re.compile(r"^\s*(PARTITION)\s+(.+)", re.IGNORECASE) +RE_RUN_PARTITION = re.compile(r"^\s*(RUN)\s+(PARTITION)\s+(.+)", re.IGNORECASE) +RE_RUN_PARTITIONED_QUERY = re.compile( + r"^\s*(RUN)\s+(PARTITIONED)\s+(QUERY)\s+(.+)", re.IGNORECASE +) +RE_SET_AUTOCOMMIT_DML_MODE = re.compile( + r"^\s*(SET)\s+(AUTOCOMMIT_DML_MODE)\s+(=)\s+(.+)", re.IGNORECASE +) + + +def parse_stmt(query): + """Parses the sql query to check if it matches with any of the client side + statement regex. + + It is an internal method that can make backwards-incompatible changes. + + :type query: str + :param query: sql query + + :rtype: ParsedStatement + :returns: ParsedStatement object. 
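+
+    Example (illustrative)::
+
+        parse_stmt("commit")    # ParsedStatement with ClientSideStatementType.COMMIT
+        parse_stmt("SELECT 1")  # None; not a client-side statement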
+ """ + client_side_statement_type = None + client_side_statement_params = [] + if RE_COMMIT.match(query): + client_side_statement_type = ClientSideStatementType.COMMIT + elif RE_ROLLBACK.match(query): + client_side_statement_type = ClientSideStatementType.ROLLBACK + elif RE_SHOW_COMMIT_TIMESTAMP.match(query): + client_side_statement_type = ClientSideStatementType.SHOW_COMMIT_TIMESTAMP + elif RE_SHOW_READ_TIMESTAMP.match(query): + client_side_statement_type = ClientSideStatementType.SHOW_READ_TIMESTAMP + elif RE_START_BATCH_DML.match(query): + client_side_statement_type = ClientSideStatementType.START_BATCH_DML + elif RE_BEGIN.match(query): + match = re.search(RE_BEGIN, query) + isolation_level = match.group(1) + if isolation_level is not None: + client_side_statement_params.append(isolation_level) + client_side_statement_type = ClientSideStatementType.BEGIN + elif RE_RUN_BATCH.match(query): + client_side_statement_type = ClientSideStatementType.RUN_BATCH + elif RE_ABORT_BATCH.match(query): + client_side_statement_type = ClientSideStatementType.ABORT_BATCH + elif RE_RUN_PARTITIONED_QUERY.match(query): + match = re.search(RE_RUN_PARTITIONED_QUERY, query) + client_side_statement_params.append(match.group(4)) + client_side_statement_type = ClientSideStatementType.RUN_PARTITIONED_QUERY + elif RE_PARTITION_QUERY.match(query): + match = re.search(RE_PARTITION_QUERY, query) + client_side_statement_params.append(match.group(2)) + client_side_statement_type = ClientSideStatementType.PARTITION_QUERY + elif RE_RUN_PARTITION.match(query): + match = re.search(RE_RUN_PARTITION, query) + client_side_statement_params.append(match.group(3)) + client_side_statement_type = ClientSideStatementType.RUN_PARTITION + elif RE_SET_AUTOCOMMIT_DML_MODE.match(query): + match = re.search(RE_SET_AUTOCOMMIT_DML_MODE, query) + client_side_statement_params.append(match.group(4)) + client_side_statement_type = ClientSideStatementType.SET_AUTOCOMMIT_DML_MODE + if client_side_statement_type is not None: + return ParsedStatement( + StatementType.CLIENT_SIDE, + Statement(query), + client_side_statement_type, + client_side_statement_params, + ) + return None diff --git a/google/cloud/spanner_dbapi/connection.py b/google/cloud/spanner_dbapi/connection.py new file mode 100644 index 0000000000..db18f44067 --- /dev/null +++ b/google/cloud/spanner_dbapi/connection.py @@ -0,0 +1,830 @@ +# Copyright 2020 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+"""DB-API Connection for Google Cloud Spanner."""
+import warnings
+
+from google.api_core.exceptions import Aborted
+from google.api_core.gapic_v1.client_info import ClientInfo
+from google.auth.credentials import AnonymousCredentials
+
+from google.cloud import spanner_v1 as spanner
+from google.cloud.spanner_dbapi import partition_helper
+from google.cloud.spanner_dbapi.batch_dml_executor import BatchMode, BatchDmlExecutor
+from google.cloud.spanner_dbapi.parsed_statement import AutocommitDmlMode
+from google.cloud.spanner_dbapi.partition_helper import PartitionId
+from google.cloud.spanner_dbapi.parsed_statement import ParsedStatement, Statement
+from google.cloud.spanner_dbapi.transaction_helper import TransactionRetryHelper
+from google.cloud.spanner_dbapi.cursor import Cursor
+from google.cloud.spanner_v1 import RequestOptions, TransactionOptions
+from google.cloud.spanner_v1.database_sessions_manager import TransactionType
+from google.cloud.spanner_v1.snapshot import Snapshot
+
+from google.cloud.spanner_dbapi.exceptions import (
+    InterfaceError,
+    OperationalError,
+    ProgrammingError,
+)
+from google.cloud.spanner_dbapi.version import DEFAULT_USER_AGENT
+from google.cloud.spanner_dbapi.version import PY_VERSION
+
+
+CLIENT_TRANSACTION_NOT_STARTED_WARNING = (
+    "This method is non-operational as a transaction has not been started."
+)
+
+
+def check_not_closed(function):
+    """`Connection` class methods decorator.
+
+    Raise an exception if the connection is closed.
+
+    :raises: :class:`InterfaceError` if the connection is closed.
+    """
+
+    def wrapper(connection, *args, **kwargs):
+        if connection.is_closed:
+            raise InterfaceError("Connection is already closed")
+
+        return function(connection, *args, **kwargs)
+
+    return wrapper
+
+
+class Connection:
+    """Representation of a DB-API connection to a Cloud Spanner database.
+
+    You most likely don't need to instantiate `Connection` objects
+    directly, use the `connect` module function instead.
+
+    :type instance: :class:`~google.cloud.spanner_v1.instance.Instance`
+    :param instance: Cloud Spanner instance to connect to.
+
+    :type database: :class:`~google.cloud.spanner_v1.database.Database`
+    :param database: The database to which the connection is linked.
+
+    :type read_only: bool
+    :param read_only:
+        Flag to indicate that the connection may only execute queries, not update or DDL statements.
+        If True, the connection will use a single-use read-only transaction with strong timestamp
+        bound for each new statement, and will immediately see any changes that have been committed by
+        any other transaction.
+        If autocommit is false, the connection will automatically start a new multi-use read-only transaction
+        with strong timestamp bound when the first statement is executed. This read-only transaction will be
+        used for all subsequent statements until either commit() or rollback() is called on the connection. The
+        read-only transaction will read from a consistent snapshot of the database at the time that the
+        transaction started. This means that the transaction will not see any changes that have been
+        committed by other transactions since the start of the read-only transaction. Committing or rolling
+        back the read-only transaction is semantically the same, and only indicates that the read-only
+        transaction should end and that a new one should be started when the next statement is executed.
+
+    **kwargs: Initial value for connection variables.
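+
+    Example (illustrative; the instance, database, and ``Singers`` table are
+    placeholders):
+
+    .. code-block:: python
+
+        from google.cloud.spanner_dbapi import connect
+
+        connection = connect("my-instance-id", "my-database-id")
+        cursor = connection.cursor()
+        cursor.execute("SELECT FirstName FROM Singers WHERE SingerId = %s", (1,))
+        rows = cursor.fetchall()
+        connection.commit()
+        connection.close()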
+ """ + + def __init__(self, instance, database=None, read_only=False, **kwargs): + self._instance = instance + self._database = database + self._ddl_statements = [] + + self._transaction = None + self._session = None + self._snapshot = None + + self.is_closed = False + self._autocommit = False + # indicator to know if the session pool used by + # this connection should be cleared on the + # connection close + self._own_pool = True + self._read_only = read_only + self._staleness = None + self.request_priority = None + self._transaction_begin_marked = False + self._transaction_isolation_level = None + # whether transaction started at Spanner. This means that we had + # made at least one call to Spanner. + self._spanner_transaction_started = False + self._batch_mode = BatchMode.NONE + self._batch_dml_executor: BatchDmlExecutor = None + self._transaction_helper = TransactionRetryHelper(self) + self._autocommit_dml_mode: AutocommitDmlMode = AutocommitDmlMode.TRANSACTIONAL + self._connection_variables = kwargs + + @property + def spanner_client(self): + """Client for interacting with Cloud Spanner API. This property exposes + the spanner client so that underlying methods can be accessed. + """ + return self._instance._client + + @property + def current_schema(self): + """schema name for this connection. + + :rtype: str + :returns: the current default schema of this connection. Currently, this + is always "" for GoogleSQL and "public" for PostgreSQL databases. + """ + if self.database is None: + raise ValueError("database property not set on the connection") + return self.database.default_schema_name + + @property + def autocommit(self): + """Autocommit mode flag for this connection. + + :rtype: bool + :returns: Autocommit mode flag value. + """ + return self._autocommit + + @autocommit.setter + def autocommit(self, value): + """Change this connection autocommit mode. Setting this value to True + while a transaction is active will commit the current transaction. + + :type value: bool + :param value: New autocommit mode state. + """ + if value and not self._autocommit and self._spanner_transaction_started: + self.commit() + + self._autocommit = value + + @property + def database(self): + """Database to which this connection relates. + + :rtype: :class:`~google.cloud.spanner_v1.database.Database` + :returns: The related database object. + """ + return self._database + + @property + def autocommit_dml_mode(self): + """Modes for executing DML statements in autocommit mode for this connection. + + The DML autocommit modes are: + 1) TRANSACTIONAL - DML statements are executed as single read-write transaction. + After successful execution, the DML statement is guaranteed to have been applied + exactly once to the database. + + 2) PARTITIONED_NON_ATOMIC - DML statements are executed as partitioned DML transactions. + If an error occurs during the execution of the DML statement, it is possible that the + statement has been applied to some but not all of the rows specified in the statement. + + :rtype: :class:`~google.cloud.spanner_dbapi.parsed_statement.AutocommitDmlMode` + """ + return self._autocommit_dml_mode + + @property + def inside_transaction(self): + warnings.warn( + "This method is deprecated. Use _spanner_transaction_started field", + DeprecationWarning, + ) + return ( + self._transaction + and not self._transaction.committed + and not self._transaction.rolled_back + ) + + @property + def _client_transaction_started(self): + """Flag: whether transaction started at client side. 
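+
+        For example (illustrative): with ``autocommit=False`` this is always
+        True; with ``autocommit=True`` it becomes True only after an explicit
+        ``BEGIN`` statement (or ``connection.begin()``) has marked a transaction.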
+ + Returns: + bool: True if transaction started, False otherwise. + """ + return (not self._autocommit) or self._transaction_begin_marked + + @property + def _ignore_transaction_warnings(self): + return self._connection_variables.get("ignore_transaction_warnings", False) + + @property + def instance(self): + """Instance to which this connection relates. + + :rtype: :class:`~google.cloud.spanner_v1.instance.Instance` + :returns: The related instance object. + """ + return self._instance + + @property + def read_only(self): + """Flag: the connection can be used only for database reads. + + Returns: + bool: + True if the connection may only be used for database reads. + """ + return self._read_only + + @read_only.setter + def read_only(self, value): + """`read_only` flag setter. + + Args: + value (bool): True for ReadOnly mode, False for ReadWrite. + """ + if self._read_only != value and self._spanner_transaction_started: + raise ValueError( + "Connection read/write mode can't be changed while a transaction is in progress. " + "Commit or rollback the current transaction and try again." + ) + self._read_only = value + + @property + def request_options(self): + """Options for the next SQL operations. + + Returns: + google.cloud.spanner_v1.RequestOptions: + Request options. + """ + if self.request_priority is None: + return + + req_opts = RequestOptions(priority=self.request_priority) + self.request_priority = None + return req_opts + + @property + def transaction_tag(self): + """The transaction tag that will be applied to the next read/write + transaction on this `Connection`. This property is automatically cleared + when a new transaction is started. + + Returns: + str: The transaction tag that will be applied to the next read/write transaction. + """ + return self._connection_variables.get("transaction_tag", None) + + @transaction_tag.setter + def transaction_tag(self, value): + """Sets the transaction tag for the next read/write transaction on this + `Connection`. This property is automatically cleared when a new transaction + is started. + + Args: + value (str): The transaction tag for the next read/write transaction. + """ + self._connection_variables["transaction_tag"] = value + + @property + def isolation_level(self): + """The default isolation level that is used for all read/write + transactions on this `Connection`. + + Returns: + google.cloud.spanner_v1.types.TransactionOptions.IsolationLevel: + The isolation level that is used for read/write transactions on + this `Connection`. + """ + return self._connection_variables.get( + "isolation_level", + TransactionOptions.IsolationLevel.ISOLATION_LEVEL_UNSPECIFIED, + ) + + @isolation_level.setter + def isolation_level(self, value: TransactionOptions.IsolationLevel): + """Sets the isolation level that is used for all read/write + transactions on this `Connection`. + + Args: + value (google.cloud.spanner_v1.types.TransactionOptions.IsolationLevel): + The isolation level for all read/write transactions on this + `Connection`. + """ + self._connection_variables["isolation_level"] = value + + @property + def staleness(self): + """Current read staleness option value of this `Connection`. + + Returns: + dict: Staleness type and value. + """ + return self._staleness or {} + + @staleness.setter + def staleness(self, value): + """Read staleness option setter. + + Args: + value (dict): Staleness type and value. 
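+
+        Example (illustrative)::
+
+            import datetime
+
+            connection.staleness = {"exact_staleness": datetime.timedelta(seconds=15)}
+            connection.staleness = None  # reset to strong reads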
+ """ + if self._spanner_transaction_started and value != self._staleness: + raise ValueError( + "`staleness` option can't be changed while a transaction is in progress. " + "Commit or rollback the current transaction and try again." + ) + + possible_opts = ( + "read_timestamp", + "min_read_timestamp", + "max_staleness", + "exact_staleness", + ) + if value is not None and sum([opt in value for opt in possible_opts]) != 1: + raise ValueError( + "Expected one of the following staleness options: " + "read_timestamp, min_read_timestamp, max_staleness, exact_staleness." + ) + + self._staleness = value + + def _session_checkout(self): + """Get a Cloud Spanner session from the pool. + + If there is already a session associated with + this connection, it'll be used instead. + + :rtype: :class:`google.cloud.spanner_v1.session.Session` + :returns: Cloud Spanner session object ready to use. + """ + if self.database is None: + raise ValueError("Database needs to be passed for this operation") + + if not self._session: + transaction_type = ( + TransactionType.READ_ONLY + if self.read_only + else TransactionType.READ_WRITE + ) + self._session = self.database._sessions_manager.get_session( + transaction_type + ) + + return self._session + + def _release_session(self): + """Release the currently used Spanner session. + + The session will be returned into the sessions pool. + """ + if self._session is None: + return + + if self.database is None: + raise ValueError("Database needs to be passed for this operation") + + self.database._sessions_manager.put_session(self._session) + self._session = None + + def transaction_checkout(self): + """Get a Cloud Spanner transaction. + + Begin a new transaction, if there is no transaction in + this connection yet. Return the started one otherwise. + + This method is a no-op if the connection is in autocommit mode and no + explicit transaction has been started + + :rtype: :class:`google.cloud.spanner_v1.transaction.Transaction` + :returns: A Cloud Spanner transaction object, ready to use. + """ + if not self.read_only and self._client_transaction_started: + if not self._spanner_transaction_started: + self._transaction = self._session_checkout().transaction() + self._transaction.transaction_tag = self.transaction_tag + if self._transaction_isolation_level: + self._transaction.isolation_level = ( + self._transaction_isolation_level + ) + else: + self._transaction.isolation_level = self.isolation_level + self.transaction_tag = None + self._snapshot = None + self._spanner_transaction_started = True + self._transaction.begin() + + return self._transaction + + def snapshot_checkout(self): + """Get a Cloud Spanner snapshot. + + Initiate a new multi-use snapshot, if there is no snapshot in + this connection yet. Return the existing one otherwise. + + :rtype: :class:`google.cloud.spanner_v1.snapshot.Snapshot` + :returns: A Cloud Spanner snapshot object, ready to use. + """ + if self.read_only and self._client_transaction_started: + if not self._spanner_transaction_started: + self._snapshot = Snapshot( + self._session_checkout(), multi_use=True, **self.staleness + ) + self._transaction = None + self._snapshot.begin() + self._spanner_transaction_started = True + + return self._snapshot + + def close(self): + """Closes this connection. + + The connection will be unusable from this point forward. If the + connection has an active transaction, it will be rolled back. 
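+
+        Example (illustrative): ``Connection`` is also a context manager, so
+        ``with connect("my-instance-id", "my-database-id") as conn: ...``
+        commits pending work and then closes the connection automatically on exit.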
+        """
+        if self._spanner_transaction_started and not self._read_only:
+            self._transaction.rollback()
+
+        if self._own_pool and self.database:
+            self.database._sessions_manager._pool.clear()
+
+        self.is_closed = True
+
+    @check_not_closed
+    def begin(self, isolation_level=None):
+        """
+        Marks the transaction as started.
+
+        :raises: :class:`InterfaceError`: if this connection is closed.
+        :raises: :class:`OperationalError`: if there is an existing transaction
+            that has already been started.
+        """
+        if self._transaction_begin_marked:
+            raise OperationalError("A transaction has already started")
+        if self._spanner_transaction_started:
+            raise OperationalError(
+                "Beginning a new transaction is not allowed when a transaction "
+                "is already running"
+            )
+        self._transaction_begin_marked = True
+        self._transaction_isolation_level = isolation_level
+
+    def commit(self):
+        """Commits any pending transaction to the database.
+        This is a no-op if there is no active client transaction.
+        """
+        if self.database is None:
+            raise ValueError("Database needs to be passed for this operation")
+        if not self._client_transaction_started:
+            if not self._ignore_transaction_warnings:
+                warnings.warn(
+                    CLIENT_TRANSACTION_NOT_STARTED_WARNING, UserWarning, stacklevel=2
+                )
+            return
+
+        self.run_prior_DDL_statements()
+        try:
+            if self._spanner_transaction_started and not self._read_only:
+                self._transaction.commit()
+        except Aborted:
+            self._transaction_helper.retry_transaction()
+            self.commit()
+        finally:
+            self._reset_post_commit_or_rollback()
+
+    def rollback(self):
+        """Rolls back any pending transaction.
+        This is a no-op if there is no active client transaction.
+        """
+        if not self._client_transaction_started:
+            if not self._ignore_transaction_warnings:
+                warnings.warn(
+                    CLIENT_TRANSACTION_NOT_STARTED_WARNING, UserWarning, stacklevel=2
+                )
+            return
+        try:
+            if self._spanner_transaction_started and not self._read_only:
+                self._transaction.rollback()
+        finally:
+            self._reset_post_commit_or_rollback()
+
+    def _reset_post_commit_or_rollback(self):
+        self._release_session()
+        self._transaction_helper.reset()
+        self._transaction_begin_marked = False
+        self._transaction_isolation_level = None
+        self._spanner_transaction_started = False
+
+    @check_not_closed
+    def cursor(self):
+        """Factory to create a DB-API Cursor."""
+        return Cursor(self)
+
+    @check_not_closed
+    def run_prior_DDL_statements(self):
+        if self.database is None:
+            raise ValueError("Database needs to be passed for this operation")
+        if self._ddl_statements:
+            ddl_statements = self._ddl_statements
+            self._ddl_statements = []
+
+            return self.database.update_ddl(ddl_statements).result()
+
+    def run_statement(
+        self, statement: Statement, request_options: RequestOptions = None
+    ):
+        """Run a single SQL statement in the begun transaction.
+
+        This method is never used in autocommit mode. In
+        non-autocommit mode, however, it remembers every executed
+        SQL statement with its parameters.
+
+        :type statement: :class:`Statement`
+        :param statement: SQL statement to execute.
+
+        :type request_options: :class:`RequestOptions`
+        :param request_options: Request options to use for this statement.
+
+        :rtype: :class:`google.cloud.spanner_v1.streamed.StreamedResultSet`
+        :returns: Streamed result set of the statement.
+ """ + transaction = self.transaction_checkout() + return transaction.execute_sql( + statement.sql, + statement.params, + param_types=statement.param_types, + request_options=request_options or self.request_options, + ) + + @check_not_closed + def validate(self): + """ + Execute a minimal request to check if the connection + is valid and the related database is reachable. + + Raise an exception in case if the connection is closed, + invalid, target database is not found, or the request result + is incorrect. + + :raises: :class:`InterfaceError`: if this connection is closed. + :raises: :class:`OperationalError`: if the request result is incorrect. + :raises: :class:`google.cloud.exceptions.NotFound`: if the linked instance + or database doesn't exist. + """ + if self.database is None: + raise ValueError("Database needs to be passed for this operation") + with self.database.snapshot() as snapshot: + result = list(snapshot.execute_sql("SELECT 1")) + if result != [[1]]: + raise OperationalError( + "The checking query (SELECT 1) returned an unexpected result: %s. " + "Expected: [[1]]" % result + ) + + @check_not_closed + def start_batch_dml(self, cursor): + if self._batch_mode is not BatchMode.NONE: + raise ProgrammingError( + "Cannot start a DML batch when a batch is already active" + ) + if self.read_only: + raise ProgrammingError( + "Cannot start a DML batch when the connection is in read-only mode" + ) + self._batch_mode = BatchMode.DML + self._batch_dml_executor = BatchDmlExecutor(cursor) + + @check_not_closed + def execute_batch_dml_statement(self, parsed_statement: ParsedStatement): + if self._batch_mode is not BatchMode.DML: + raise ProgrammingError( + "Cannot execute statement when the BatchMode is not DML" + ) + self._batch_dml_executor.execute_statement(parsed_statement) + + @check_not_closed + def run_batch(self): + if self._batch_mode is BatchMode.NONE: + raise ProgrammingError("Cannot run a batch when the BatchMode is not set") + try: + if self._batch_mode is BatchMode.DML: + many_result_set = self._batch_dml_executor.run_batch_dml() + finally: + self._batch_mode = BatchMode.NONE + self._batch_dml_executor = None + return many_result_set + + @check_not_closed + def abort_batch(self): + if self._batch_mode is BatchMode.NONE: + raise ProgrammingError("Cannot abort a batch when the BatchMode is not set") + if self._batch_mode is BatchMode.DML: + self._batch_dml_executor = None + self._batch_mode = BatchMode.NONE + + @check_not_closed + def partition_query( + self, + parsed_statement: ParsedStatement, + query_options=None, + ): + statement = parsed_statement.statement + partitioned_query = parsed_statement.client_side_statement_params[0] + self._partitioned_query_validation(partitioned_query, statement) + + batch_snapshot = self._database.batch_snapshot() + partition_ids = [] + partitions = list( + batch_snapshot.generate_query_batches( + partitioned_query, + statement.params, + statement.param_types, + query_options=query_options, + ) + ) + + batch_transaction_id = batch_snapshot.get_batch_transaction_id() + for partition in partitions: + partition_ids.append( + partition_helper.encode_to_string(batch_transaction_id, partition) + ) + return partition_ids + + @check_not_closed + def run_partition(self, encoded_partition_id): + partition_id: PartitionId = partition_helper.decode_from_string( + encoded_partition_id + ) + batch_transaction_id = partition_id.batch_transaction_id + batch_snapshot = self._database.batch_snapshot( + read_timestamp=batch_transaction_id.read_timestamp, + 
session_id=batch_transaction_id.session_id, + transaction_id=batch_transaction_id.transaction_id, + ) + return batch_snapshot.process(partition_id.partition_result) + + @check_not_closed + def run_partitioned_query( + self, + parsed_statement: ParsedStatement, + ): + statement = parsed_statement.statement + partitioned_query = parsed_statement.client_side_statement_params[0] + self._partitioned_query_validation(partitioned_query, statement) + batch_snapshot = self._database.batch_snapshot() + return batch_snapshot.run_partitioned_query( + partitioned_query, statement.params, statement.param_types + ) + + @check_not_closed + def _set_autocommit_dml_mode( + self, + parsed_statement: ParsedStatement, + ): + autocommit_dml_mode_str = parsed_statement.client_side_statement_params[0] + autocommit_dml_mode = AutocommitDmlMode[autocommit_dml_mode_str.upper()] + self.set_autocommit_dml_mode(autocommit_dml_mode) + + def set_autocommit_dml_mode( + self, + autocommit_dml_mode, + ): + """ + Sets the mode for executing DML statements in autocommit mode for this connection. + This mode is only used when the connection is in autocommit mode, and may only + be set while the transaction is in autocommit mode and not in a temporary transaction. + """ + + if self._client_transaction_started is True: + raise ProgrammingError( + "Cannot set autocommit DML mode while not in autocommit mode or while a transaction is active." + ) + if self.read_only is True: + raise ProgrammingError( + "Cannot set autocommit DML mode for a read-only connection." + ) + if self._batch_mode is not BatchMode.NONE: + raise ProgrammingError("Cannot set autocommit DML mode while in a batch.") + self._autocommit_dml_mode = autocommit_dml_mode + + def _partitioned_query_validation(self, partitioned_query, statement): + if self.read_only is not True and self._client_transaction_started is True: + raise ProgrammingError( + "Partitioned query is not supported, because the connection is in a read/write transaction." + ) + + def __enter__(self): + return self + + def __exit__(self, etype, value, traceback): + self.commit() + self.close() + + +def connect( + instance_id, + database_id=None, + project=None, + credentials=None, + pool=None, + user_agent=None, + client=None, + route_to_leader_enabled=True, + database_role=None, + **kwargs, +): + """Creates a connection to a Google Cloud Spanner database. + + :type instance_id: str + :param instance_id: The ID of the instance to connect to. + + :type database_id: str + :param database_id: (Optional) The ID of the database to connect to. + + :type project: str + :param project: (Optional) The ID of the project which owns the + instances, tables and data. If not provided, will + attempt to determine from the environment. + + :type credentials: Union[:class:`~google.auth.credentials.Credentials`, str] + :param credentials: (Optional) The authorization credentials to attach to + requests. These credentials identify this application + to the service. These credentials may be specified as + a file path indicating where to retrieve the service + account JSON for the credentials to connect to + Cloud Spanner. If none are specified, the client will + attempt to ascertain the credentials from the + environment. + + :type pool: Concrete subclass of + :class:`~google.cloud.spanner_v1.pool.AbstractSessionPool`. + :param pool: (Optional). Session pool to be used by database. + + :type user_agent: str + :param user_agent: (Optional) User agent to be used with this connection's + requests. 
+ + :type client: Concrete subclass of + :class:`~google.cloud.spanner_v1.Client`. + :param client: (Optional) Custom user provided Client Object + + :type route_to_leader_enabled: boolean + :param route_to_leader_enabled: + (Optional) Default True. Set route_to_leader_enabled as False to + disable leader aware routing. Disabling leader aware routing would + route all requests in RW/PDML transactions to the closest region. + + :type database_role: str + :param database_role: (Optional) The database role to connect as when using + fine-grained access controls. + + **kwargs: Initial value for connection variables. + + + :rtype: :class:`google.cloud.spanner_dbapi.connection.Connection` + :returns: Connection object associated with the given Google Cloud Spanner + resource. + """ + if client is None: + client_info = ClientInfo( + user_agent=user_agent or DEFAULT_USER_AGENT, + python_version=PY_VERSION, + client_library_version=spanner.__version__, + ) + if isinstance(credentials, str): + client = spanner.Client.from_service_account_json( + credentials, + project=project, + client_info=client_info, + route_to_leader_enabled=route_to_leader_enabled, + ) + else: + client_options = None + if isinstance(credentials, AnonymousCredentials): + client_options = kwargs.get("client_options") + client = spanner.Client( + project=project, + credentials=credentials, + client_info=client_info, + route_to_leader_enabled=route_to_leader_enabled, + client_options=client_options, + ) + else: + if project is not None and client.project != project: + raise ValueError("project in url does not match client object project") + + instance = client.instance(instance_id) + database = None + if database_id: + logger = kwargs.get("logger") + database = instance.database( + database_id, pool=pool, database_role=database_role, logger=logger + ) + conn = Connection(instance, database, **kwargs) + if pool is not None: + conn._own_pool = False + + return conn diff --git a/google/cloud/spanner_dbapi/cursor.py b/google/cloud/spanner_dbapi/cursor.py new file mode 100644 index 0000000000..75a368c89f --- /dev/null +++ b/google/cloud/spanner_dbapi/cursor.py @@ -0,0 +1,609 @@ +# Copyright 2020 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +"""Database cursor for Google Cloud Spanner DB API.""" +from collections import namedtuple + +import sqlparse + +from google.api_core.exceptions import Aborted +from google.api_core.exceptions import AlreadyExists +from google.api_core.exceptions import FailedPrecondition +from google.api_core.exceptions import InternalServerError +from google.api_core.exceptions import InvalidArgument +from google.api_core.exceptions import OutOfRange + +from google.cloud import spanner_v1 as spanner +from google.cloud.spanner_dbapi.batch_dml_executor import BatchMode +from google.cloud.spanner_dbapi.exceptions import IntegrityError +from google.cloud.spanner_dbapi.exceptions import InterfaceError +from google.cloud.spanner_dbapi.exceptions import OperationalError +from google.cloud.spanner_dbapi.exceptions import ProgrammingError + +from google.cloud.spanner_dbapi import ( + _helpers, + client_side_statement_executor, + batch_dml_executor, +) +from google.cloud.spanner_dbapi._helpers import ColumnInfo +from google.cloud.spanner_dbapi._helpers import CODE_TO_DISPLAY_SIZE + +from google.cloud.spanner_dbapi import parse_utils +from google.cloud.spanner_dbapi.parse_utils import get_param_types +from google.cloud.spanner_dbapi.parsed_statement import ( + StatementType, + Statement, + ParsedStatement, + AutocommitDmlMode, +) +from google.cloud.spanner_dbapi.transaction_helper import CursorStatementType +from google.cloud.spanner_dbapi.utils import PeekIterator +from google.cloud.spanner_dbapi.utils import StreamedManyResultSets +from google.cloud.spanner_v1 import RequestOptions +from google.cloud.spanner_v1.merged_result_set import MergedResultSet + +ColumnDetails = namedtuple("column_details", ["null_ok", "spanner_type"]) + + +def check_not_closed(function): + """`Cursor` class methods decorator. + + Raise an exception if the cursor is closed, or not bound to a + connection, or the parent connection is closed. + + :raises: :class:`InterfaceError` if this cursor is closed. + :raises: :class:`ProgrammingError` if this cursor is not bound to a connection. + """ + + def wrapper(cursor, *args, **kwargs): + if not cursor.connection: + raise ProgrammingError("Cursor is not connected to the database") + + if cursor.is_closed: + raise InterfaceError("Cursor and/or connection is already closed.") + + return function(cursor, *args, **kwargs) + + return wrapper + + +class Cursor(object): + """Database cursor to manage the context of a fetch operation. + + :type connection: :class:`~google.cloud.spanner_dbapi.connection.Connection` + :param connection: A DB-API connection to Google Cloud Spanner. + """ + + def __init__(self, connection): + self._itr = None + self._result_set = None + self._row_count = None + self.lastrowid = None + self.connection = connection + self.transaction_helper = self.connection._transaction_helper + self._is_closed = False + # the number of rows to fetch at a time with fetchmany() + self.arraysize = 1 + self._parsed_statement: ParsedStatement = None + self._in_retry_mode = False + self._batch_dml_rows_count = None + self._request_tag = None + + @property + def request_tag(self): + """The request tag that will be applied to the next statement on this + cursor. This property is automatically cleared when a statement is + executed. + + Returns: + str: The request tag that will be applied to the next statement on + this cursor. + """ + return self._request_tag + + @request_tag.setter + def request_tag(self, value): + """Sets the request tag for the next statement on this cursor. 
This
+        property is automatically cleared when a statement is executed.
+
+        Args:
+            value (str): The request tag for the statement.
+        """
+        self._request_tag = value
+
+    @property
+    def request_options(self):
+        options = self.connection.request_options
+        if self._request_tag:
+            if not options:
+                options = RequestOptions()
+            options.request_tag = self._request_tag
+            self._request_tag = None
+        return options
+
+    @property
+    def is_closed(self):
+        """The cursor close indicator.
+
+        :rtype: bool
+        :returns: True if the cursor or the parent connection is closed,
+                  otherwise False.
+        """
+        return self._is_closed or self.connection.is_closed
+
+    @property
+    def description(self):
+        """
+        Read-only attribute containing the description of the result
+        columns, of the form:
+
+        - ``name``
+        - ``type_code``
+        - ``display_size``
+        - ``internal_size``
+        - ``precision``
+        - ``scale``
+        - ``null_ok``
+
+        :rtype: tuple
+        :returns: The result columns' description.
+        """
+        if (
+            self._result_set is None
+            or not getattr(self._result_set, "metadata", None)
+            or self._result_set.metadata.row_type is None
+            or self._result_set.metadata.row_type.fields is None
+            or len(self._result_set.metadata.row_type.fields) == 0
+        ):
+            return
+
+        columns = []
+        for field in self._result_set.metadata.row_type.fields:
+            columns.append(
+                ColumnInfo(
+                    name=field.name,
+                    type_code=field.type_.code,
+                    # Size of the SQL type of the column.
+                    display_size=CODE_TO_DISPLAY_SIZE.get(field.type_.code),
+                    # Client perceived size of the column.
+                    internal_size=field._pb.ByteSize(),
+                )
+            )
+        return tuple(columns)
+
+    @property
+    def rowcount(self):
+        """The number of rows updated by the last INSERT, UPDATE or DELETE
+        statement executed via ``execute()``. For SELECT statements the
+        rowcount is -1.
+
+        :rtype: int
+        :returns: The number of updated rows, or -1 for SELECT statements.
+        """
+
+        if self._row_count is not None or self._result_set is None:
+            return self._row_count
+
+        stats = getattr(self._result_set, "stats", None)
+        if stats is not None and "row_count_exact" in stats:
+            return stats.row_count_exact
+
+        return -1
+
+    @check_not_closed
+    def callproc(self, procname, args=None):
+        """A no-op, raising an error if the cursor or connection is closed."""
+        pass
+
+    @check_not_closed
+    def nextset(self):
+        """A no-op, raising an error if the cursor or connection is closed."""
+        pass
+
+    @check_not_closed
+    def setinputsizes(self, sizes):
+        """A no-op, raising an error if the cursor or connection is closed."""
+        pass
+
+    @check_not_closed
+    def setoutputsize(self, size, column=None):
+        """A no-op, raising an error if the cursor or connection is closed."""
+        pass
+
+    def close(self):
+        """Closes this cursor."""
+        self._is_closed = True
+
+    def _do_execute_update_in_autocommit(self, transaction, sql, params):
+        """This function should only be used in autocommit mode."""
+        self.connection._transaction = transaction
+        self.connection._snapshot = None
+        self._result_set = transaction.execute_sql(
+            sql,
+            params=params,
+            param_types=get_param_types(params),
+            last_statement=True,
+        )
+        self._itr = PeekIterator(self._result_set)
+        self._row_count = None
+
+    def _batch_DDLs(self, sql):
+        """
+        Check that the given operation contains only DDL
+        statements and batch them into an internal list.
+
+        :type sql: str
+        :param sql: A SQL query statement.
+
+        :raises: :class:`ValueError` if a non-DDL statement is
+            present in the operation.
+ """ + statements = [] + for ddl in sqlparse.split(sql): + if ddl: + ddl = ddl.rstrip(";") + if ( + parse_utils.classify_statement(ddl).statement_type + != StatementType.DDL + ): + raise ValueError("Only DDL statements may be batched.") + + statements.append(ddl) + + # Only queue DDL statements if they are all correctly classified. + self.connection._ddl_statements.extend(statements) + + def _reset(self): + if self.connection.database is None: + raise ValueError("Database needs to be passed for this operation") + self._itr = None + self._result_set = None + self._row_count = None + self._batch_dml_rows_count = None + + @check_not_closed + def execute(self, sql, args=None): + self._execute(sql, args, False) + + def _execute(self, sql, args=None, call_from_execute_many=False): + """Prepares and executes a Spanner database operation. + + :type sql: str + :param sql: A SQL query statement. + + :type args: list + :param args: Additional parameters to supplement the SQL query. + """ + self._reset() + exception = None + try: + self._parsed_statement = parse_utils.classify_statement(sql, args) + if self._parsed_statement is None: + raise ProgrammingError("Invalid Statement.") + + if self._parsed_statement.statement_type == StatementType.CLIENT_SIDE: + self._result_set = client_side_statement_executor.execute( + self, self._parsed_statement + ) + if self._result_set is not None: + if isinstance( + self._result_set, StreamedManyResultSets + ) or isinstance(self._result_set, MergedResultSet): + self._itr = self._result_set + else: + self._itr = PeekIterator(self._result_set) + elif self.connection._batch_mode == BatchMode.DML: + self.connection.execute_batch_dml_statement(self._parsed_statement) + elif self.connection.read_only or ( + not self.connection._client_transaction_started + and self._parsed_statement.statement_type == StatementType.QUERY + ): + self._handle_DQL(sql, args or None) + elif self._parsed_statement.statement_type == StatementType.DDL: + self._batch_DDLs(sql) + if not self.connection._client_transaction_started: + self.connection.run_prior_DDL_statements() + elif ( + self.connection.autocommit_dml_mode + is AutocommitDmlMode.PARTITIONED_NON_ATOMIC + ): + self._row_count = self.connection.database.execute_partitioned_dml( + sql, + params=args, + param_types=self._parsed_statement.statement.param_types, + request_options=self.request_options, + ) + self._result_set = None + else: + self._execute_in_rw_transaction() + + except (AlreadyExists, FailedPrecondition, OutOfRange) as e: + exception = e + raise IntegrityError(getattr(e, "details", e)) from e + except InvalidArgument as e: + exception = e + raise ProgrammingError(getattr(e, "details", e)) from e + except InternalServerError as e: + exception = e + raise OperationalError(getattr(e, "details", e)) from e + except Exception as e: + exception = e + raise + finally: + if not self._in_retry_mode and not call_from_execute_many: + self.transaction_helper.add_execute_statement_for_retry( + self, sql, args, exception, False + ) + if self.connection._client_transaction_started is False: + self.connection._spanner_transaction_started = False + + def _execute_in_rw_transaction(self): + # For every other operation, we've got to ensure that + # any prior DDL statements were run. 
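+        # With a client-side transaction in progress, the statement below
+        # runs on that transaction, and the whole transaction is replayed
+        # via the transaction helper whenever Spanner aborts it. In
+        # autocommit mode the statement is instead delegated to
+        # run_in_transaction, which performs its own retries.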
+        self.connection.run_prior_DDL_statements()
+        statement = self._parsed_statement.statement
+        if self.connection._client_transaction_started:
+            while True:
+                try:
+                    self._result_set = self.connection.run_statement(
+                        statement, self.request_options
+                    )
+                    self._itr = PeekIterator(self._result_set)
+                    return
+                except Aborted:
+                    # Re-raise, so that the error can be handled in
+                    # transaction_helper.py and the transaction retried.
+                    if self._in_retry_mode:
+                        raise
+                    else:
+                        self.transaction_helper.retry_transaction()
+        else:
+            self.connection.database.run_in_transaction(
+                self._do_execute_update_in_autocommit,
+                statement.sql,
+                statement.params or None,
+            )
+
+    @check_not_closed
+    def executemany(self, operation, seq_of_params):
+        """Execute the given SQL once for each parameter set
+        in the given sequence of parameters.
+
+        :type operation: str
+        :param operation: SQL code to execute.
+
+        :type seq_of_params: list
+        :param seq_of_params: Sequence of parameter sets to run
+                              the query with.
+        """
+        self._reset()
+        exception = None
+        try:
+            self._parsed_statement = parse_utils.classify_statement(operation)
+            if self._parsed_statement.statement_type == StatementType.DDL:
+                raise ProgrammingError(
+                    "Executing DDL statements with the executemany() method is not allowed."
+                )
+
+            if self._parsed_statement.statement_type == StatementType.CLIENT_SIDE:
+                raise ProgrammingError(
+                    "Executing the following operation: "
+                    + operation
+                    + ", with the executemany() method is not allowed."
+                )
+
+            # For every operation, we've got to ensure that any prior DDL
+            # statements were run.
+            self.connection.run_prior_DDL_statements()
+            # Treat UNKNOWN statements as if they are DML and let the server
+            # determine what is wrong with them.
+            if self._parsed_statement.statement_type in (
+                StatementType.INSERT,
+                StatementType.UPDATE,
+                StatementType.UNKNOWN,
+            ):
+                statements = []
+                for params in seq_of_params:
+                    sql, params = parse_utils.sql_pyformat_args_to_spanner(
+                        operation, params
+                    )
+                    statements.append(Statement(sql, params, get_param_types(params)))
+                many_result_set = batch_dml_executor.run_batch_dml(self, statements)
+            else:
+                many_result_set = StreamedManyResultSets()
+                for params in seq_of_params:
+                    self._execute(operation, params, True)
+                    many_result_set.add_iter(self._itr)
+
+            self._result_set = many_result_set
+            self._itr = many_result_set
+        except Exception as e:
+            exception = e
+            raise
+        finally:
+            if not self._in_retry_mode:
+                self.transaction_helper.add_execute_statement_for_retry(
+                    self,
+                    operation,
+                    seq_of_params,
+                    exception,
+                    True,
+                )
+            if self.connection._client_transaction_started is False:
+                self.connection._spanner_transaction_started = False
+
+    @check_not_closed
+    def fetchone(self):
+        """Fetch the next row of a query result set, returning a single
+        sequence, or None when no more data is available."""
+        rows = self._fetch(CursorStatementType.FETCH_ONE)
+        if not rows:
+            return
+        return rows[0]
+
+    @check_not_closed
+    def fetchall(self):
+        """Fetch all (remaining) rows of a query result, returning them as
+        a sequence of sequences.
+        """
+        return self._fetch(CursorStatementType.FETCH_ALL)
+
+    @check_not_closed
+    def fetchmany(self, size=None):
+        """Fetch the next set of rows of a query result, returning a sequence
+        of sequences. An empty sequence is returned when no more rows are available.
+
+        :type size: int
+        :param size: (Optional) The maximum number of results to fetch.
+
+        :raises InterfaceError:
+            if the previous call to .execute*() did not produce any result set
+            or if no call was issued yet.
+ """ + if size is None: + size = self.arraysize + return self._fetch(CursorStatementType.FETCH_MANY, size) + + def _fetch(self, cursor_statement_type, size=None): + exception = None + rows = [] + is_fetch_all = False + try: + while True: + rows = [] + try: + if cursor_statement_type == CursorStatementType.FETCH_ALL: + is_fetch_all = True + for row in self: + rows.append(row) + elif cursor_statement_type == CursorStatementType.FETCH_MANY: + for _ in range(size): + try: + row = next(self) + rows.append(row) + except StopIteration: + break + elif cursor_statement_type == CursorStatementType.FETCH_ONE: + try: + row = next(self) + rows.append(row) + except StopIteration: + return + break + except Aborted: + if not self.connection.read_only: + if self._in_retry_mode: + raise + else: + self.transaction_helper.retry_transaction() + except Exception as e: + exception = e + raise + finally: + if not self._in_retry_mode: + self.transaction_helper.add_fetch_statement_for_retry( + self, rows, exception, is_fetch_all + ) + return rows + + def _handle_DQL_with_snapshot(self, snapshot, sql, params): + self._result_set = snapshot.execute_sql( + sql, + params, + get_param_types(params), + request_options=self.request_options, + ) + # Read the first element so that the StreamedResultSet can + # return the metadata after a DQL statement. + self._itr = PeekIterator(self._result_set) + # Unfortunately, Spanner doesn't seem to send back + # information about the number of rows available. + self._row_count = None + if self._result_set.metadata.transaction.read_timestamp is not None: + snapshot._transaction_read_timestamp = ( + self._result_set.metadata.transaction.read_timestamp + ) + + def _handle_DQL(self, sql, params): + if self.connection.database is None: + raise ValueError("Database needs to be passed for this operation") + sql, params = parse_utils.sql_pyformat_args_to_spanner(sql, params) + if self.connection.read_only and self.connection._client_transaction_started: + # initiate or use the existing multi-use snapshot + self._handle_DQL_with_snapshot( + self.connection.snapshot_checkout(), sql, params + ) + else: + # execute with single-use snapshot + with self.connection.database.snapshot( + **self.connection.staleness + ) as snapshot: + self.connection._snapshot = snapshot + self.connection._transaction = None + self._handle_DQL_with_snapshot(snapshot, sql, params) + + def __enter__(self): + return self + + def __exit__(self, etype, value, traceback): + self.close() + + def __next__(self): + if self._itr is None: + raise ProgrammingError("no results to return") + return next(self._itr) + + def __iter__(self): + if self._itr is None: + raise ProgrammingError("no results to return") + return self._itr + + def list_tables(self, schema_name="", include_views=True): + """List the tables of the linked Database. + + :rtype: list + :returns: The list of tables within the Database. + """ + return self.run_sql_in_snapshot( + sql=_helpers.SQL_LIST_TABLES_AND_VIEWS + if include_views + else _helpers.SQL_LIST_TABLES, + params={"table_schema": schema_name}, + param_types={"table_schema": spanner.param_types.STRING}, + ) + + def run_sql_in_snapshot(self, sql, params=None, param_types=None): + # Some SQL e.g. for INFORMATION_SCHEMA cannot be run in read-write transactions + # hence this method exists to circumvent that limit. 
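+        # A fresh single-use snapshot (created below) runs the query outside
+        # of any read-write transaction, after flushing any pending DDL.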
+ if self.connection.database is None: + raise ValueError("Database needs to be passed for this operation") + self.connection.run_prior_DDL_statements() + + with self.connection.database.snapshot() as snapshot: + return list(snapshot.execute_sql(sql, params, param_types)) + + def get_table_column_schema(self, table_name, schema_name=""): + rows = self.run_sql_in_snapshot( + sql=_helpers.SQL_GET_TABLE_COLUMN_SCHEMA, + params={"schema_name": schema_name, "table_name": table_name}, + param_types={ + "schema_name": spanner.param_types.STRING, + "table_name": spanner.param_types.STRING, + }, + ) + + column_details = {} + for column_name, is_nullable, spanner_type in rows: + column_details[column_name] = ColumnDetails( + null_ok=is_nullable == "YES", spanner_type=spanner_type + ) + return column_details diff --git a/google/cloud/spanner_dbapi/exceptions.py b/google/cloud/spanner_dbapi/exceptions.py new file mode 100644 index 0000000000..723ee34fd2 --- /dev/null +++ b/google/cloud/spanner_dbapi/exceptions.py @@ -0,0 +1,172 @@ +# Copyright 2020 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Spanner DB API exceptions.""" + +from google.api_core.exceptions import GoogleAPICallError + + +class Warning(Exception): + """Important DB API warning.""" + + pass + + +class Error(Exception): + """The base class for all the DB API exceptions. + + Does not include :class:`Warning`. + """ + + def _is_error_cause_instance_of_google_api_exception(self): + return isinstance(self.__cause__, GoogleAPICallError) + + @property + def reason(self): + """The reason of the error. + Reference: + https://cloud.google.com/apis/design/errors#error_info + Returns: + Union[str, None]: An optional string containing reason of the error. + """ + return ( + self.__cause__.reason + if self._is_error_cause_instance_of_google_api_exception() + else None + ) + + @property + def domain(self): + """The logical grouping to which the "reason" belongs. + Reference: + https://cloud.google.com/apis/design/errors#error_info + Returns: + Union[str, None]: An optional string containing a logical grouping to which the "reason" belongs. + """ + return ( + self.__cause__.domain + if self._is_error_cause_instance_of_google_api_exception() + else None + ) + + @property + def metadata(self): + """Additional structured details about this error. + Reference: + https://cloud.google.com/apis/design/errors#error_info + Returns: + Union[Dict[str, str], None]: An optional object containing structured details about the error. + """ + return ( + self.__cause__.metadata + if self._is_error_cause_instance_of_google_api_exception() + else None + ) + + @property + def details(self): + """Information contained in google.rpc.status.details. 
+        Reference:
+        https://cloud.google.com/apis/design/errors#error_model
+        https://cloud.google.com/apis/design/errors#error_details
+        Returns:
+            Sequence[Any]: A list of structured objects from error_details.proto
+        """
+        return (
+            self.__cause__.details
+            if self._is_error_cause_instance_of_google_api_exception()
+            else None
+        )
+
+
+class InterfaceError(Error):
+    """
+    Error related to the database interface
+    rather than the database itself.
+    """
+
+    pass
+
+
+class DatabaseError(Error):
+    """Error related to the database."""
+
+    pass
+
+
+class DataError(DatabaseError):
+    """
+    Error due to problems with the processed data, such as
+    division by zero or a numeric value out of range.
+    """
+
+    pass
+
+
+class OperationalError(DatabaseError):
+    """
+    Error related to the database's operation, e.g. an
+    unexpected disconnect, the data source name is not
+    found, a transaction could not be processed, a
+    memory allocation error, etc.
+    """
+
+    pass
+
+
+class IntegrityError(DatabaseError):
+    """
+    Error raised when the relational integrity of the
+    database is affected, e.g. a foreign key check fails.
+    """
+
+    pass
+
+
+class InternalError(DatabaseError):
+    """
+    Internal database error, e.g. the cursor is not valid
+    anymore, the transaction is out of sync, etc.
+    """
+
+    pass
+
+
+class ProgrammingError(DatabaseError):
+    """
+    Programming error, e.g. table not found or already
+    exists, syntax error in the SQL statement, wrong
+    number of parameters specified, etc.
+    """
+
+    pass
+
+
+class NotSupportedError(DatabaseError):
+    """
+    Error raised when a method or database API that is
+    not supported by the database was used.
+    """
+
+    pass
+
+
+class RetryAborted(OperationalError):
+    """
+    Error raised when an aborted transaction cannot be
+    retried, because the underlying data was changed
+    during the retry.
+    """
+
+    pass
diff --git a/google/cloud/spanner_dbapi/parse_utils.py b/google/cloud/spanner_dbapi/parse_utils.py
new file mode 100644
index 0000000000..66741eb264
--- /dev/null
+++ b/google/cloud/spanner_dbapi/parse_utils.py
@@ -0,0 +1,392 @@
+# Copyright 2020 Google LLC All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"SQL parsing and classification utils."
+
+import datetime
+import decimal
+import re
+import warnings
+
+import sqlparse
+from google.cloud import spanner_v1 as spanner
+from google.cloud.spanner_v1 import JsonObject
+from . import client_side_statement_parser
+
+from .exceptions import Error
+from .parsed_statement import ParsedStatement, StatementType, Statement
+from .types import DateStr, TimestampStr
+from .utils import sanitize_literals_for_upload
+
+# Note: This mapping deliberately does not contain a value for float.
+# It is better to let Spanner determine the parameter type instead of
+# specifying one explicitly, because if the client specifies FLOAT64, and
+# the actual column that the parameter is used for is of type FLOAT32,
+# then Spanner will return an
+# error.
If however the client does not specify a type, then Spanner will +# automatically choose the appropriate type based on the column where the +# value will be inserted/updated or that it will be compared with. +TYPES_MAP = { + bool: spanner.param_types.BOOL, + bytes: spanner.param_types.BYTES, + str: spanner.param_types.STRING, + int: spanner.param_types.INT64, + datetime.datetime: spanner.param_types.TIMESTAMP, + datetime.date: spanner.param_types.DATE, + DateStr: spanner.param_types.DATE, + TimestampStr: spanner.param_types.TIMESTAMP, + decimal.Decimal: spanner.param_types.NUMERIC, + JsonObject: spanner.param_types.JSON, +} + +SPANNER_RESERVED_KEYWORDS = { + "ALL", + "AND", + "ANY", + "ARRAY", + "AS", + "ASC", + "ASSERT_ROWS_MODIFIED", + "AT", + "BETWEEN", + "BY", + "CASE", + "CAST", + "COLLATE", + "CONTAINS", + "CREATE", + "CROSS", + "CUBE", + "CURRENT", + "DEFAULT", + "DEFINE", + "DESC", + "DISTINCT", + "DROP", + "ELSE", + "END", + "ENUM", + "ESCAPE", + "EXCEPT", + "EXCLUDE", + "EXISTS", + "EXTRACT", + "FALSE", + "FETCH", + "FOLLOWING", + "FOR", + "FROM", + "FULL", + "GROUP", + "GROUPING", + "GROUPS", + "HASH", + "HAVING", + "IF", + "IGNORE", + "IN", + "INNER", + "INTERSECT", + "INTERVAL", + "INTO", + "IS", + "JOIN", + "LATERAL", + "LEFT", + "LIKE", + "LIMIT", + "LOOKUP", + "MERGE", + "NATURAL", + "NEW", + "NO", + "NOT", + "NULL", + "NULLS", + "OF", + "ON", + "OR", + "ORDER", + "OUTER", + "OVER", + "PARTITION", + "PRECEDING", + "PROTO", + "RANGE", + "RECURSIVE", + "RESPECT", + "RIGHT", + "ROLLUP", + "ROWS", + "SELECT", + "SET", + "SOME", + "STRUCT", + "TABLESAMPLE", + "THEN", + "TO", + "TREAT", + "TRUE", + "UNBOUNDED", + "UNION", + "UNNEST", + "USING", + "WHEN", + "WHERE", + "WINDOW", + "WITH", + "WITHIN", +} + +STMT_DDL = "DDL" +STMT_NON_UPDATING = "NON_UPDATING" +STMT_UPDATING = "UPDATING" +STMT_INSERT = "INSERT" + +# Heuristic for identifying statements that don't need to be run as updates. +# TODO: This and the other regexes do not match statements that start with a hint. +RE_NON_UPDATE = re.compile(r"^\W*(SELECT|GRAPH|FROM)", re.IGNORECASE) + +RE_WITH = re.compile(r"^\s*(WITH)", re.IGNORECASE) + +# DDL statements follow +# https://cloud.google.com/spanner/docs/data-definition-language +RE_DDL = re.compile( + r"^\s*(CREATE|ALTER|DROP|GRANT|REVOKE|RENAME|ANALYZE)", re.IGNORECASE | re.DOTALL +) + +# TODO: These do not match statements that start with a hint. +RE_IS_INSERT = re.compile(r"^\s*(INSERT\s+)", re.IGNORECASE | re.DOTALL) +RE_IS_UPDATE = re.compile(r"^\s*(UPDATE\s+)", re.IGNORECASE | re.DOTALL) +RE_IS_DELETE = re.compile(r"^\s*(DELETE\s+)", re.IGNORECASE | re.DOTALL) + +RE_INSERT = re.compile( + # Only match the `INSERT INTO (columns...) + # otherwise the rest of the statement could be a complex + # operation. + r"^\s*INSERT(?:\s+INTO)?\s+(?P[^\s\(\)]+)\s*\((?P[^\(\)]+)\)", + re.IGNORECASE | re.DOTALL, +) +"""Deprecated: Use the RE_IS_INSERT, RE_IS_UPDATE, and RE_IS_DELETE regexes""" + +RE_VALUES_TILL_END = re.compile(r"VALUES\s*\(.+$", re.IGNORECASE | re.DOTALL) + +RE_VALUES_PYFORMAT = re.compile( + # To match: (%s, %s,....%s) + r"(\(\s*%s[^\(\)]+\))", + re.DOTALL, +) + +RE_PYFORMAT = re.compile(r"(%s|%\([^\(\)]+\)s)+", re.DOTALL) + + +def classify_stmt(query): + """Determine SQL query type. + :type query: str + :param query: A SQL query. + :rtype: str + :returns: The query type name. + """ + warnings.warn( + "This method is deprecated. 
Use the classify_statement method instead.", DeprecationWarning
+    )
+
+    # sqlparse will strip Cloud Spanner comments; however, special
+    # commenting styles, like PostgreSQL dollar-quoted comments, are
+    # not supported and will not be stripped.
+    query = sqlparse.format(query, strip_comments=True).strip()
+
+    if RE_DDL.match(query):
+        return STMT_DDL
+
+    if RE_IS_INSERT.match(query):
+        return STMT_INSERT
+
+    if RE_NON_UPDATE.match(query) or RE_WITH.match(query):
+        # As of 13-March-2020, Cloud Spanner only supports WITH for DQL
+        # statements and doesn't yet support WITH for DML statements.
+        return STMT_NON_UPDATING
+
+    return STMT_UPDATING
+
+
+def classify_statement(query, args=None):
+    """Determine SQL query type.
+
+    This is an internal method and may change in backwards-incompatible ways.
+
+    :type query: str
+    :param query: A SQL query.
+
+    :rtype: ParsedStatement
+    :returns: The parsed statement attributes.
+    """
+    # sqlparse will strip Cloud Spanner comments; however, special
+    # commenting styles, like PostgreSQL dollar-quoted comments, are
+    # not supported and will not be stripped.
+    query = sqlparse.format(query, strip_comments=True).strip()
+    if query == "":
+        return None
+    parsed_statement: ParsedStatement = client_side_statement_parser.parse_stmt(query)
+    if parsed_statement is not None:
+        return parsed_statement
+    query, args = sql_pyformat_args_to_spanner(query, args or None)
+    statement = Statement(
+        query,
+        args,
+        get_param_types(args or None),
+    )
+    statement_type = _get_statement_type(statement)
+    return ParsedStatement(statement_type, statement)
+
+
+def _get_statement_type(statement):
+    query = statement.sql
+    if RE_DDL.match(query):
+        return StatementType.DDL
+    if RE_IS_INSERT.match(query):
+        return StatementType.INSERT
+    if RE_NON_UPDATE.match(query) or RE_WITH.match(query):
+        # As of 13-March-2020, Cloud Spanner only supports WITH for DQL
+        # statements and doesn't yet support WITH for DML statements.
+        return StatementType.QUERY
+
+    if RE_IS_UPDATE.match(query) or RE_IS_DELETE.match(query):
+        # TODO: Remove this? It makes more sense to have this in SQLAlchemy and
+        # Django than here.
+        statement.sql = ensure_where_clause(query)
+        return StatementType.UPDATE
+
+    return StatementType.UNKNOWN
+
+
+def sql_pyformat_args_to_spanner(sql, params):
+    """
+    Transform pyformat-style SQL to named arguments for Cloud Spanner.
+    It will also unescape previously escaped format specifiers
+    like %%s to %s.
+    For example:
+        SQL:      'SELECT * from t where f1=%s, f2=%s, f3=%s'
+        Params:   ('a', 23, '888***')
+    becomes:
+        SQL:      'SELECT * from t where f1=@a0, f2=@a1, f3=@a2'
+        Params:   {'a0': 'a', 'a1': 23, 'a2': '888***'}
+
+    OR
+        SQL:      'SELECT * from t where f1=%(f1)s, f2=%(f2)s, f3=%(f3)s'
+        Params:   {'f1': 'a', 'f2': 23, 'f3': '888***', 'extra': 'aye'}
+    becomes:
+        SQL:      'SELECT * from t where f1=@a0, f2=@a1, f3=@a2'
+        Params:   {'a0': 'a', 'a1': 23, 'a2': '888***'}
+
+    :type sql: str
+    :param sql: A SQL request.
+
+    :type params: list or dict
+    :param params: The parameters for the SQL request.
+
+    :rtype: tuple(str, dict)
+    :returns: A tuple of the sanitized SQL and a dictionary of the named
+              arguments.
+ """ + if not params: + return sanitize_literals_for_upload(sql), None + + found_pyformat_placeholders = RE_PYFORMAT.findall(sql) + params_is_dict = isinstance(params, dict) + + if params_is_dict: + if not found_pyformat_placeholders: + return sanitize_literals_for_upload(sql), params + else: + n_params = len(params) if params else 0 + n_matches = len(found_pyformat_placeholders) + if n_matches != n_params: + raise Error( + "pyformat_args mismatch\ngot %d args from %s\n" + "want %d args in %s" + % (n_matches, found_pyformat_placeholders, n_params, params) + ) + + named_args = {} + # We've now got for example: + # Case a) Params is a non-dict + # SQL: 'SELECT * from t where f1=%s, f2=%s, f3=%s' + # Params: ('a', 23, '888***') + # Case b) Params is a dict and the matches are %(value)s' + for i, pyfmt in enumerate(found_pyformat_placeholders): + key = "a%d" % i + sql = sql.replace(pyfmt, "@" + key, 1) + if params_is_dict: + # The '%(key)s' case, so interpolate it. + resolved_value = pyfmt % params + named_args[key] = resolved_value + else: + named_args[key] = params[i] + + return sanitize_literals_for_upload(sql), named_args + + +def get_param_types(params): + """Determine Cloud Spanner types for the given parameters. + + :type params: dict + :param params: Parameters requiring to find Cloud Spanner types. + + :rtype: dict + :returns: The types index for the given parameters. + """ + if params is None: + return + + param_types = {} + + for key, value in params.items(): + type_ = type(value) + if type_ in TYPES_MAP: + param_types[key] = TYPES_MAP[type_] + + return param_types + + +def ensure_where_clause(sql): + """ + Cloud Spanner requires a WHERE clause on UPDATE and DELETE statements. + Add a dummy WHERE clause if not detected. + + :type sql: str + :param sql: SQL code to check. + """ + if any(isinstance(token, sqlparse.sql.Where) for token in sqlparse.parse(sql)[0]): + return sql + + return sql + " WHERE 1=1" + + +def escape_name(name): + """ + Apply backticks to the name that either contain '-' or + ' ', or is a Cloud Spanner's reserved keyword. + + :type name: str + :param name: Name to escape. + + :rtype: str + :returns: Name escaped if it has to be escaped. + """ + if "-" in name or " " in name or name.upper() in SPANNER_RESERVED_KEYWORDS: + return "`" + name + "`" + return name diff --git a/google/cloud/spanner_dbapi/parsed_statement.py b/google/cloud/spanner_dbapi/parsed_statement.py new file mode 100644 index 0000000000..a8d03f6fa4 --- /dev/null +++ b/google/cloud/spanner_dbapi/parsed_statement.py @@ -0,0 +1,63 @@ +# Copyright 2023 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
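+
+# A rough sketch of how these dataclasses are produced by
+# parse_utils.classify_statement; the values shown are illustrative:
+#
+#     parsed = classify_statement("SELECT 1")
+#     # -> ParsedStatement(
+#     #        statement_type=StatementType.QUERY,
+#     #        statement=Statement(sql="SELECT 1", params=None,
+#     #                            param_types=None),
+#     #    )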
+from dataclasses import dataclass +from enum import Enum +from typing import Any, List + + +class StatementType(Enum): + UNKNOWN = 0 + CLIENT_SIDE = 1 + DDL = 2 + QUERY = 3 + UPDATE = 4 + INSERT = 5 + + +class ClientSideStatementType(Enum): + COMMIT = 1 + BEGIN = 2 + ROLLBACK = 3 + SHOW_COMMIT_TIMESTAMP = 4 + SHOW_READ_TIMESTAMP = 5 + START_BATCH_DML = 6 + RUN_BATCH = 7 + ABORT_BATCH = 8 + PARTITION_QUERY = 9 + RUN_PARTITION = 10 + RUN_PARTITIONED_QUERY = 11 + SET_AUTOCOMMIT_DML_MODE = 12 + + +class AutocommitDmlMode(Enum): + TRANSACTIONAL = 1 + PARTITIONED_NON_ATOMIC = 2 + + +@dataclass +class Statement: + sql: str + params: Any = None + param_types: Any = None + + def get_tuple(self): + return self.sql, self.params, self.param_types + + +@dataclass +class ParsedStatement: + statement_type: StatementType + statement: Statement + client_side_statement_type: ClientSideStatementType = None + client_side_statement_params: List[Any] = None diff --git a/google/cloud/spanner_dbapi/parser.py b/google/cloud/spanner_dbapi/parser.py new file mode 100644 index 0000000000..f5c1d0edf7 --- /dev/null +++ b/google/cloud/spanner_dbapi/parser.py @@ -0,0 +1,258 @@ +# Copyright 2020 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +""" +Grammar for parsing VALUES: + VALUES := `VALUES(` + ARGS + `)` + ARGS := [EXPR,]*EXPR + EXPR := TERMINAL / FUNC + TERMINAL := `%s` + FUNC := alphanum + `(` + ARGS + `)` + alphanum := (a-zA-Z_)[0-9a-ZA-Z_]* + +thus given: + statement: 'VALUES (%s, %s), (%s, LOWER(UPPER(%s))) , (%s)' + It'll parse: + VALUES + |- ARGS + |- (TERMINAL, TERMINAL) + |- (TERMINAL, FUNC + |- FUNC + |- (TERMINAL) + |- (TERMINAL) +""" + +from .exceptions import ProgrammingError + +ARGS = "ARGS" +FUNC = "FUNC" +VALUES = "VALUES" + + +class func(object): + def __init__(self, func_name, args): + self.name = func_name + self.args = args + + def __str__(self): + return "%s%s" % (self.name, self.args) + + def __repr__(self): + return self.__str__() + + def __eq__(self, other): + if type(self) is not type(other): + return False + if self.name != other.name: + return False + if not isinstance(other.args, type(self.args)): + return False + if len(self.args) != len(other.args): + return False + return self.args == other.args + + def __len__(self): + return len(self.args) + + +class terminal(str): + """Represent the unit symbol that can be part of a SQL values clause.""" + + pass + + +class a_args(object): + """Expression arguments. + + :type argv: list + :param argv: A List of expression arguments. 
+    """
+
+    def __init__(self, argv):
+        self.argv = argv
+
+    def __str__(self):
+        return "(" + ", ".join([str(arg) for arg in self.argv]) + ")"
+
+    def __repr__(self):
+        return self.__str__()
+
+    def has_expr(self):
+        return any([token for token in self.argv if not isinstance(token, terminal)])
+
+    def __len__(self):
+        return len(self.argv)
+
+    def __eq__(self, other):
+        if type(self) is not type(other):
+            return False
+
+        if len(self) != len(other):
+            return False
+
+        for i, item in enumerate(self):
+            if item != other[i]:
+                return False
+
+        return True
+
+    def __getitem__(self, index):
+        return self.argv[index]
+
+    def homogenous(self):
+        """Check whether the arguments of the expression are homogeneous.
+
+        :rtype: bool
+        :return: True if all the arguments of the expression are in pyformat
+                 and each has the same length, False otherwise.
+        """
+        if not self._is_equal_length():
+            return False
+
+        for arg in self.argv:
+            if isinstance(arg, terminal):
+                continue
+            elif isinstance(arg, a_args):
+                if not arg.homogenous():
+                    return False
+            else:
+                return False
+        return True
+
+    def _is_equal_length(self):
+        """Return True if all the arguments have the same length.
+
+        :rtype: bool
+        :return: True if the sequences of the arguments all have the same
+                 length, False otherwise.
+        """
+        if len(self) == 0:
+            return True
+
+        arg0_len = len(self.argv[0])
+        for arg in self.argv[1:]:
+            if len(arg) != arg0_len:
+                return False
+
+        return True
+
+
+class values(a_args):
+    """A wrapper for a parsed VALUES clause; rendered as the values
+    expression in a tree view.
+    """
+
+    def __str__(self):
+        return "VALUES%s" % super().__str__()
+
+
+pyfmt_str = terminal("%s")
+
+
+def expect(word, token):
+    """Parse the given expression recursively.
+
+    :type word: str
+    :param word: A string expression.
+
+    :type token: str
+    :param token: An expression token.
+
+    :rtype: `Tuple(str, Any)`
+    :returns: A tuple containing the rest of the expression string and the
+        parse tree for the part of the expression that has already been
+        parsed.
+
+    :raises :class:`ProgrammingError`: If there is a parsing error.
+    """
+    word = word.strip()
+    if token == VALUES:
+        if not word.startswith("VALUES"):
+            raise ProgrammingError("VALUES: `%s` does not start with VALUES" % word)
+
+        word = word[len("VALUES") :].lstrip()
+
+        all_args = []
+        while word:
+            word = word.strip()
+
+            word, arg = expect(word, ARGS)
+            all_args.append(arg)
+            word = word.strip()
+
+            if word and not word.startswith(","):
+                raise ProgrammingError(
+                    "VALUES: expected `,` got %s in %s" % (word[0], word)
+                )
+            word = word[1:]
+        return "", values(all_args)
+
+    elif token == FUNC:
+        begins_with_letter = word and (word[0].isalpha() or word[0] == "_")
+        if not begins_with_letter:
+            raise ProgrammingError(
+                "FUNC: `%s` does not begin with `a-zA-Z` nor a `_`" % word
+            )
+
+        rest = word[1:]
+        end = 0
+        for ch in rest:
+            if ch.isalnum() or ch == "_":
+                end += 1
+            else:
+                break
+
+        func_name, rest = word[: end + 1], word[end + 1 :].strip()
+
+        word, args = expect(rest, ARGS)
+        return word, func(func_name, args)
+
+    elif token == ARGS:
+        # The form should be one of:
+        # (%s)
+        # (%s, %s...)
+        # (FUNC, %s...)
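+        # For example, expect("(%s, LOWER(%s)) rest", ARGS) consumes the
+        # parenthesized list and returns (" rest", a_args([...])), where
+        # the a_args tree holds a terminal and a func node for LOWER.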
+ if not (word and word.startswith("(")): + raise ProgrammingError("ARGS: supposed to begin with `(` in `%s`" % word) + + word = word[1:] + + terms = [] + while True: + word = word.strip() + if not word or word.startswith(")"): + break + + if word == "%s": + terms.append(pyfmt_str) + word = "" + elif not word.startswith("%s"): + word, parsed = expect(word, FUNC) + terms.append(parsed) + else: + terms.append(pyfmt_str) + word = word[2:].strip() + + if word.startswith(","): + word = word[1:] + + if not (word and word.startswith(")")): + raise ProgrammingError("ARGS: supposed to end with `)` in `%s`" % word) + + word = word[1:] + return word, a_args(terms) + + raise ProgrammingError("Unknown token `%s`" % token) diff --git a/google/cloud/spanner_dbapi/partition_helper.py b/google/cloud/spanner_dbapi/partition_helper.py new file mode 100644 index 0000000000..a130e29721 --- /dev/null +++ b/google/cloud/spanner_dbapi/partition_helper.py @@ -0,0 +1,41 @@ +# Copyright 2023 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from dataclasses import dataclass +from typing import Any + +import gzip +import pickle +import base64 + +from google.cloud.spanner_v1 import BatchTransactionId + + +def decode_from_string(encoded_partition_id): + gzip_bytes = base64.b64decode(bytes(encoded_partition_id, "utf-8")) + partition_id_bytes = gzip.decompress(gzip_bytes) + return pickle.loads(partition_id_bytes) + + +def encode_to_string(batch_transaction_id, partition_result): + partition_id = PartitionId(batch_transaction_id, partition_result) + partition_id_bytes = pickle.dumps(partition_id) + gzip_bytes = gzip.compress(partition_id_bytes) + return str(base64.b64encode(gzip_bytes), "utf-8") + + +@dataclass +class PartitionId: + batch_transaction_id: BatchTransactionId + partition_result: Any diff --git a/google/cloud/spanner_dbapi/transaction_helper.py b/google/cloud/spanner_dbapi/transaction_helper.py new file mode 100644 index 0000000000..744aeb7b43 --- /dev/null +++ b/google/cloud/spanner_dbapi/transaction_helper.py @@ -0,0 +1,294 @@ +# Copyright 2023 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
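+
+# A rough sketch of the retry flow implemented below; the query is
+# illustrative and the table name is hypothetical:
+#
+#     cursor.execute("SELECT * FROM Singers")
+#     cursor.fetchall()
+#     # Both calls are recorded via add_execute_statement_for_retry and
+#     # add_fetch_statement_for_retry. If Spanner aborts the transaction,
+#     # retry_transaction() replays the recorded statements on a fresh
+#     # transaction and compares row counts / result checksums with the
+#     # original ones, raising RetryAborted on a mismatch.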
+from dataclasses import dataclass
+from enum import Enum
+from typing import TYPE_CHECKING, List, Any, Dict
+from google.api_core.exceptions import Aborted
+
+import time
+
+from google.cloud.spanner_dbapi.batch_dml_executor import BatchMode
+from google.cloud.spanner_dbapi.exceptions import RetryAborted
+from google.cloud.spanner_v1._helpers import _get_retry_delay
+
+if TYPE_CHECKING:
+    from google.cloud.spanner_dbapi import Connection, Cursor
+from google.cloud.spanner_dbapi.checksum import ResultsChecksum, _compare_checksums
+
+MAX_INTERNAL_RETRIES = 50
+RETRY_ABORTED_ERROR = "The transaction was aborted and could not be retried due to a concurrent modification."
+
+
+class TransactionRetryHelper:
+    def __init__(self, connection: "Connection"):
+        """Helper class used to retry a transaction when it is aborted. It
+        maintains all the statements executed on the original transaction
+        and replays them in the retried transaction.
+
+        :type connection: :class:`~google.cloud.spanner_dbapi.connection.Connection`
+        :param connection: A DB-API connection to Google Cloud Spanner.
+        """
+
+        self._connection = connection
+        # List of all statements, in the same order as executed in the
+        # original transaction, along with their results.
+        self._statement_result_details_list: List[StatementDetails] = []
+        # Map of the last StatementDetails added for a particular cursor.
+        self._last_statement_details_per_cursor: Dict[Cursor, StatementDetails] = {}
+        # 1-1 map from the original cursor object on which the transaction
+        # ran to the new cursor object used in the retry.
+        self._cursor_map: Dict[Cursor, Cursor] = {}
+
+    def _set_connection_for_retry(self):
+        self._connection._spanner_transaction_started = False
+        self._connection._transaction_begin_marked = False
+        self._connection._batch_mode = BatchMode.NONE
+
+    def reset(self):
+        """
+        Resets the state of the class when the ongoing transaction is
+        committed or aborted.
+        """
+        self._statement_result_details_list = []
+        self._last_statement_details_per_cursor = {}
+        self._cursor_map = {}
+
+    def add_fetch_statement_for_retry(
+        self, cursor, result_rows, exception, is_fetch_all
+    ):
+        """
+        Adds a StatementDetails entry to _statement_result_details_list
+        whenever the fetchone, fetchmany or fetchall method is called on
+        the cursor.
+        If fetchone is consecutively called n times then it is stored as fetchmany with size as n.
+ Same for fetchmany, so consecutive fetchone and fetchmany statements are stored as one + fetchmany statement in _statement_result_details_list with size param appropriately set + + :param cursor: original Cursor object on which statement executed in the transaction + :param result_rows: All the rows from the resultSet from fetch statement execution + :param exception: Not none in case non-aborted exception is thrown on the original + statement execution + :param is_fetch_all: True in case of fetchall statement execution + """ + if not self._connection._client_transaction_started: + return + + last_statement_result_details = self._last_statement_details_per_cursor.get( + cursor + ) + if ( + last_statement_result_details is not None + and last_statement_result_details.statement_type + == CursorStatementType.FETCH_MANY + ): + if exception is not None: + last_statement_result_details.result_type = ResultType.EXCEPTION + last_statement_result_details.result_details = exception + else: + for row in result_rows: + last_statement_result_details.result_details.consume_result(row) + last_statement_result_details.size += len(result_rows) + else: + result_details = _get_statement_result_checksum(result_rows) + if is_fetch_all: + statement_type = CursorStatementType.FETCH_ALL + size = None + else: + statement_type = CursorStatementType.FETCH_MANY + size = len(result_rows) + + last_statement_result_details = FetchStatement( + cursor=cursor, + statement_type=statement_type, + result_type=ResultType.CHECKSUM, + result_details=result_details, + size=size, + ) + self._last_statement_details_per_cursor[ + cursor + ] = last_statement_result_details + self._statement_result_details_list.append(last_statement_result_details) + + def add_execute_statement_for_retry( + self, cursor, sql, args, exception, is_execute_many + ): + """ + StatementDetails to be added to _statement_result_details_list whenever execute or + executemany method is called on the cursor. + + :param cursor: original Cursor object on which statement executed in the transaction + :param sql: Input param of the execute/executemany method + :param args: Input param of the execute/executemany method + :param exception: Not none in case non-aborted exception is thrown on the original + statement execution + :param is_execute_many: True in case of executemany statement execution + """ + if not self._connection._client_transaction_started: + return + statement_type = CursorStatementType.EXECUTE + if is_execute_many: + statement_type = CursorStatementType.EXECUTE_MANY + + result_type = ResultType.NONE + result_details = None + if exception is not None: + result_type = ResultType.EXCEPTION + result_details = exception + elif cursor._batch_dml_rows_count is not None: + result_type = ResultType.BATCH_DML_ROWS_COUNT + result_details = cursor._batch_dml_rows_count + elif cursor._row_count is not None: + result_type = ResultType.ROW_COUNT + result_details = cursor.rowcount + + last_statement_result_details = ExecuteStatement( + cursor=cursor, + statement_type=statement_type, + sql=sql, + args=args, + result_type=result_type, + result_details=result_details, + ) + self._last_statement_details_per_cursor[cursor] = last_statement_result_details + self._statement_result_details_list.append(last_statement_result_details) + + def retry_transaction(self, default_retry_delay=None): + """Retry the aborted transaction. + + All the statements executed in the original transaction + will be re-executed in new one. 
Results checksums of the + original statements and the retried ones will be compared. + + :raises: :class:`google.cloud.spanner_dbapi.exceptions.RetryAborted` + If results checksum of the retried statement is + not equal to the checksum of the original one. + """ + attempt = 0 + while True: + attempt += 1 + if attempt > MAX_INTERNAL_RETRIES: + raise + self._set_connection_for_retry() + try: + for statement_result_details in self._statement_result_details_list: + if statement_result_details.cursor in self._cursor_map: + cursor = self._cursor_map.get(statement_result_details.cursor) + else: + cursor = self._connection.cursor() + cursor._in_retry_mode = True + self._cursor_map[statement_result_details.cursor] = cursor + try: + _handle_statement(statement_result_details, cursor) + except Aborted: + raise + except RetryAborted: + raise + except Exception as ex: + if ( + type(statement_result_details.result_details) + is not type(ex) + or ex.args != statement_result_details.result_details.args + ): + raise RetryAborted(RETRY_ABORTED_ERROR, ex) + return + except Aborted as ex: + delay = _get_retry_delay( + ex.errors[0], attempt, default_retry_delay=default_retry_delay + ) + if delay: + time.sleep(delay) + + +def _handle_statement(statement_result_details, cursor): + statement_type = statement_result_details.statement_type + if _is_execute_type_statement(statement_type): + if statement_type == CursorStatementType.EXECUTE: + cursor.execute(statement_result_details.sql, statement_result_details.args) + if ( + statement_result_details.result_type == ResultType.ROW_COUNT + and statement_result_details.result_details != cursor.rowcount + ): + raise RetryAborted(RETRY_ABORTED_ERROR) + else: + cursor.executemany( + statement_result_details.sql, statement_result_details.args + ) + if ( + statement_result_details.result_type == ResultType.BATCH_DML_ROWS_COUNT + and statement_result_details.result_details != cursor._batch_dml_rows_count + ): + raise RetryAborted(RETRY_ABORTED_ERROR) + else: + if statement_type == CursorStatementType.FETCH_ALL: + res = cursor.fetchall() + else: + res = cursor.fetchmany(statement_result_details.size) + checksum = _get_statement_result_checksum(res) + _compare_checksums(checksum, statement_result_details.result_details) + if statement_result_details.result_type == ResultType.EXCEPTION: + raise RetryAborted(RETRY_ABORTED_ERROR) + + +def _is_execute_type_statement(statement_type): + return statement_type in ( + CursorStatementType.EXECUTE, + CursorStatementType.EXECUTE_MANY, + ) + + +def _get_statement_result_checksum(res_iter): + retried_checksum = ResultsChecksum() + for res in res_iter: + retried_checksum.consume_result(res) + return retried_checksum + + +class CursorStatementType(Enum): + EXECUTE = 1 + EXECUTE_MANY = 2 + FETCH_ONE = 3 + FETCH_ALL = 4 + FETCH_MANY = 5 + + +class ResultType(Enum): + # checksum of ResultSet in case of fetch call on query statement + CHECKSUM = 1 + # None in case of execute call on query statement + NONE = 2 + # Exception details in case of any statement execution throws exception + EXCEPTION = 3 + # Total rows updated in case of execute call on DML statement + ROW_COUNT = 4 + # Total rows updated in case of Batch DML statement execution + BATCH_DML_ROWS_COUNT = 5 + + +@dataclass +class StatementDetails: + statement_type: CursorStatementType + # The cursor object on which this statement was executed + cursor: "Cursor" + result_type: ResultType + result_details: Any + + +@dataclass +class ExecuteStatement(StatementDetails): + sql: str + args: Any 
= None + + +@dataclass +class FetchStatement(StatementDetails): + size: int = None diff --git a/google/cloud/spanner_dbapi/types.py b/google/cloud/spanner_dbapi/types.py new file mode 100644 index 0000000000..363accdfa2 --- /dev/null +++ b/google/cloud/spanner_dbapi/types.py @@ -0,0 +1,106 @@ +# Copyright 2020 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Implementation of the type objects and constructors according to the + PEP-0249 specification. + + See + https://www.python.org/dev/peps/pep-0249/#type-objects-and-constructors +""" + +import datetime +import time +from base64 import b64encode + + +def _date_from_ticks(ticks): + """Based on PEP-249 Implementation Hints for Module Authors: + + https://www.python.org/dev/peps/pep-0249/#implementation-hints-for-module-authors + """ + return Date(*time.localtime(ticks)[:3]) + + +def _time_from_ticks(ticks): + """Based on PEP-249 Implementation Hints for Module Authors: + + https://www.python.org/dev/peps/pep-0249/#implementation-hints-for-module-authors + """ + return Time(*time.localtime(ticks)[3:6]) + + +def _timestamp_from_ticks(ticks): + """Based on PEP-249 Implementation Hints for Module Authors: + + https://www.python.org/dev/peps/pep-0249/#implementation-hints-for-module-authors + """ + return Timestamp(*time.localtime(ticks)[:6]) + + +class _DBAPITypeObject(object): + """Implementation of a helper class used for type comparison among similar + but possibly different types. + + See + https://www.python.org/dev/peps/pep-0249/#implementation-hints-for-module-authors + """ + + def __init__(self, *values): + self.values = values + + def __eq__(self, other): + return other in self.values + + +Date = datetime.date +Time = datetime.time +Timestamp = datetime.datetime +DateFromTicks = _date_from_ticks +TimeFromTicks = _time_from_ticks +TimestampFromTicks = _timestamp_from_ticks +Binary = b64encode + +STRING = "STRING" +BINARY = _DBAPITypeObject("TYPE_CODE_UNSPECIFIED", "BYTES", "ARRAY", "STRUCT") +NUMBER = _DBAPITypeObject("BOOL", "INT64", "FLOAT64", "FLOAT32", "NUMERIC") +DATETIME = _DBAPITypeObject("TIMESTAMP", "DATE") +ROWID = "STRING" + + +class TimestampStr(str): + """[inherited from the alpha release] + + TODO: Decide whether this class is necessary + + TimestampStr exists so that we can purposefully format types as timestamps + compatible with Cloud Spanner's TIMESTAMP type, but right before making + queries, it'll help differentiate between normal strings and the case of + types that should be TIMESTAMP. + """ + + pass + + +class DateStr(str): + """[inherited from the alpha release] + + TODO: Decide whether this class is necessary + + DateStr is a sentinel type to help format Django dates as + compatible with Cloud Spanner's DATE type, but right before making + queries, it'll help differentiate between normal strings and the case of + types that should be DATE. 
+ """ + + pass diff --git a/google/cloud/spanner_dbapi/utils.py b/google/cloud/spanner_dbapi/utils.py new file mode 100644 index 0000000000..bfb97346cf --- /dev/null +++ b/google/cloud/spanner_dbapi/utils.py @@ -0,0 +1,147 @@ +# Copyright 2020 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import re + +re_UNICODE_POINTS = re.compile(r"([^\s]*[\u0080-\uFFFF]+[^\s]*)") + + +class PeekIterator: + """ + Peek at the first element out of an iterator for the sake of operations + like auto-population of fields on reading the first element. + If next's result is an instance of list, it'll be converted into a tuple to + conform with DBAPI v2's sequence expectations. + + :type source: list + :param source: A list of source for the Iterator. + """ + + def __init__(self, source): + itr_src = iter(source) + + self.__iters = [] + self.__index = 0 + + try: + head = next(itr_src) + # Restitch and prepare to read from multiple iterators. + self.__iters = [iter(itr) for itr in [[head], itr_src]] + except StopIteration: + pass + + def __next__(self): + if self.__index >= len(self.__iters): + raise StopIteration + + iterator = self.__iters[self.__index] + try: + head = next(iterator) + except StopIteration: + # That iterator has been exhausted, try with the next one. + self.__index += 1 + return self.__next__() + else: + return tuple(head) if isinstance(head, list) else head + + def __iter__(self): + return self + + +class StreamedManyResultSets: + """Iterator to walk through several `StreamedResultsSet` iterators. + This type of iterator is used by `Cursor.executemany()` + method to iterate through several `StreamedResultsSet` + iterators like they all are merged into single iterator. + """ + + def __init__(self): + self._iterators = [] + self._index = 0 + + def add_iter(self, iterator): + """Add new iterator into this one. + :type iterator: :class:`google.cloud.spanner_v1.streamed.StreamedResultSet` + :param iterator: Iterator to merge into this one. + """ + self._iterators.append(iterator) + + def __next__(self): + """Return the next value from the currently streamed iterator. + If the current iterator is streamed to the end, + start to stream the next one. + :rtype: list + :returns: The next result row. + """ + try: + res = next(self._iterators[self._index]) + except StopIteration: + self._index += 1 + res = self.__next__() + except IndexError: + raise StopIteration + + return res + + def __iter__(self): + return self + + +def backtick_unicode(sql): + """Check the SQL to be valid and split it by segments. + + :type sql: str + :param sql: A SQL request. + + :rtype: str + :returns: A SQL parsed by segments in unicode if initial SQL is valid, + initial string otherwise. 
+ """ + matches = list(re_UNICODE_POINTS.finditer(sql)) + if not matches: + return sql + + segments = [] + + last_end = 0 + for match in matches: + start, end = match.span() + if sql[start] != "`" and sql[end - 1] != "`": + segments.append(sql[last_end:start] + "`" + sql[start:end] + "`") + else: + segments.append(sql[last_end:end]) + + last_end = end + + return "".join(segments) + + +def sanitize_literals_for_upload(s): + """Convert literals in s, to be fit for consumption by Cloud Spanner. + + * Convert %% (escaped percent literals) to %. Percent signs must be escaped + when values like %s are used as SQL parameter placeholders but Spanner's + query language uses placeholders like @a0 and doesn't expect percent + signs to be escaped. + * Quote words containing non-ASCII, with backticks, for example föö to + `föö`. + + :type s: str + :param s: A string with literals to escaped for consumption by Cloud + Spanner. + + :rtype: str + :returns: A sanitized string for uploading. + """ + return backtick_unicode(s.replace("%%", "%")) diff --git a/google/cloud/spanner_dbapi/version.py b/google/cloud/spanner_dbapi/version.py new file mode 100644 index 0000000000..6fbb80eb90 --- /dev/null +++ b/google/cloud/spanner_dbapi/version.py @@ -0,0 +1,20 @@ +# Copyright 2020 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import platform +from google.cloud.spanner_v1 import gapic_version as package_version + +PY_VERSION = platform.python_version() +VERSION = package_version.__version__ +DEFAULT_USER_AGENT = "gl-dbapi/" + VERSION diff --git a/google/cloud/spanner_v1/__init__.py b/google/cloud/spanner_v1/__init__.py index 8611405cd6..48b11d9342 100644 --- a/google/cloud/spanner_v1/__init__.py +++ b/google/cloud/spanner_v1/__init__.py @@ -1,4 +1,6 @@ -# Copyright 2017, Google LLC All rights reserved. +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -11,17 +13,61 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
- +# from __future__ import absolute_import -import pkg_resources +from google.cloud.spanner_v1 import gapic_version as package_version -__version__ = pkg_resources.get_distribution("google-cloud-spanner").version +__version__: str = package_version.__version__ + +from .services.spanner import SpannerClient +from .services.spanner import SpannerAsyncClient +from .types.commit_response import CommitResponse +from .types.keys import KeyRange as KeyRangePB +from .types.keys import KeySet as KeySetPB +from .types.mutation import Mutation +from .types.query_plan import PlanNode +from .types.query_plan import QueryPlan +from .types.result_set import PartialResultSet +from .types import RequestOptions +from .types.result_set import ResultSet +from .types.result_set import ResultSetMetadata +from .types.result_set import ResultSetStats +from .types.spanner import BatchCreateSessionsRequest +from .types.spanner import BatchCreateSessionsResponse +from .types.spanner import BatchWriteRequest +from .types.spanner import BatchWriteResponse +from .types.spanner import BeginTransactionRequest +from .types.spanner import CommitRequest +from .types.spanner import CreateSessionRequest +from .types.spanner import DeleteSessionRequest +from .types.spanner import DirectedReadOptions +from .types.spanner import ExecuteBatchDmlRequest +from .types.spanner import ExecuteBatchDmlResponse +from .types.spanner import ExecuteSqlRequest +from .types.spanner import GetSessionRequest +from .types.spanner import ListSessionsRequest +from .types.spanner import ListSessionsResponse +from .types.spanner import Partition +from .types.spanner import PartitionOptions +from .types.spanner import PartitionQueryRequest +from .types.spanner import PartitionReadRequest +from .types.spanner import PartitionResponse +from .types.spanner import ReadRequest +from .types.spanner import RollbackRequest +from .types.spanner import Session +from .types.transaction import Transaction +from .types.transaction import TransactionOptions +from .types.transaction import TransactionSelector +from .types.type import StructType +from .types.type import Type +from .types.type import TypeAnnotationCode +from .types.type import TypeCode +from .data_types import JsonObject, Interval +from .transaction import BatchTransactionId, DefaultTransactionOptions from google.cloud.spanner_v1 import param_types -from google.cloud.spanner_v1 import types from google.cloud.spanner_v1.client import Client -from google.cloud.spanner_v1.gapic import enums from google.cloud.spanner_v1.keyset import KeyRange from google.cloud.spanner_v1.keyset import KeySet from google.cloud.spanner_v1.pool import AbstractSessionPool @@ -33,7 +79,6 @@ COMMIT_TIMESTAMP = "spanner.commit_timestamp()" """Placeholder be used to store commit timestamp of a transaction in a column. - This value can only be used for timestamp columns that have set the option ``(allow_commit_timestamp=true)`` in the schema. 
""" @@ -43,7 +88,6 @@ # google.cloud.spanner_v1 "__version__", "param_types", - "types", # google.cloud.spanner_v1.client "Client", # google.cloud.spanner_v1.keyset @@ -55,8 +99,56 @@ "FixedSizePool", "PingingPool", "TransactionPingingPool", - # google.cloud.spanner_v1.gapic - "enums", # local "COMMIT_TIMESTAMP", + # google.cloud.spanner_v1.types + "BatchCreateSessionsRequest", + "BatchCreateSessionsResponse", + "BatchWriteRequest", + "BatchWriteResponse", + "BeginTransactionRequest", + "CommitRequest", + "CommitResponse", + "CreateSessionRequest", + "DeleteSessionRequest", + "DirectedReadOptions", + "ExecuteBatchDmlRequest", + "ExecuteBatchDmlResponse", + "ExecuteSqlRequest", + "GetSessionRequest", + "KeyRangePB", + "KeySetPB", + "ListSessionsRequest", + "ListSessionsResponse", + "Mutation", + "PartialResultSet", + "Partition", + "PartitionOptions", + "PartitionQueryRequest", + "PartitionReadRequest", + "PartitionResponse", + "PlanNode", + "QueryPlan", + "ReadRequest", + "RequestOptions", + "ResultSet", + "ResultSetMetadata", + "ResultSetStats", + "RollbackRequest", + "Session", + "StructType", + "Transaction", + "TransactionOptions", + "TransactionSelector", + "Type", + "TypeAnnotationCode", + "TypeCode", + # Custom spanner related data types + "JsonObject", + "Interval", + # google.cloud.spanner_v1.services + "SpannerClient", + "SpannerAsyncClient", + "BatchTransactionId", + "DefaultTransactionOptions", ) diff --git a/google/cloud/spanner_v1/_helpers.py b/google/cloud/spanner_v1/_helpers.py index 91e8c8d29c..00a69d462b 100644 --- a/google/cloud/spanner_v1/_helpers.py +++ b/google/cloud/spanner_v1/_helpers.py @@ -15,18 +15,68 @@ """Helper functions for Cloud Spanner.""" import datetime +import decimal import math - -import six +import time +import base64 +import threading from google.protobuf.struct_pb2 import ListValue from google.protobuf.struct_pb2 import Value +from google.protobuf.message import Message +from google.protobuf.internal.enum_type_wrapper import EnumTypeWrapper from google.api_core import datetime_helpers +from google.api_core.exceptions import Aborted from google.cloud._helpers import _date_from_iso8601_date -from google.cloud._helpers import _datetime_to_rfc3339 -from google.cloud.spanner_v1.proto import type_pb2 -from google.cloud.spanner_v1.proto.spanner_pb2 import ExecuteSqlRequest +from google.cloud.spanner_v1 import TypeCode +from google.cloud.spanner_v1 import ExecuteSqlRequest +from google.cloud.spanner_v1 import JsonObject, Interval +from google.cloud.spanner_v1 import TransactionOptions +from google.cloud.spanner_v1.request_id_header import with_request_id +from google.rpc.error_details_pb2 import RetryInfo + +try: + from opentelemetry.propagate import inject + from opentelemetry.propagators.textmap import Setter + + HAS_OPENTELEMETRY_INSTALLED = True +except ImportError: + HAS_OPENTELEMETRY_INSTALLED = False +from typing import List, Tuple +import random + +# Validation error messages +NUMERIC_MAX_SCALE_ERR_MSG = ( + "Max scale for a numeric is 9. The requested numeric has scale {}" +) +NUMERIC_MAX_PRECISION_ERR_MSG = ( + "Max precision for the whole component of a numeric is 29. The requested " + + "numeric has a whole component with precision {}" +) + + +if HAS_OPENTELEMETRY_INSTALLED: + + class OpenTelemetryContextSetter(Setter): + """ + Used by Open Telemetry for context propagation. 
+ """ + + def set(self, carrier: List[Tuple[str, str]], key: str, value: str) -> None: + """ + Injects trace context into Spanner metadata + + Args: + carrier(PubsubMessage): The Pub/Sub message which is the carrier of Open Telemetry + data. + key(str): The key for which the Open Telemetry context data needs to be set. + value(str): The Open Telemetry context value to be set. + + Returns: + None + """ + carrier.append((key, value)) def _try_to_coerce_bytes(bytestring): @@ -52,41 +102,100 @@ def _merge_query_options(base, merge): """Merge higher precedence QueryOptions with current QueryOptions. :type base: - :class:`google.cloud.spanner_v1.proto.ExecuteSqlRequest.QueryOptions` + :class:`~google.cloud.spanner_v1.types.ExecuteSqlRequest.QueryOptions` or :class:`dict` or None :param base: The current QueryOptions that is intended for use. :type merge: - :class:`google.cloud.spanner_v1.proto.ExecuteSqlRequest.QueryOptions` + :class:`~google.cloud.spanner_v1.types.ExecuteSqlRequest.QueryOptions` or :class:`dict` or None :param merge: The QueryOptions that have a higher priority than base. These options should overwrite the fields in base. :rtype: - :class:`google.cloud.spanner_v1.proto.ExecuteSqlRequest.QueryOptions` + :class:`~google.cloud.spanner_v1.types.ExecuteSqlRequest.QueryOptions` or None :returns: QueryOptions object formed by merging the two given QueryOptions. If the resultant object only has empty fields, returns None. """ combined = base or ExecuteSqlRequest.QueryOptions() - if type(combined) == dict: + if type(combined) is dict: combined = ExecuteSqlRequest.QueryOptions( - optimizer_version=combined.get("optimizer_version", "") + optimizer_version=combined.get("optimizer_version", ""), + optimizer_statistics_package=combined.get( + "optimizer_statistics_package", "" + ), ) merge = merge or ExecuteSqlRequest.QueryOptions() - if type(merge) == dict: + if type(merge) is dict: merge = ExecuteSqlRequest.QueryOptions( - optimizer_version=merge.get("optimizer_version", "") + optimizer_version=merge.get("optimizer_version", ""), + optimizer_statistics_package=merge.get("optimizer_statistics_package", ""), ) - combined.MergeFrom(merge) - if not combined.optimizer_version: + type(combined).pb(combined).MergeFrom(type(merge).pb(merge)) + if not combined.optimizer_version and not combined.optimizer_statistics_package: return None return combined -# pylint: disable=too-many-return-statements,too-many-branches +def _assert_numeric_precision_and_scale(value): + """ + Asserts that input numeric field is within Spanner supported range. + + Spanner supports fixed 38 digits of precision and 9 digits of scale. + This number can be optionally prefixed with a plus or minus sign. + Read more here: https://cloud.google.com/spanner/docs/data-types#numeric_type + + :type value: decimal.Decimal + :param value: The value to check for Cloud Spanner compatibility. + + :raises NotSupportedError: If value is not within supported precision or scale of Spanner. + """ + scale = value.as_tuple().exponent + precision = len(value.as_tuple().digits) + + if scale < -9: + raise ValueError(NUMERIC_MAX_SCALE_ERR_MSG.format(abs(scale))) + if precision + scale > 29: + raise ValueError(NUMERIC_MAX_PRECISION_ERR_MSG.format(precision + scale)) + + +def _datetime_to_rfc3339(value): + """Format the provided datatime in the RFC 3339 format. 
+
+    :type value: datetime.datetime
+    :param value: value to format
+
+    :rtype: str
+    :returns: RFC 3339 formatted datetime string
+    """
+    # Convert to UTC and then drop the timezone so we can append "Z" in lieu of
+    # allowing isoformat to append the "+00:00" zone offset.
+    value = value.astimezone(datetime.timezone.utc).replace(tzinfo=None)
+    return value.isoformat(sep="T", timespec="microseconds") + "Z"
+
+
+def _datetime_to_rfc3339_nanoseconds(value):
+    """Format the provided datetime in the RFC 3339 format.
+
+    :type value: datetime_helpers.DatetimeWithNanoseconds
+    :param value: value to format
+
+    :rtype: str
+    :returns: RFC 3339 formatted datetime string
+    """
+
+    if value.nanosecond == 0:
+        return _datetime_to_rfc3339(value)
+    nanos = str(value.nanosecond).rjust(9, "0").rstrip("0")
+    # Convert to UTC and then drop the timezone so we can append "Z" in lieu of
+    # allowing isoformat to append the "+00:00" zone offset.
+    value = value.astimezone(datetime.timezone.utc).replace(tzinfo=None)
+    return "{}.{}Z".format(value.isoformat(sep="T", timespec="seconds"), nanos)
+
+
 def _make_value_pb(value):
     """Helper for :func:`_make_list_value_pbs`.
 
@@ -103,7 +212,7 @@ def _make_value_pb(value):
         return Value(list_value=_make_list_value_pb(value))
     if isinstance(value, bool):
         return Value(bool_value=value)
-    if isinstance(value, six.integer_types):
+    if isinstance(value, int):
         return Value(string_value=str(value))
     if isinstance(value, float):
         if math.isnan(value):
@@ -115,22 +224,37 @@ def _make_value_pb(value):
             return Value(string_value="-Infinity")
         return Value(number_value=value)
     if isinstance(value, datetime_helpers.DatetimeWithNanoseconds):
-        return Value(string_value=value.rfc3339())
+        return Value(string_value=_datetime_to_rfc3339_nanoseconds(value))
     if isinstance(value, datetime.datetime):
         return Value(string_value=_datetime_to_rfc3339(value))
     if isinstance(value, datetime.date):
         return Value(string_value=value.isoformat())
-    if isinstance(value, six.binary_type):
+    if isinstance(value, bytes):
         value = _try_to_coerce_bytes(value)
         return Value(string_value=value)
-    if isinstance(value, six.text_type):
+    if isinstance(value, str):
         return Value(string_value=value)
     if isinstance(value, ListValue):
         return Value(list_value=value)
-    raise ValueError("Unknown type: %s" % (value,))
-
+    if isinstance(value, decimal.Decimal):
+        _assert_numeric_precision_and_scale(value)
+        return Value(string_value=str(value))
+    if isinstance(value, JsonObject):
+        value = value.serialize()
+        if value is None:
+            return Value(null_value="NULL_VALUE")
+        else:
+            return Value(string_value=value)
+    if isinstance(value, Message):
+        value = value.SerializeToString()
+        if value is None:
+            return Value(null_value="NULL_VALUE")
+        else:
+            return Value(string_value=base64.b64encode(value))
+    if isinstance(value, Interval):
+        return Value(string_value=str(value))
 
-# pylint: enable=too-many-return-statements,too-many-branches
+    raise ValueError("Unknown type: %s" % (value,))
 
 
 def _make_list_value_pb(values):
@@ -157,56 +281,98 @@ def _make_list_value_pbs(values):
     return [_make_list_value_pb(row) for row in values]
 
 
-# pylint: disable=too-many-branches
-def _parse_value_pb(value_pb, field_type):
+def _parse_value_pb(value_pb, field_type, field_name, column_info=None):
     """Convert a Value protobuf to cell data.
     :type value_pb: :class:`~google.protobuf.struct_pb2.Value`
     :param value_pb: protobuf to convert
 
-    :type field_type: :class:`~google.cloud.spanner_v1.proto.type_pb2.Type`
+    :type field_type: :class:`~google.cloud.spanner_v1.types.Type`
     :param field_type: type code for the value
 
+    :type field_name: str
+    :param field_name: column name
+
+    :type column_info: dict
+    :param column_info: (Optional) A dict mapping column names to custom
+        objects to be used for deserialization. This is particularly useful
+        for data types such as protobuf, where the deserialization logic lives
+        in user code. When provided, the custom object enables deserialization
+        of the column data received from the backend. If not provided, the
+        data remains serialized: as bytes for proto messages and as an integer
+        for proto enums.
+
     :rtype: varies on field_type
     :returns: value extracted from value_pb
 
     :raises ValueError: if unknown type is passed
     """
-    if value_pb.HasField("null_value"):
-        return None
-    if field_type.code == type_pb2.STRING:
-        result = value_pb.string_value
-    elif field_type.code == type_pb2.BYTES:
-        result = value_pb.string_value.encode("utf8")
-    elif field_type.code == type_pb2.BOOL:
-        result = value_pb.bool_value
-    elif field_type.code == type_pb2.INT64:
-        result = int(value_pb.string_value)
-    elif field_type.code == type_pb2.FLOAT64:
-        if value_pb.HasField("string_value"):
-            result = float(value_pb.string_value)
-        else:
-            result = value_pb.number_value
-    elif field_type.code == type_pb2.DATE:
-        result = _date_from_iso8601_date(value_pb.string_value)
-    elif field_type.code == type_pb2.TIMESTAMP:
-        DatetimeWithNanoseconds = datetime_helpers.DatetimeWithNanoseconds
-        result = DatetimeWithNanoseconds.from_rfc3339(value_pb.string_value)
-    elif field_type.code == type_pb2.ARRAY:
-        result = [
-            _parse_value_pb(item_pb, field_type.array_element_type)
-            for item_pb in value_pb.list_value.values
-        ]
-    elif field_type.code == type_pb2.STRUCT:
-        result = [
-            _parse_value_pb(item_pb, field_type.struct_type.fields[i].type)
-            for (i, item_pb) in enumerate(value_pb.list_value.values)
+    decoder = _get_type_decoder(field_type, field_name, column_info)
+    return _parse_nullable(value_pb, decoder)
+
+
+def _get_type_decoder(field_type, field_name, column_info=None):
+    """Returns a function that converts a Value protobuf to cell data.
+
+    :type field_type: :class:`~google.cloud.spanner_v1.types.Type`
+    :param field_type: type code for the value
+
+    :type field_name: str
+    :param field_name: column name
+
+    :type column_info: dict
+    :param column_info: (Optional) A dict mapping column names to custom
+        objects to be used for deserialization. This is particularly useful
+        for data types such as protobuf, where the deserialization logic lives
+        in user code. When provided, the custom object enables deserialization
+        of the column data received from the backend. If not provided, the
+        data remains serialized: as bytes for proto messages and as an integer
+        for proto enums.
+ + :rtype: a function that takes a single protobuf value as an input argument + :returns: a function that can be used to extract a value from a protobuf value + :raises ValueError: if unknown type is passed + """ + + type_code = field_type.code + if type_code == TypeCode.STRING: + return _parse_string + elif type_code == TypeCode.BYTES: + return _parse_bytes + elif type_code == TypeCode.BOOL: + return _parse_bool + elif type_code == TypeCode.INT64: + return _parse_int64 + elif type_code == TypeCode.FLOAT64: + return _parse_float + elif type_code == TypeCode.FLOAT32: + return _parse_float + elif type_code == TypeCode.DATE: + return _parse_date + elif type_code == TypeCode.TIMESTAMP: + return _parse_timestamp + elif type_code == TypeCode.NUMERIC: + return _parse_numeric + elif type_code == TypeCode.JSON: + return _parse_json + elif type_code == TypeCode.PROTO: + return lambda value_pb: _parse_proto(value_pb, column_info, field_name) + elif type_code == TypeCode.ENUM: + return lambda value_pb: _parse_proto_enum(value_pb, column_info, field_name) + elif type_code == TypeCode.ARRAY: + element_decoder = _get_type_decoder( + field_type.array_element_type, field_name, column_info + ) + return lambda value_pb: _parse_array(value_pb, element_decoder) + elif type_code == TypeCode.STRUCT: + element_decoders = [ + _get_type_decoder(item_field.type_, field_name, column_info) + for item_field in field_type.struct_type.fields ] + return lambda value_pb: _parse_struct(value_pb, element_decoders) + elif type_code == TypeCode.INTERVAL: + return _parse_interval else: raise ValueError("Unknown type: %s" % (field_type,)) - return result - - -# pylint: enable=too-many-branches def _parse_list_value_pbs(rows, row_type): @@ -215,7 +381,7 @@ def _parse_list_value_pbs(rows, row_type): :type rows: list of :class:`~google.protobuf.struct_pb2.ListValue` :param rows: row data returned from a read/query - :type row_type: :class:`~google.cloud.spanner_v1.proto.type_pb2.StructType` + :type row_type: :class:`~google.cloud.spanner_v1.types.StructType` :param row_type: row schema specification :rtype: list of list of cell data @@ -225,11 +391,99 @@ def _parse_list_value_pbs(rows, row_type): for row in rows: row_data = [] for value_pb, field in zip(row.values, row_type.fields): - row_data.append(_parse_value_pb(value_pb, field.type)) + row_data.append(_parse_value_pb(value_pb, field.type_, field.name)) result.append(row_data) return result +def _parse_string(value_pb) -> str: + return value_pb.string_value + + +def _parse_bytes(value_pb): + return value_pb.string_value.encode("utf8") + + +def _parse_bool(value_pb) -> bool: + return value_pb.bool_value + + +def _parse_int64(value_pb) -> int: + return int(value_pb.string_value) + + +def _parse_float(value_pb) -> float: + if value_pb.HasField("string_value"): + return float(value_pb.string_value) + else: + return value_pb.number_value + + +def _parse_date(value_pb): + return _date_from_iso8601_date(value_pb.string_value) + + +def _parse_timestamp(value_pb): + DatetimeWithNanoseconds = datetime_helpers.DatetimeWithNanoseconds + return DatetimeWithNanoseconds.from_rfc3339(value_pb.string_value) + + +def _parse_numeric(value_pb): + return decimal.Decimal(value_pb.string_value) + + +def _parse_json(value_pb): + return JsonObject.from_str(value_pb.string_value) + + +def _parse_proto(value_pb, column_info, field_name): + bytes_value = base64.b64decode(value_pb.string_value) + if column_info is not None and column_info.get(field_name) is not None: + default_proto_message = 
column_info.get(field_name)
+        if isinstance(default_proto_message, Message):
+            proto_message = type(default_proto_message)()
+            proto_message.ParseFromString(bytes_value)
+            return proto_message
+    return bytes_value
+
+
+def _parse_proto_enum(value_pb, column_info, field_name):
+    int_value = int(value_pb.string_value)
+    if column_info is not None and column_info.get(field_name) is not None:
+        proto_enum = column_info.get(field_name)
+        if isinstance(proto_enum, EnumTypeWrapper):
+            return proto_enum.Name(int_value)
+    return int_value
+
+
+def _parse_array(value_pb, element_decoder) -> list:
+    return [
+        _parse_nullable(item_pb, element_decoder)
+        for item_pb in value_pb.list_value.values
+    ]
+
+
+def _parse_struct(value_pb, element_decoders):
+    return [
+        _parse_nullable(item_pb, element_decoders[i])
+        for (i, item_pb) in enumerate(value_pb.list_value.values)
+    ]
+
+
+def _parse_nullable(value_pb, decoder):
+    if value_pb.HasField("null_value"):
+        return None
+    else:
+        return decoder(value_pb)
+
+
+def _parse_interval(value_pb):
+    """Parse a Value protobuf containing an interval."""
+    if hasattr(value_pb, "string_value"):
+        return Interval.from_str(value_pb.string_value)
+    return Interval.from_str(value_pb)
+
+
 class _SessionWrapper(object):
     """Base class for objects wrapping a session.
 
@@ -251,3 +505,247 @@ def _metadata_with_prefix(prefix, **kw):
         List[Tuple[str, str]]: RPC metadata with supplied prefix
     """
     return [("google-cloud-resource-prefix", prefix)]
+
+
+def _retry_on_aborted_exception(
+    func,
+    deadline,
+    default_retry_delay=None,
+):
+    """
+    Handles retry logic for Aborted exceptions, considering the deadline.
+    """
+    attempts = 0
+    while True:
+        try:
+            attempts += 1
+            return func()
+        except Aborted as exc:
+            _delay_until_retry(
+                exc,
+                deadline=deadline,
+                attempts=attempts,
+                default_retry_delay=default_retry_delay,
+            )
+            continue
+
+
+def _retry(
+    func,
+    retry_count=5,
+    delay=2,
+    allowed_exceptions=None,
+    before_next_retry=None,
+):
+    """
+    Retry a function with a specified number of retries, a delay between retries, and a mapping of allowed exceptions.
+
+    Args:
+        func: The function to be retried.
+        retry_count: The maximum number of times to retry the function.
+        delay: The delay in seconds between retries; it doubles after each attempt.
+        allowed_exceptions: A dict mapping exception classes that may occur without
+            aborting the retry loop to an optional handler invoked with the exception
+            before the next attempt. Passing allowed_exceptions as None will lead to
+            retrying for all exceptions.
+        before_next_retry: An optional callback invoked with the retry number and the
+            current delay before each retry attempt.
+
+    Returns:
+        The result of the function if it is successful, or raises the last exception if all retries fail.
+ """ + retries = 0 + while retries <= retry_count: + if retries > 0 and before_next_retry: + before_next_retry(retries, delay) + + try: + return func() + except Exception as exc: + if ( + allowed_exceptions is None or exc.__class__ in allowed_exceptions + ) and retries < retry_count: + if ( + allowed_exceptions is not None + and allowed_exceptions[exc.__class__] is not None + ): + allowed_exceptions[exc.__class__](exc) + time.sleep(delay) + delay = delay * 2 + retries = retries + 1 + else: + raise exc + + +def _check_rst_stream_error(exc): + resumable_error = ( + any( + resumable_message in exc.message + for resumable_message in ( + "RST_STREAM", + "Received unexpected EOS on DATA frame from server", + ) + ), + ) + if not resumable_error: + raise + + +def _metadata_with_leader_aware_routing(value, **kw): + """Create RPC metadata containing a leader aware routing header + + Args: + value (bool): header value + + Returns: + List[Tuple[str, str]]: RPC metadata with leader aware routing header + """ + return ("x-goog-spanner-route-to-leader", str(value).lower()) + + +def _metadata_with_span_context(metadata: List[Tuple[str, str]], **kw) -> None: + """ + Appends metadata with end to end tracing header and OpenTelemetry span context . + + Args: + metadata (list[tuple[str, str]]): The metadata carrier where the OpenTelemetry context + should be injected. + Returns: + None + """ + if HAS_OPENTELEMETRY_INSTALLED and metadata is not None: + metadata.append(("x-goog-spanner-end-to-end-tracing", "true")) + inject(setter=OpenTelemetryContextSetter(), carrier=metadata) + + +def _delay_until_retry(exc, deadline, attempts, default_retry_delay=None): + """Helper for :meth:`Session.run_in_transaction`. + + Detect retryable abort, and impose server-supplied delay. + + :type exc: :class:`google.api_core.exceptions.Aborted` + :param exc: exception for aborted transaction + + :type deadline: float + :param deadline: maximum timestamp to continue retrying the transaction. + + :type attempts: int + :param attempts: number of call retries + """ + + cause = exc.errors[0] + now = time.time() + if now >= deadline: + raise + + delay = _get_retry_delay(cause, attempts, default_retry_delay=default_retry_delay) + if delay is not None: + if now + delay > deadline: + raise + + time.sleep(delay) + + +def _get_retry_delay(cause, attempts, default_retry_delay=None): + """Helper for :func:`_delay_until_retry`. + + :type exc: :class:`grpc.Call` + :param exc: exception for aborted transaction + + :rtype: float + :returns: seconds to wait before retrying the transaction. + + :type attempts: int + :param attempts: number of call retries + """ + if hasattr(cause, "trailing_metadata"): + metadata = dict(cause.trailing_metadata()) + else: + metadata = {} + retry_info_pb = metadata.get("google.rpc.retryinfo-bin") + if retry_info_pb is not None: + retry_info = RetryInfo() + retry_info.ParseFromString(retry_info_pb) + nanos = retry_info.retry_delay.nanos + return retry_info.retry_delay.seconds + nanos / 1.0e9 + if default_retry_delay is not None: + return default_retry_delay + + return 2**attempts + random.random() + + +class AtomicCounter: + def __init__(self, start_value=0): + self.__lock = threading.Lock() + self.__value = start_value + + @property + def value(self): + with self.__lock: + return self.__value + + def increment(self, n=1): + with self.__lock: + self.__value += n + return self.__value + + def __iadd__(self, n): + """ + Defines the inplace += operator result. 
+ """ + with self.__lock: + self.__value += n + return self + + def __add__(self, n): + """ + Defines the result of invoking: value = AtomicCounter + addable + """ + with self.__lock: + n += self.__value + return n + + def __radd__(self, n): + """ + Defines the result of invoking: value = addable + AtomicCounter + """ + return self.__add__(n) + + def reset(self): + with self.__lock: + self.__value = 0 + + +def _metadata_with_request_id(*args, **kwargs): + return with_request_id(*args, **kwargs) + + +def _merge_Transaction_Options( + defaultTransactionOptions: TransactionOptions, + mergeTransactionOptions: TransactionOptions, +) -> TransactionOptions: + """Merges two TransactionOptions objects. + + - Values from `mergeTransactionOptions` take precedence if set. + - Values from `defaultTransactionOptions` are used only if missing. + + Args: + defaultTransactionOptions (TransactionOptions): The default transaction options (fallback values). + mergeTransactionOptions (TransactionOptions): The main transaction options (overrides when set). + + Returns: + TransactionOptions: A merged TransactionOptions object. + """ + + if defaultTransactionOptions is None: + return mergeTransactionOptions + + if mergeTransactionOptions is None: + return defaultTransactionOptions + + merged_pb = TransactionOptions()._pb # Create a new protobuf object + + # Merge defaultTransactionOptions first + merged_pb.MergeFrom(defaultTransactionOptions._pb) + + # Merge transactionOptions, ensuring it overrides default values + merged_pb.MergeFrom(mergeTransactionOptions._pb) + + # Convert protobuf object back into a TransactionOptions instance + return TransactionOptions(merged_pb) diff --git a/google/cloud/spanner_v1/_opentelemetry_tracing.py b/google/cloud/spanner_v1/_opentelemetry_tracing.py new file mode 100644 index 0000000000..eafc983850 --- /dev/null +++ b/google/cloud/spanner_v1/_opentelemetry_tracing.py @@ -0,0 +1,165 @@ +# Copyright 2020 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +"""Manages OpenTelemetry trace creation and handling""" + +from contextlib import contextmanager +from datetime import datetime +import os + +from google.cloud.spanner_v1 import SpannerClient +from google.cloud.spanner_v1 import gapic_version +from google.cloud.spanner_v1._helpers import ( + _metadata_with_span_context, +) + +try: + from opentelemetry import trace + from opentelemetry.trace.status import Status, StatusCode + from opentelemetry.semconv.attributes.otel_attributes import ( + OTEL_SCOPE_NAME, + OTEL_SCOPE_VERSION, + ) + + HAS_OPENTELEMETRY_INSTALLED = True +except ImportError: + HAS_OPENTELEMETRY_INSTALLED = False + +from google.cloud.spanner_v1.metrics.metrics_capture import MetricsCapture + +TRACER_NAME = "cloud.google.com/python/spanner" +TRACER_VERSION = gapic_version.__version__ +extended_tracing_globally_disabled = ( + os.getenv("SPANNER_ENABLE_EXTENDED_TRACING", "").lower() == "false" +) +end_to_end_tracing_globally_enabled = ( + os.getenv("SPANNER_ENABLE_END_TO_END_TRACING", "").lower() == "true" +) + + +def get_tracer(tracer_provider=None): + """ + get_tracer is a utility to unify and simplify retrieval of the tracer, without + leaking implementation details given that retrieving a tracer requires providing + the full qualified library name and version. + When the tracer_provider is set, it'll retrieve the tracer from it, otherwise + it'll fall back to the global tracer provider and use this library's specific semantics. + """ + if not tracer_provider: + # Acquire the global tracer provider. + tracer_provider = trace.get_tracer_provider() + + return tracer_provider.get_tracer(TRACER_NAME, TRACER_VERSION) + + +@contextmanager +def trace_call( + name, session=None, extra_attributes=None, observability_options=None, metadata=None +): + if session: + session._last_use_time = datetime.now() + + if not (HAS_OPENTELEMETRY_INSTALLED and name): + # Empty context manager. Users will have to check if the generated value is None or a span + yield None + return + + tracer_provider = None + + # By default enable_extended_tracing=True because in a bid to minimize + # breaking changes and preserve legacy behavior, we are keeping it turned + # on by default. 
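+    # The SPANNER_ENABLE_EXTENDED_TRACING environment variable (read once at
+    # module import above) still takes precedence: when it is set to "false",
+    # extended tracing is force-disabled further below regardless of the
+    # per-call observability options.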
+    enable_extended_tracing = True
+
+    enable_end_to_end_tracing = False
+
+    db_name = ""
+    if session and getattr(session, "_database", None):
+        db_name = session._database.name
+
+    if isinstance(observability_options, dict):  # Avoid false positives with mock.Mock
+        tracer_provider = observability_options.get("tracer_provider", None)
+        enable_extended_tracing = observability_options.get(
+            "enable_extended_tracing", enable_extended_tracing
+        )
+        enable_end_to_end_tracing = observability_options.get(
+            "enable_end_to_end_tracing", enable_end_to_end_tracing
+        )
+        db_name = observability_options.get("db_name", db_name)
+
+    tracer = get_tracer(tracer_provider)
+
+    # Set base attributes that we know for every trace created
+    attributes = {
+        "db.type": "spanner",
+        "db.url": SpannerClient.DEFAULT_ENDPOINT,
+        "db.instance": db_name,
+        "net.host.name": SpannerClient.DEFAULT_ENDPOINT,
+        OTEL_SCOPE_NAME: TRACER_NAME,
+        OTEL_SCOPE_VERSION: TRACER_VERSION,
+        # Standard GCP attributes for OTel; these are used for internal
+        # purposes and are subject to change.
+        "gcp.client.service": "spanner",
+        "gcp.client.version": TRACER_VERSION,
+        "gcp.client.repo": "googleapis/python-spanner",
+    }
+
+    if extra_attributes:
+        attributes.update(extra_attributes)
+
+    if extended_tracing_globally_disabled:
+        enable_extended_tracing = False
+
+    if not enable_extended_tracing:
+        attributes.pop("db.statement", False)
+
+    if end_to_end_tracing_globally_enabled:
+        enable_end_to_end_tracing = True
+
+    with tracer.start_as_current_span(
+        name, kind=trace.SpanKind.CLIENT, attributes=attributes
+    ) as span:
+        with MetricsCapture():
+            try:
+                if enable_end_to_end_tracing:
+                    _metadata_with_span_context(metadata)
+                yield span
+            except Exception as error:
+                span.set_status(Status(StatusCode.ERROR, str(error)))
+                # OpenTelemetry-Python imposes invoking span.record_exception on
+                # __exit__ on any exception, so we must not invoke
+                # .record_exception on our own, else the exception would be
+                # recorded twice. We should file a bug with them later to only
+                # invoke .record_exception if it wasn't already invoked.
+                raise
+            else:
+                # All spans still have set_status available even if for example
+                # NonRecordingSpan doesn't have "_status".
+                absent_span_status = getattr(span, "_status", None) is None
+                if absent_span_status or span._status.status_code == StatusCode.UNSET:
+                    # OpenTelemetry-Python only allows a status change
+                    # if the current code is UNSET or ERROR. At the end
+                    # of the generator's consumption, only set it to OK
+                    # if it wasn't previously set.
+ # https://github.com/googleapis/python-spanner/issues/1246 + span.set_status(Status(StatusCode.OK)) + + +def get_current_span(): + if not HAS_OPENTELEMETRY_INSTALLED: + return None + return trace.get_current_span() + + +def add_span_event(span, event_name, event_attributes=None): + if span: + span.add_event(event_name, event_attributes) diff --git a/google/cloud/spanner_v1/backup.py b/google/cloud/spanner_v1/backup.py index 2aaa1c0f5c..1fcffbe05a 100644 --- a/google/cloud/spanner_v1/backup.py +++ b/google/cloud/spanner_v1/backup.py @@ -16,10 +16,13 @@ import re -from google.cloud._helpers import _datetime_to_pb_timestamp, _pb_timestamp_to_datetime from google.cloud.exceptions import NotFound -from google.cloud.spanner_admin_database_v1.gapic import enums +from google.cloud.spanner_admin_database_v1 import Backup as BackupPB +from google.cloud.spanner_admin_database_v1 import CreateBackupEncryptionConfig +from google.cloud.spanner_admin_database_v1 import CreateBackupRequest +from google.cloud.spanner_admin_database_v1 import CopyBackupEncryptionConfig +from google.cloud.spanner_admin_database_v1 import CopyBackupRequest from google.cloud.spanner_v1._helpers import _metadata_with_prefix _BACKUP_NAME_RE = re.compile( @@ -52,17 +55,57 @@ class Backup(object): :param expire_time: (Optional) The expire time that will be used to create the backup. Required if the create method needs to be called. + + :type version_time: :class:`datetime.datetime` + :param version_time: (Optional) The version time that was specified for + the externally consistent copy of the database. If + not present, it is the same as the `create_time` of + the backup. + + :type encryption_config: + :class:`~google.cloud.spanner_admin_database_v1.types.CreateBackupEncryptionConfig` + or :class:`dict` + :param encryption_config: + (Optional) Encryption configuration for the backup. + If a dict is provided, it must be of the same form as the protobuf + message :class:`~google.cloud.spanner_admin_database_v1.types.CreateBackupEncryptionConfig` """ - def __init__(self, backup_id, instance, database="", expire_time=None): + def __init__( + self, + backup_id, + instance, + database="", + expire_time=None, + version_time=None, + encryption_config=None, + source_backup=None, + ): self.backup_id = backup_id self._instance = instance self._database = database + self._source_backup = source_backup self._expire_time = expire_time self._create_time = None + self._version_time = version_time self._size_bytes = None self._state = None self._referencing_databases = None + self._encryption_info = None + self._max_expire_time = None + self._referencing_backups = None + self._database_dialect = None + if type(encryption_config) is dict: + if source_backup: + self._encryption_config = CopyBackupEncryptionConfig( + **encryption_config + ) + else: + self._encryption_config = CreateBackupEncryptionConfig( + **encryption_config + ) + else: + self._encryption_config = encryption_config @property def name(self): @@ -110,6 +153,16 @@ def create_time(self): """ return self._create_time + @property + def version_time(self): + """Version time of this backup. + + :rtype: :class:`datetime.datetime` + :returns: a datetime object representing the version time of + this backup + """ + return self._version_time + @property def size_bytes(self): """Size of this backup in bytes. @@ -123,7 +176,7 @@ def size_bytes(self): def state(self): """State of this backup. 
-        :rtype: :class:`~google.cloud.spanner_admin_database_v1.gapic.enums.Backup.State`
+        :rtype: :class:`~google.cloud.spanner_admin_database_v1.types.Backup.State`
         :returns: an enum describing the state of the backup
         """
         return self._state
@@ -138,11 +191,44 @@ def referencing_databases(self):
         """
         return self._referencing_databases
 
+    @property
+    def encryption_info(self):
+        """Encryption info for this backup.
+        :rtype: :class:`~google.cloud.spanner_admin_database_v1.types.EncryptionInfo`
+        :returns: an object representing the encryption info
+        """
+        return self._encryption_info
+
+    @property
+    def max_expire_time(self):
+        """The max allowed expiration time of the backup.
+        :rtype: :class:`datetime.datetime`
+        :returns: a datetime object representing the max expire time of
+            this backup
+        """
+        return self._max_expire_time
+
+    @property
+    def referencing_backups(self):
+        """The names of the destination backups being created by copying this source backup.
+        :rtype: list of strings
+        :returns: a list of backup resource paths for the backups that
+            reference this copy backup
+        """
+        return self._referencing_backups
+
+    @property
+    def database_dialect(self):
+        """Database dialect for this backup.
+        :rtype: :class:`~google.cloud.spanner_admin_database_v1.types.DatabaseDialect`
+        :returns: an enum describing the dialect of this backup's database
+        """
+        return self._database_dialect
+
     @classmethod
     def from_pb(cls, backup_pb, instance):
         """Create an instance of this class from a protobuf message.
 
-        :type backup_pb: :class:`~google.spanner.admin.database.v1.Backup`
+        :type backup_pb: :class:`~google.cloud.spanner_admin_database_v1.types.Backup`
         :param backup_pb: A backup protobuf object.
 
         :type instance: :class:`~google.cloud.spanner_v1.instance.Instance`
@@ -176,7 +262,7 @@ def from_pb(cls, backup_pb, instance):
         return cls(backup_id, instance)
 
     def create(self):
-        """Create this backup within its instance.
+        """Create this backup or backup copy within its instance.
:rtype: :class:`~google.api_core.operation.Operation` :returns: a future used to poll the status of the create request @@ -187,17 +273,58 @@ def create(self): """ if not self._expire_time: raise ValueError("expire_time not set") - if not self._database: - raise ValueError("database not set") + + if not self._database and not self._source_backup: + raise ValueError("database and source backup both not set") + + if ( + ( + self._encryption_config + and self._encryption_config.kms_key_name + and self._encryption_config.encryption_type + != CreateBackupEncryptionConfig.EncryptionType.CUSTOMER_MANAGED_ENCRYPTION + ) + and self._encryption_config + and self._encryption_config.kms_key_name + and self._encryption_config.encryption_type + != CopyBackupEncryptionConfig.EncryptionType.CUSTOMER_MANAGED_ENCRYPTION + ): + raise ValueError("kms_key_name only used with CUSTOMER_MANAGED_ENCRYPTION") + api = self._instance._client.database_admin_api metadata = _metadata_with_prefix(self.name) - backup = { - "database": self._database, - "expire_time": _datetime_to_pb_timestamp(self.expire_time), - } + + if self._source_backup: + request = CopyBackupRequest( + parent=self._instance.name, + backup_id=self.backup_id, + source_backup=self._source_backup, + expire_time=self._expire_time, + encryption_config=self._encryption_config, + ) + + future = api.copy_backup( + request=request, + metadata=metadata, + ) + return future + + backup = BackupPB( + database=self._database, + expire_time=self.expire_time, + version_time=self.version_time, + ) + + request = CreateBackupRequest( + parent=self._instance.name, + backup_id=self.backup_id, + backup=backup, + encryption_config=self._encryption_config, + ) future = api.create_backup( - self._instance.name, self.backup_id, backup, metadata=metadata + request=request, + metadata=metadata, ) return future @@ -211,7 +338,7 @@ def exists(self): metadata = _metadata_with_prefix(self.name) try: - api.get_backup(self.name, metadata=metadata) + api.get_backup(name=self.name, metadata=metadata) except NotFound: return False return True @@ -225,13 +352,17 @@ def reload(self): """ api = self._instance._client.database_admin_api metadata = _metadata_with_prefix(self.name) - pb = api.get_backup(self.name, metadata=metadata) + pb = api.get_backup(name=self.name, metadata=metadata) self._database = pb.database - self._expire_time = _pb_timestamp_to_datetime(pb.expire_time) - self._create_time = _pb_timestamp_to_datetime(pb.create_time) + self._expire_time = pb.expire_time + self._create_time = pb.create_time + self._version_time = pb.version_time self._size_bytes = pb.size_bytes - self._state = enums.Backup.State(pb.state) + self._state = BackupPB.State(pb.state) self._referencing_databases = pb.referencing_databases + self._encryption_info = pb.encryption_info + self._max_expire_time = pb.max_expire_time + self._referencing_backups = pb.referencing_backups def update_expire_time(self, new_expire_time): """Update the expire time of this backup. 
@@ -241,12 +372,14 @@ def update_expire_time(self, new_expire_time): """ api = self._instance._client.database_admin_api metadata = _metadata_with_prefix(self.name) - backup_update = { - "name": self.name, - "expire_time": _datetime_to_pb_timestamp(new_expire_time), - } + backup_update = BackupPB( + name=self.name, + expire_time=new_expire_time, + ) update_mask = {"paths": ["expire_time"]} - api.update_backup(backup_update, update_mask, metadata=metadata) + api.update_backup( + backup=backup_update, update_mask=update_mask, metadata=metadata + ) self._expire_time = new_expire_time def is_ready(self): @@ -255,21 +388,10 @@ def is_ready(self): :rtype: bool :returns: True if the backup state is READY, else False. """ - return self.state == enums.Backup.State.READY + return self.state == BackupPB.State.READY def delete(self): """Delete this backup.""" api = self._instance._client.database_admin_api metadata = _metadata_with_prefix(self.name) - api.delete_backup(self.name, metadata=metadata) - - -class BackupInfo(object): - def __init__(self, backup, create_time, source_database): - self.backup = backup - self.create_time = _pb_timestamp_to_datetime(create_time) - self.source_database = source_database - - @classmethod - def from_pb(cls, pb): - return cls(pb.backup, pb.create_time, pb.source_database) + api.delete_backup(name=self.name, metadata=metadata) diff --git a/google/cloud/spanner_v1/batch.py b/google/cloud/spanner_v1/batch.py index e62763d7fd..0792e600dc 100644 --- a/google/cloud/spanner_v1/batch.py +++ b/google/cloud/spanner_v1/batch.py @@ -13,17 +13,32 @@ # limitations under the License. """Context manager for Cloud Spanner batched writes.""" +import functools +from typing import List, Optional -from google.cloud.spanner_v1.proto.mutation_pb2 import Mutation -from google.cloud.spanner_v1.proto.transaction_pb2 import TransactionOptions +from google.cloud.spanner_v1 import CommitRequest, CommitResponse +from google.cloud.spanner_v1 import Mutation +from google.cloud.spanner_v1 import TransactionOptions +from google.cloud.spanner_v1 import BatchWriteRequest -# pylint: disable=ungrouped-imports -from google.cloud._helpers import _pb_timestamp_to_datetime from google.cloud.spanner_v1._helpers import _SessionWrapper from google.cloud.spanner_v1._helpers import _make_list_value_pbs -from google.cloud.spanner_v1._helpers import _metadata_with_prefix - -# pylint: enable=ungrouped-imports +from google.cloud.spanner_v1._helpers import ( + _metadata_with_prefix, + _metadata_with_leader_aware_routing, + _merge_Transaction_Options, + AtomicCounter, +) +from google.cloud.spanner_v1._opentelemetry_tracing import trace_call +from google.cloud.spanner_v1 import RequestOptions +from google.cloud.spanner_v1._helpers import _retry +from google.cloud.spanner_v1._helpers import _retry_on_aborted_exception +from google.cloud.spanner_v1._helpers import _check_rst_stream_error +from google.api_core.exceptions import InternalServerError +from google.cloud.spanner_v1.metrics.metrics_capture import MetricsCapture +import time + +DEFAULT_RETRY_TIMEOUT_SECS = 30 class _BatchBase(_SessionWrapper): @@ -35,17 +50,13 @@ class _BatchBase(_SessionWrapper): def __init__(self, session): super(_BatchBase, self).__init__(session) - self._mutations = [] - - def _check_state(self): - """Helper for :meth:`commit` et al. - Subclasses must override + self._mutations: List[Mutation] = [] + self.transaction_tag: Optional[str] = None - :raises: :exc:`ValueError` if the object's state is invalid for making - API requests. 
- """ - raise NotImplementedError + self.committed = None + """Timestamp at which the batch was successfully committed.""" + self.commit_stats: Optional[CommitResponse.CommitStats] = None def insert(self, table, columns, values): """Insert one or more new table rows. @@ -60,6 +71,8 @@ def insert(self, table, columns, values): :param values: Values to be modified. """ self._mutations.append(Mutation(insert=_make_write_pb(table, columns, values))) + # TODO: Decide if we should add a span event per mutation: + # https://github.com/googleapis/python-spanner/issues/1269 def update(self, table, columns, values): """Update one or more existing table rows. @@ -74,6 +87,8 @@ def update(self, table, columns, values): :param values: Values to be modified. """ self._mutations.append(Mutation(update=_make_write_pb(table, columns, values))) + # TODO: Decide if we should add a span event per mutation: + # https://github.com/googleapis/python-spanner/issues/1269 def insert_or_update(self, table, columns, values): """Insert/update one or more table rows. @@ -90,6 +105,8 @@ def insert_or_update(self, table, columns, values): self._mutations.append( Mutation(insert_or_update=_make_write_pb(table, columns, values)) ) + # TODO: Decide if we should add a span event per mutation: + # https://github.com/googleapis/python-spanner/issues/1269 def replace(self, table, columns, values): """Replace one or more table rows. @@ -104,6 +121,8 @@ def replace(self, table, columns, values): :param values: Values to be modified. """ self._mutations.append(Mutation(replace=_make_write_pb(table, columns, values))) + # TODO: Decide if we should add a span event per mutation: + # https://github.com/googleapis/python-spanner/issues/1269 def delete(self, table, keyset): """Delete one or more table rows. @@ -116,49 +135,153 @@ def delete(self, table, keyset): """ delete = Mutation.Delete(table=table, key_set=keyset._to_pb()) self._mutations.append(Mutation(delete=delete)) + # TODO: Decide if we should add a span event per mutation: + # https://github.com/googleapis/python-spanner/issues/1269 class Batch(_BatchBase): - """Accumulate mutations for transmission during :meth:`commit`. - """ - - committed = None - """Timestamp at which the batch was successfully committed.""" + """Accumulate mutations for transmission during :meth:`commit`.""" + + def commit( + self, + return_commit_stats=False, + request_options=None, + max_commit_delay=None, + exclude_txn_from_change_streams=False, + isolation_level=TransactionOptions.IsolationLevel.ISOLATION_LEVEL_UNSPECIFIED, + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.READ_LOCK_MODE_UNSPECIFIED, + timeout_secs=DEFAULT_RETRY_TIMEOUT_SECS, + default_retry_delay=None, + ): + """Commit mutations to the database. - def _check_state(self): - """Helper for :meth:`commit` et al. + :type return_commit_stats: bool + :param return_commit_stats: + If true, the response will return commit stats which can be accessed though commit_stats. + + :type request_options: + :class:`google.cloud.spanner_v1.types.RequestOptions` + :param request_options: + (Optional) Common options for this request. + If a dict is provided, it must be of the same form as the protobuf + message :class:`~google.cloud.spanner_v1.types.RequestOptions`. + + :type max_commit_delay: :class:`datetime.timedelta` + :param max_commit_delay: + (Optional) The amount of latency this request is willing to incur + in order to improve throughput. 
+
+        :type exclude_txn_from_change_streams: bool
+        :param exclude_txn_from_change_streams:
+            (Optional) If true, instructs the transaction to be excluded from being recorded in change streams
+            with the DDL option `allow_txn_exclusion=true`. This does not exclude the transaction from
+            being recorded in the change streams with the DDL option `allow_txn_exclusion` being false or
+            unset.
+
+        :type isolation_level:
+            :class:`google.cloud.spanner_v1.types.TransactionOptions.IsolationLevel`
+        :param isolation_level:
+            (Optional) Sets the isolation level for the transaction.
+
+        :type read_lock_mode:
+            :class:`google.cloud.spanner_v1.types.TransactionOptions.ReadWrite.ReadLockMode`
+        :param read_lock_mode:
+            (Optional) Sets the read lock mode for this transaction.
+
+        :type timeout_secs: int
+        :param timeout_secs: (Optional) The maximum time in seconds to wait for the commit to complete.
+
+        :type default_retry_delay: int
+        :param default_retry_delay: (Optional) The default time in seconds to wait before retrying the commit.
 
-        Subclasses must override
+        :rtype: datetime
+        :returns: timestamp of the committed changes.
 
-        :raises: :exc:`ValueError` if the object's state is invalid for making
-            API requests.
+        :raises: ValueError: if the transaction is not ready to commit.
        """
-        if self.committed is not None:
-            raise ValueError("Batch already committed")
 
-    def commit(self):
-        """Commit mutations to the database.
+        if self.committed is not None:
+            raise ValueError("Transaction already committed.")
 
-        :rtype: datetime
-        :returns: timestamp of the committed changes.
-        """
-        self._check_state()
-        database = self._session._database
+        mutations = self._mutations
+        session = self._session
+        database = session._database
         api = database.spanner_api
+
         metadata = _metadata_with_prefix(database.name)
-        txn_options = TransactionOptions(read_write=TransactionOptions.ReadWrite())
-        response = api.commit(
-            self._session.name,
-            mutations=self._mutations,
-            single_use_transaction=txn_options,
-            metadata=metadata,
+        if database._route_to_leader_enabled:
+            metadata.append(
+                _metadata_with_leader_aware_routing(database._route_to_leader_enabled)
+            )
+        txn_options = TransactionOptions(
+            read_write=TransactionOptions.ReadWrite(
+                read_lock_mode=read_lock_mode,
+            ),
+            exclude_txn_from_change_streams=exclude_txn_from_change_streams,
+            isolation_level=isolation_level,
         )
-        self.committed = _pb_timestamp_to_datetime(response.commit_timestamp)
+
+        txn_options = _merge_Transaction_Options(
+            database.default_transaction_options.default_read_write_transaction_options,
+            txn_options,
+        )
+
+        if request_options is None:
+            request_options = RequestOptions()
+        elif type(request_options) is dict:
+            request_options = RequestOptions(request_options)
+        request_options.transaction_tag = self.transaction_tag
+
+        # Request tags are not supported for commit requests.
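+        # (The transaction tag assigned above is still sent; only the
+        # per-request tag is cleared before the commit RPC is built.)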
+ request_options.request_tag = None + + with trace_call( + name=f"CloudSpanner.{type(self).__name__}.commit", + session=session, + extra_attributes={"num_mutations": len(mutations)}, + observability_options=getattr(database, "observability_options", None), + metadata=metadata, + ) as span, MetricsCapture(): + + def wrapped_method(): + commit_request = CommitRequest( + session=session.name, + mutations=mutations, + single_use_transaction=txn_options, + return_commit_stats=return_commit_stats, + max_commit_delay=max_commit_delay, + request_options=request_options, + ) + commit_method = functools.partial( + api.commit, + request=commit_request, + metadata=database.metadata_with_request_id( + # This code is retried due to ABORTED, hence nth_request + # should be increased. attempt can only be increased if + # we encounter UNAVAILABLE or INTERNAL. + getattr(database, "_next_nth_request", 0), + 1, + metadata, + span, + ), + ) + return commit_method() + + response = _retry_on_aborted_exception( + wrapped_method, + deadline=time.time() + timeout_secs, + default_retry_delay=default_retry_delay, + ) + + self.committed = response.commit_timestamp + self.commit_stats = response.commit_stats + return self.committed def __enter__(self): """Begin ``with`` block.""" - self._check_state() + if self.committed is not None: + raise ValueError("Transaction already committed") return self @@ -168,8 +291,124 @@ def __exit__(self, exc_type, exc_val, exc_tb): self.commit() +class MutationGroup(_BatchBase): + """A container for mutations. + + Clients should use :class:`~google.cloud.spanner_v1.MutationGroups` to + obtain instances instead of directly creating instances. + + :type session: :class:`~google.cloud.spanner_v1.session.Session` + :param session: The session used to perform the commit. + + :type mutations: list + :param mutations: The list into which mutations are to be accumulated. + """ + + def __init__(self, session, mutations=[]): + super(MutationGroup, self).__init__(session) + self._mutations = mutations + + +class MutationGroups(_SessionWrapper): + """Accumulate mutation groups for transmission during :meth:`batch_write`. + + :type session: :class:`~google.cloud.spanner_v1.session.Session` + :param session: the session used to perform the commit + """ + + def __init__(self, session): + super(MutationGroups, self).__init__(session) + self._mutation_groups: List[MutationGroup] = [] + self.committed: bool = False + + def group(self): + """Returns a new `MutationGroup` to which mutations can be added.""" + mutation_group = BatchWriteRequest.MutationGroup() + self._mutation_groups.append(mutation_group) + return MutationGroup(self._session, mutation_group.mutations) + + def batch_write(self, request_options=None, exclude_txn_from_change_streams=False): + """Executes batch_write. + + :type request_options: + :class:`google.cloud.spanner_v1.types.RequestOptions` + :param request_options: + (Optional) Common options for this request. + If a dict is provided, it must be of the same form as the protobuf + message :class:`~google.cloud.spanner_v1.types.RequestOptions`. + + :type exclude_txn_from_change_streams: bool + :param exclude_txn_from_change_streams: + (Optional) If true, instructs the transaction to be excluded from being recorded in change streams + with the DDL option `allow_txn_exclusion=true`. This does not exclude the transaction from + being recorded in the change streams with the DDL option `allow_txn_exclusion` being false or + unset. 
+ + :rtype: :class:`Iterable[google.cloud.spanner_v1.types.BatchWriteResponse]` + :returns: a sequence of responses for each batch. + """ + + if self.committed: + raise ValueError("MutationGroups already committed") + + mutation_groups = self._mutation_groups + session = self._session + database = session._database + api = database.spanner_api + + metadata = _metadata_with_prefix(database.name) + if database._route_to_leader_enabled: + metadata.append( + _metadata_with_leader_aware_routing(database._route_to_leader_enabled) + ) + + if request_options is None: + request_options = RequestOptions() + elif type(request_options) is dict: + request_options = RequestOptions(request_options) + + with trace_call( + name="CloudSpanner.batch_write", + session=session, + extra_attributes={"num_mutation_groups": len(mutation_groups)}, + observability_options=getattr(database, "observability_options", None), + metadata=metadata, + ) as span, MetricsCapture(): + attempt = AtomicCounter(0) + nth_request = getattr(database, "_next_nth_request", 0) + + def wrapped_method(): + batch_write_request = BatchWriteRequest( + session=session.name, + mutation_groups=mutation_groups, + request_options=request_options, + exclude_txn_from_change_streams=exclude_txn_from_change_streams, + ) + batch_write_method = functools.partial( + api.batch_write, + request=batch_write_request, + metadata=database.metadata_with_request_id( + nth_request, + attempt.increment(), + metadata, + span, + ), + ) + return batch_write_method() + + response = _retry( + wrapped_method, + allowed_exceptions={ + InternalServerError: _check_rst_stream_error, + }, + ) + + self.committed = True + return response + + def _make_write_pb(table, columns, values): - """Helper for :meth:`Batch.insert` et aliae. + """Helper for :meth:`Batch.insert` et al. :type table: str :param table: Name of the table to be modified. @@ -180,7 +419,7 @@ def _make_write_pb(table, columns, values): :type values: list of lists :param values: Values to be modified. 
- :rtype: :class:`google.cloud.spanner_v1.proto.mutation_pb2.Mutation.Write` + :rtype: :class:`google.cloud.spanner_v1.types.Mutation.Write` :returns: Write protobuf """ return Mutation.Write( diff --git a/google/cloud/spanner_v1/client.py b/google/cloud/spanner_v1/client.py index 29f3fcf69a..e0e8c44058 100644 --- a/google/cloud/spanner_v1/client.py +++ b/google/cloud/spanner_v1/client.py @@ -28,35 +28,53 @@ import warnings from google.api_core.gapic_v1 import client_info +from google.auth.credentials import AnonymousCredentials import google.api_core.client_options +from google.cloud.client import ClientWithProject +from typing import Optional -from google.cloud.spanner_admin_instance_v1.gapic.transports import ( - instance_admin_grpc_transport, -) -from google.cloud.spanner_admin_database_v1.gapic.transports import ( - database_admin_grpc_transport, +from google.cloud.spanner_admin_database_v1 import DatabaseAdminClient +from google.cloud.spanner_admin_database_v1.services.database_admin.transports.grpc import ( + DatabaseAdminGrpcTransport, ) - -# pylint: disable=line-too-long -from google.cloud.spanner_admin_database_v1.gapic.database_admin_client import ( # noqa - DatabaseAdminClient, +from google.cloud.spanner_admin_instance_v1 import InstanceAdminClient +from google.cloud.spanner_admin_instance_v1.services.instance_admin.transports.grpc import ( + InstanceAdminGrpcTransport, +) +from google.cloud.spanner_admin_instance_v1 import ListInstanceConfigsRequest +from google.cloud.spanner_admin_instance_v1 import ListInstancesRequest +from google.cloud.spanner_v1 import __version__ +from google.cloud.spanner_v1 import ExecuteSqlRequest +from google.cloud.spanner_v1 import DefaultTransactionOptions +from google.cloud.spanner_v1._helpers import _merge_query_options +from google.cloud.spanner_v1._helpers import _metadata_with_prefix +from google.cloud.spanner_v1.instance import Instance +from google.cloud.spanner_v1.metrics.constants import ( + ENABLE_SPANNER_METRICS_ENV_VAR, + METRIC_EXPORT_INTERVAL_MS, +) +from google.cloud.spanner_v1.metrics.spanner_metrics_tracer_factory import ( + SpannerMetricsTracerFactory, ) -from google.cloud.spanner_admin_instance_v1.gapic.instance_admin_client import ( # noqa - InstanceAdminClient, +from google.cloud.spanner_v1.metrics.metrics_exporter import ( + CloudMonitoringMetricsExporter, ) -# pylint: enable=line-too-long +try: + from opentelemetry import metrics + from opentelemetry.sdk.metrics import MeterProvider + from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader -from google.cloud.client import ClientWithProject -from google.cloud.spanner_v1 import __version__ -from google.cloud.spanner_v1._helpers import _merge_query_options, _metadata_with_prefix -from google.cloud.spanner_v1.instance import DEFAULT_NODE_COUNT -from google.cloud.spanner_v1.instance import Instance -from google.cloud.spanner_v1.proto.spanner_pb2 import ExecuteSqlRequest + HAS_GOOGLE_CLOUD_MONITORING_INSTALLED = True +except ImportError: # pragma: NO COVER + HAS_GOOGLE_CLOUD_MONITORING_INSTALLED = False + +from google.cloud.spanner_v1._helpers import AtomicCounter _CLIENT_INFO = client_info.ClientInfo(client_library_version=__version__) EMULATOR_ENV_VAR = "SPANNER_EMULATOR_HOST" +ENABLE_BUILTIN_METRICS_ENV_VAR = "SPANNER_ENABLE_BUILTIN_METRICS" _EMULATOR_HOST_HTTP_SCHEME = ( "%s contains a http scheme. When used with a scheme it may cause gRPC's " "DNS resolver to endlessly attempt to resolve. 
%s is intended to be used " @@ -64,10 +82,7 @@ ) % ((EMULATOR_ENV_VAR,) * 3) SPANNER_ADMIN_SCOPE = "https://www.googleapis.com/auth/spanner.admin" OPTIMIZER_VERSION_ENV_VAR = "SPANNER_OPTIMIZER_VERSION" -_USER_AGENT_DEPRECATED = ( - "The 'user_agent' argument to 'Client' is deprecated / unused. " - "Please pass an appropriate 'client_info' instead." -) +OPTIMIZER_STATISITCS_PACKAGE_ENV_VAR = "SPANNER_OPTIMIZER_STATISTICS_PACKAGE" def _get_spanner_emulator_host(): @@ -78,32 +93,12 @@ def _get_spanner_optimizer_version(): return os.getenv(OPTIMIZER_VERSION_ENV_VAR, "") -class InstanceConfig(object): - """Named configurations for Spanner instances. - - :type name: str - :param name: ID of the instance configuration - - :type display_name: str - :param display_name: Name of the instance configuration - """ - - def __init__(self, name, display_name): - self.name = name - self.display_name = display_name - - @classmethod - def from_pb(cls, config_pb): - """Construct an instance from the equvalent protobuf. +def _get_spanner_optimizer_statistics_package(): + return os.getenv(OPTIMIZER_STATISITCS_PACKAGE_ENV_VAR, "") - :type config_pb: - :class:`~google.spanner.v1.spanner_instance_admin_pb2.InstanceConfig` - :param config_pb: the protobuf to parse - :rtype: :class:`InstanceConfig` - :returns: an instance of this class - """ - return cls(config_pb.name, config_pb.display_name) +def _get_spanner_enable_builtin_metrics(): + return os.getenv(ENABLE_SPANNER_METRICS_ENV_VAR) == "true" class Client(ClientWithProject): @@ -134,76 +129,149 @@ class Client(ClientWithProject): you only need to set this if you're developing your own library or partner tool. - :type user_agent: str - :param user_agent: - (Deprecated) The user agent to be used with API request. - Not used. - :type client_options: :class:`~google.api_core.client_options.ClientOptions` or :class:`dict` :param client_options: (Optional) Client options used to set user options on the client. API Endpoint should be set through client_options. :type query_options: - :class:`~google.cloud.spanner_v1.proto.ExecuteSqlRequest.QueryOptions` + :class:`~google.cloud.spanner_v1.types.ExecuteSqlRequest.QueryOptions` or :class:`dict` :param query_options: (Optional) Query optimizer configuration to use for the given query. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.spanner_v1.types.QueryOptions` + :type route_to_leader_enabled: boolean + :param route_to_leader_enabled: + (Optional) Default True. Set route_to_leader_enabled as False to + disable leader aware routing. Disabling leader aware routing would + route all requests in RW/PDML transactions to the closest region. + + :type directed_read_options: :class:`~google.cloud.spanner_v1.DirectedReadOptions` + or :class:`dict` + :param directed_read_options: (Optional) Client options used to set the directed_read_options + for all ReadRequests and ExecuteSqlRequests that indicates which replicas + or regions should be used for non-transactional reads or queries. + + :type observability_options: dict (str -> any) or None + :param observability_options: (Optional) the configuration to control + the tracer's behavior. + tracer_provider is the injected tracer provider + enable_extended_tracing: :type:boolean when set to true will allow for + spans that issue SQL statements to be annotated with SQL. + Default `True`, please set it to `False` to turn it off + or you can use the environment variable `SPANNER_ENABLE_EXTENDED_TRACING=` + to control it. 
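A hedged construction sketch for the observability options documented here; the dict keys mirror the docstring, and `my-project` is a placeholder:

.. code-block:: python

    from opentelemetry.sdk.trace import TracerProvider

    from google.cloud import spanner

    observability_options = dict(
        tracer_provider=TracerProvider(),
        enable_extended_tracing=True,  # annotate spans with the SQL they execute
    )
    client = spanner.Client(
        "my-project", observability_options=observability_options
    )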
+ enable_end_to_end_tracing: :type:boolean when set to true will allow for spans from Spanner server side. + Default `False`, please set it to `True` to turn it on + or you can use the environment variable `SPANNER_ENABLE_END_TO_END_TRACING=` + to control it. + + :type default_transaction_options: :class:`~google.cloud.spanner_v1.DefaultTransactionOptions` + or :class:`dict` + :param default_transaction_options: (Optional) Default options to use for all transactions. + :raises: :class:`ValueError ` if both ``read_only`` and ``admin`` are :data:`True` """ _instance_admin_api = None _database_admin_api = None - _endpoint_cache = {} - user_agent = None _SET_PROJECT = True # Used by from_service_account_json() SCOPE = (SPANNER_ADMIN_SCOPE,) """The scopes required for Google Cloud Spanner.""" + NTH_CLIENT = AtomicCounter() + def __init__( self, project=None, credentials=None, client_info=_CLIENT_INFO, - user_agent=None, client_options=None, query_options=None, + route_to_leader_enabled=True, + directed_read_options=None, + observability_options=None, + default_transaction_options: Optional[DefaultTransactionOptions] = None, ): + self._emulator_host = _get_spanner_emulator_host() + + if client_options and type(client_options) is dict: + self._client_options = google.api_core.client_options.from_dict( + client_options + ) + else: + self._client_options = client_options + + if self._emulator_host: + credentials = AnonymousCredentials() + elif isinstance(credentials, AnonymousCredentials): + self._emulator_host = self._client_options.api_endpoint + # NOTE: This API has no use for the _http argument, but sending it # will have no impact since the _http() @property only lazily # creates a working HTTP object. super(Client, self).__init__( - project=project, credentials=credentials, _http=None + project=project, + credentials=credentials, + client_options=client_options, + _http=None, ) self._client_info = client_info - if client_options and type(client_options) == dict: - self._client_options = google.api_core.client_options.from_dict( - client_options - ) - else: - self._client_options = client_options env_query_options = ExecuteSqlRequest.QueryOptions( - optimizer_version=_get_spanner_optimizer_version() + optimizer_version=_get_spanner_optimizer_version(), + optimizer_statistics_package=_get_spanner_optimizer_statistics_package(), ) # Environment flag config has higher precedence than application config. 
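The precedence rule in the comment above can be exercised end to end: environment flags win over options passed by the application. A small sketch (the statistics package name is illustrative only):

.. code-block:: python

    import os

    # Read by _get_spanner_optimizer_version() and
    # _get_spanner_optimizer_statistics_package() at Client construction time.
    os.environ["SPANNER_OPTIMIZER_VERSION"] = "latest"
    os.environ["SPANNER_OPTIMIZER_STATISTICS_PACKAGE"] = "auto_20191128_14_47_22UTC"

    from google.cloud import spanner
    from google.cloud.spanner_v1 import ExecuteSqlRequest

    # The environment values override this application-supplied default.
    client = spanner.Client(
        "my-project",
        query_options=ExecuteSqlRequest.QueryOptions(optimizer_version="1"),
    )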
self._query_options = _merge_query_options(query_options, env_query_options) - if user_agent is not None: - warnings.warn(_USER_AGENT_DEPRECATED, DeprecationWarning, stacklevel=2) - self.user_agent = user_agent - - if _get_spanner_emulator_host() is not None and ( - "http://" in _get_spanner_emulator_host() - or "https://" in _get_spanner_emulator_host() + if self._emulator_host is not None and ( + "http://" in self._emulator_host or "https://" in self._emulator_host ): warnings.warn(_EMULATOR_HOST_HTTP_SCHEME) + # Check flag to enable Spanner builtin metrics + if ( + _get_spanner_enable_builtin_metrics() + and HAS_GOOGLE_CLOUD_MONITORING_INSTALLED + ): + meter_provider = metrics.NoOpMeterProvider() + if not _get_spanner_emulator_host(): + meter_provider = MeterProvider( + metric_readers=[ + PeriodicExportingMetricReader( + CloudMonitoringMetricsExporter( + project_id=project, credentials=credentials + ), + export_interval_millis=METRIC_EXPORT_INTERVAL_MS, + ) + ] + ) + metrics.set_meter_provider(meter_provider) + SpannerMetricsTracerFactory() + else: + SpannerMetricsTracerFactory(enabled=False) + + self._route_to_leader_enabled = route_to_leader_enabled + self._directed_read_options = directed_read_options + self._observability_options = observability_options + if default_transaction_options is None: + default_transaction_options = DefaultTransactionOptions() + elif not isinstance(default_transaction_options, DefaultTransactionOptions): + raise TypeError( + "default_transaction_options must be an instance of DefaultTransactionOptions" + ) + self._default_transaction_options = default_transaction_options + self._nth_client_id = Client.NTH_CLIENT.increment() + self._nth_request = AtomicCounter(0) + + @property + def _next_nth_request(self): + return self._nth_request.increment() @property def credentials(self): @@ -238,9 +306,9 @@ def project_name(self): def instance_admin_api(self): """Helper for session-related API calls.""" if self._instance_admin_api is None: - if _get_spanner_emulator_host() is not None: - transport = instance_admin_grpc_transport.InstanceAdminGrpcTransport( - channel=grpc.insecure_channel(_get_spanner_emulator_host()) + if self._emulator_host is not None: + transport = InstanceAdminGrpcTransport( + channel=grpc.insecure_channel(target=self._emulator_host) ) self._instance_admin_api = InstanceAdminClient( client_info=self._client_info, @@ -259,9 +327,9 @@ def instance_admin_api(self): def database_admin_api(self): """Helper for session-related API calls.""" if self._database_admin_api is None: - if _get_spanner_emulator_host() is not None: - transport = database_admin_grpc_transport.DatabaseAdminGrpcTransport( - channel=grpc.insecure_channel(_get_spanner_emulator_host()) + if self._emulator_host is not None: + transport = DatabaseAdminGrpcTransport( + channel=grpc.insecure_channel(target=self._emulator_host) ) self._database_admin_api = DatabaseAdminClient( client_info=self._client_info, @@ -276,6 +344,46 @@ def database_admin_api(self): ) return self._database_admin_api + @property + def route_to_leader_enabled(self): + """Getter for if read-write or pdml requests will be routed to leader. + + :rtype: boolean + :returns: If read-write requests will be routed to leader. + """ + return self._route_to_leader_enabled + + @property + def observability_options(self): + """Getter for observability_options. + + :rtype: dict + :returns: The configured observability_options if set. 
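For the built-in metrics wiring above, a hedged enablement sketch: the flag is read once, at construction time, and against the emulator a no-op meter provider is installed so nothing is exported. Assumes `google-cloud-monitoring` and the OpenTelemetry metrics SDK are installed:

.. code-block:: python

    import os

    os.environ["SPANNER_ENABLE_BUILTIN_METRICS"] = "true"

    from google.cloud import spanner

    # Registers a MeterProvider with a PeriodicExportingMetricReader that
    # flushes through CloudMonitoringMetricsExporter every
    # METRIC_EXPORT_INTERVAL_MS milliseconds.
    client = spanner.Client(project="my-project")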
+ """ + return self._observability_options + + @property + def default_transaction_options(self): + """Getter for default_transaction_options. + + :rtype: + :class:`~google.cloud.spanner_v1.DefaultTransactionOptions` + or :class:`dict` + :returns: The default transaction options that are used by this client for all transactions. + """ + return self._default_transaction_options + + @property + def directed_read_options(self): + """Getter for directed_read_options. + + :rtype: + :class:`~google.cloud.spanner_v1.DirectedReadOptions` + or :class:`dict` + :returns: The directed_read_options for the client. + """ + return self._directed_read_options + def copy(self): """Make a copy of this client. @@ -287,7 +395,7 @@ def copy(self): """ return self.__class__(project=self.project, credentials=self._credentials) - def list_instance_configs(self, page_size=None, page_token=None): + def list_instance_configs(self, page_size=None): """List available instance configurations for the client's project. .. _RPC docs: https://cloud.google.com/spanner/docs/reference/rpc/\ @@ -302,27 +410,19 @@ def list_instance_configs(self, page_size=None, page_token=None): from this request. Non-positive values are ignored. Defaults to a sensible value set by the API. - :type page_token: str - :param page_token: - Optional. If present, return the next batch of configs, using - the value, which must correspond to the ``nextPageToken`` value - returned in the previous response. Deprecated: use the ``pages`` - property of the returned iterator instead of manually passing - the token. - :rtype: :class:`~google.api_core.page_iterator.Iterator` :returns: Iterator of - :class:`~google.cloud.spanner_v1.instance.InstanceConfig` + :class:`~google.cloud.spanner_admin_instance_v1.types.InstanceConfig` resources within the client's project. """ metadata = _metadata_with_prefix(self.project_name) - path = "projects/%s" % (self.project,) + request = ListInstanceConfigsRequest( + parent=self.project_name, page_size=page_size + ) page_iter = self.instance_admin_api.list_instance_configs( - path, page_size=page_size, metadata=metadata + request=request, metadata=metadata ) - page_iter.next_page_token = page_token - page_iter.item_to_value = _item_to_instance_config return page_iter def instance( @@ -330,7 +430,9 @@ def instance( instance_id, configuration_name=None, display_name=None, - node_count=DEFAULT_NODE_COUNT, + node_count=None, + labels=None, + processing_units=None, ): """Factory to create a instance associated with this client. @@ -355,6 +457,13 @@ def instance( :param node_count: (Optional) The number of nodes in the instance's cluster; used to set up the instance's cluster. + :type processing_units: int + :param processing_units: (Optional) The number of processing units + allocated to this instance. + + :type labels: dict (str -> str) or None + :param labels: (Optional) User-assigned labels for this instance. + :rtype: :class:`~google.cloud.spanner_v1.instance.Instance` :returns: an instance owned by this client. """ @@ -364,10 +473,12 @@ def instance( configuration_name, node_count, display_name, - _get_spanner_emulator_host(), + self._emulator_host, + labels, + processing_units, ) - def list_instances(self, filter_="", page_size=None, page_token=None): + def list_instances(self, filter_="", page_size=None): """List instances for the client's project. See @@ -383,54 +494,45 @@ def list_instances(self, filter_="", page_size=None, page_token=None): from this request. Non-positive values are ignored. 
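A quick iteration sketch for the rewritten `list_instance_configs`, which now returns the GAPIC pager directly, yielding admin protobufs instead of client-side wrappers:

.. code-block:: python

    for config in client.list_instance_configs(page_size=10):
        print(config.name, config.display_name)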
Defaults to a sensible value set by the API. - :type page_token: str - :param page_token: - Optional. If present, return the next batch of instances, using - the value, which must correspond to the ``nextPageToken`` value - returned in the previous response. Deprecated: use the ``pages`` - property of the returned iterator instead of manually passing - the token. - :rtype: :class:`~google.api_core.page_iterator.Iterator` :returns: - Iterator of :class:`~google.cloud.spanner_v1.instance.Instance` + Iterator of :class:`~google.cloud.spanner_admin_instance_v1.types.Instance` resources within the client's project. """ metadata = _metadata_with_prefix(self.project_name) - path = "projects/%s" % (self.project,) + request = ListInstancesRequest( + parent=self.project_name, filter=filter_, page_size=page_size + ) page_iter = self.instance_admin_api.list_instances( - path, page_size=page_size, metadata=metadata + request=request, metadata=metadata ) - page_iter.item_to_value = self._item_to_instance - page_iter.next_page_token = page_token return page_iter - def _item_to_instance(self, iterator, instance_pb): - """Convert an instance protobuf to the native object. - - :type iterator: :class:`~google.api_core.page_iterator.Iterator` - :param iterator: The iterator that is currently in use. - - :type instance_pb: :class:`~google.spanner.admin.instance.v1.Instance` - :param instance_pb: An instance returned from the API. - - :rtype: :class:`~google.cloud.spanner_v1.instance.Instance` - :returns: The next instance in the page. + @directed_read_options.setter + def directed_read_options(self, directed_read_options): + """Sets directed_read_options for the client + :type directed_read_options: :class:`~google.cloud.spanner_v1.DirectedReadOptions` + or :class:`dict` + :param directed_read_options: Client options used to set the directed_read_options + for all ReadRequests and ExecuteSqlRequests that indicates which replicas + or regions should be used for non-transactional reads or queries. """ - return Instance.from_pb(instance_pb, self) - - -def _item_to_instance_config(iterator, config_pb): # pylint: disable=unused-argument - """Convert an instance config protobuf to the native object. - - :type iterator: :class:`~google.api_core.page_iterator.Iterator` - :param iterator: The iterator that is currently in use. + self._directed_read_options = directed_read_options - :type config_pb: - :class:`~google.spanner.admin.instance.v1.InstanceConfig` - :param config_pb: An instance config returned from the API. + @default_transaction_options.setter + def default_transaction_options( + self, default_transaction_options: DefaultTransactionOptions + ): + """Sets default_transaction_options for the client + :type default_transaction_options: :class:`~google.cloud.spanner_v1.DefaultTransactionOptions` + or :class:`dict` + :param default_transaction_options: Default options to use for transactions. + """ + if default_transaction_options is None: + default_transaction_options = DefaultTransactionOptions() + elif not isinstance(default_transaction_options, DefaultTransactionOptions): + raise TypeError( + "default_transaction_options must be an instance of DefaultTransactionOptions" + ) - :rtype: :class:`~google.cloud.spanner_v1.instance.InstanceConfig` - :returns: The next instance config in the page. 
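For the extended `instance()` factory above, a hedged sketch; all identifiers are placeholders, and, as I understand the service, an instance is sized with either `node_count` or `processing_units`, not both:

.. code-block:: python

    instance = client.instance(
        "my-instance",
        configuration_name="projects/my-project/instanceConfigs/regional-us-central1",
        display_name="My Instance",
        processing_units=500,
        labels={"env": "dev"},
    )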
- """ - return InstanceConfig.from_pb(config_pb) + self._default_transaction_options = default_transaction_options diff --git a/google/cloud/spanner_v1/data_types.py b/google/cloud/spanner_v1/data_types.py new file mode 100644 index 0000000000..6703f359e9 --- /dev/null +++ b/google/cloud/spanner_v1/data_types.py @@ -0,0 +1,350 @@ +# Copyright 2021 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Custom data types for spanner.""" + +import json +import types +import re +from dataclasses import dataclass +from google.protobuf.message import Message +from google.protobuf.internal.enum_type_wrapper import EnumTypeWrapper + + +class JsonObject(dict): + """ + Provides functionality of JSON data type in Cloud Spanner + API, mimicking simple `dict()` behaviour and making + all the necessary conversions under the hood. + """ + + def __init__(self, *args, **kwargs): + self._is_null = (args, kwargs) == ((), {}) or args == (None,) + self._is_array = len(args) and isinstance(args[0], (list, tuple)) + self._is_scalar_value = len(args) == 1 and not isinstance(args[0], (list, dict)) + + # if the JSON object is represented with an array, + # the value is contained separately + if self._is_array: + self._array_value = args[0] + return + + # If it's a scalar value, set _simple_value and return early + if self._is_scalar_value: + self._simple_value = args[0] + return + + if len(args) and isinstance(args[0], JsonObject): + self._is_array = args[0]._is_array + self._is_scalar_value = args[0]._is_scalar_value + if self._is_array: + self._array_value = args[0]._array_value + elif self._is_scalar_value: + self._simple_value = args[0]._simple_value + + if not self._is_null: + super(JsonObject, self).__init__(*args, **kwargs) + + def __repr__(self): + if self._is_array: + return str(self._array_value) + + if self._is_scalar_value: + return str(self._simple_value) + + return super(JsonObject, self).__repr__() + + @classmethod + def from_str(cls, str_repr): + """Initiate an object from its `str` representation. + + Args: + str_repr (str): JSON text representation. + + Returns: + JsonObject: JSON object. + """ + if str_repr == "null": + return cls() + + return cls(json.loads(str_repr)) + + def serialize(self): + """Return the object text representation. + + Returns: + str: JSON object text representation. + """ + if self._is_null: + return None + + if self._is_scalar_value: + return json.dumps(self._simple_value) + + if self._is_array: + return json.dumps(self._array_value, sort_keys=True, separators=(",", ":")) + + return json.dumps(self, sort_keys=True, separators=(",", ":")) + + +@dataclass +class Interval: + """Represents a Spanner INTERVAL type. + + An interval is a combination of months, days and nanoseconds. 
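A behavior sketch for the `JsonObject` class introduced above, covering the dict, array, and null shapes its constructor distinguishes (outputs follow from the logic as written):

.. code-block:: python

    from google.cloud.spanner_v1.data_types import JsonObject

    JsonObject({"b": 1, "a": 2}).serialize()  # '{"a":2,"b":1}' (keys sorted)
    JsonObject([1, 2, 3]).serialize()         # '[1,2,3]'
    JsonObject.from_str("null").serialize()   # None (the JSON null case)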
+ Internally, Spanner supports Interval value with the following range of individual fields: + months: [-120000, 120000] + days: [-3660000, 3660000] + nanoseconds: [-316224000000000000000, 316224000000000000000] + """ + + months: int = 0 + days: int = 0 + nanos: int = 0 + + def __str__(self) -> str: + """Returns the ISO8601 duration format string representation.""" + result = ["P"] + + # Handle years and months + if self.months: + is_negative = self.months < 0 + abs_months = abs(self.months) + years, months = divmod(abs_months, 12) + if years: + result.append(f"{'-' if is_negative else ''}{years}Y") + if months: + result.append(f"{'-' if is_negative else ''}{months}M") + + # Handle days + if self.days: + result.append(f"{self.days}D") + + # Handle time components + if self.nanos: + result.append("T") + nanos = abs(self.nanos) + is_negative = self.nanos < 0 + + # Convert to hours, minutes, seconds + nanos_per_hour = 3600000000000 + hours, nanos = divmod(nanos, nanos_per_hour) + if hours: + if is_negative: + result.append("-") + result.append(f"{hours}H") + + nanos_per_minute = 60000000000 + minutes, nanos = divmod(nanos, nanos_per_minute) + if minutes: + if is_negative: + result.append("-") + result.append(f"{minutes}M") + + nanos_per_second = 1000000000 + seconds, nanos_fraction = divmod(nanos, nanos_per_second) + + if seconds or nanos_fraction: + if is_negative: + result.append("-") + if seconds: + result.append(str(seconds)) + elif nanos_fraction: + result.append("0") + + if nanos_fraction: + nano_str = f"{nanos_fraction:09d}" + trimmed = nano_str.rstrip("0") + if len(trimmed) <= 3: + while len(trimmed) < 3: + trimmed += "0" + elif len(trimmed) <= 6: + while len(trimmed) < 6: + trimmed += "0" + else: + while len(trimmed) < 9: + trimmed += "0" + result.append(f".{trimmed}") + result.append("S") + + if len(result) == 1: + result.append("0Y") # Special case for zero interval + + return "".join(result) + + @classmethod + def from_str(cls, s: str) -> "Interval": + """Parse an ISO8601 duration format string into an Interval.""" + pattern = r"^P(-?\d+Y)?(-?\d+M)?(-?\d+D)?(T(-?\d+H)?(-?\d+M)?(-?((\d+([.,]\d{1,9})?)|([.,]\d{1,9}))S)?)?$" + match = re.match(pattern, s) + if not match or len(s) == 1: + raise ValueError(f"Invalid interval format: {s}") + + parts = match.groups() + if not any(parts[:3]) and not parts[3]: + raise ValueError( + f"Invalid interval format: at least one component (Y/M/D/H/M/S) is required: {s}" + ) + + if parts[3] == "T" and not any(parts[4:7]): + raise ValueError( + f"Invalid interval format: time designator 'T' present but no time components specified: {s}" + ) + + def parse_num(s: str, suffix: str) -> int: + if not s: + return 0 + return int(s.rstrip(suffix)) + + years = parse_num(parts[0], "Y") + months = parse_num(parts[1], "M") + total_months = years * 12 + months + + days = parse_num(parts[2], "D") + + nanos = 0 + if parts[3]: # Has time component + # Convert hours to nanoseconds + hours = parse_num(parts[4], "H") + nanos += hours * 3600000000000 + + # Convert minutes to nanoseconds + minutes = parse_num(parts[5], "M") + nanos += minutes * 60000000000 + + # Handle seconds and fractional seconds + if parts[6]: + seconds = parts[6].rstrip("S") + if "," in seconds: + seconds = seconds.replace(",", ".") + + if "." 
in seconds:
+                    sec_parts = seconds.split(".")
+                    whole_seconds = sec_parts[0] if sec_parts[0] else "0"
+                    nanos += int(whole_seconds) * 1000000000
+                    frac = sec_parts[1][:9].ljust(9, "0")
+                    frac_nanos = int(frac)
+                    if seconds.startswith("-"):
+                        frac_nanos = -frac_nanos
+                    nanos += frac_nanos
+                else:
+                    nanos += int(seconds) * 1000000000
+
+        return cls(months=total_months, days=days, nanos=nanos)
+
+
+def _proto_message(bytes_val, proto_message_object):
+    """Helper for :func:`get_proto_message`.
+    Parses serialized protocol buffer bytes data into a proto message.
+
+    Args:
+        bytes_val (bytes): bytes object.
+        proto_message_object (Message): Message object for parsing
+
+    Returns:
+        Message: the proto message parsed from the serialized data.
+
+    Raises:
+        ValueError: if the input bytes_val is not of type bytes
+    """
+    if isinstance(bytes_val, types.NoneType):
+        return None
+
+    if not isinstance(bytes_val, bytes):
+        raise ValueError("Expected input bytes_val to be bytes")
+
+    proto_message = proto_message_object.__deepcopy__()
+    proto_message.ParseFromString(bytes_val)
+    return proto_message
+
+
+def _proto_enum(int_val, proto_enum_object):
+    """Helper for :func:`get_proto_enum`.
+    Parses an int value into the name of the corresponding enum value.
+
+    Args:
+        int_val (int): integer value.
+        proto_enum_object (EnumTypeWrapper): Enum object.
+
+    Returns:
+        str: string containing the name of an enum value.
+
+    Raises:
+        ValueError: if the input int_val is not an integer
+    """
+    if isinstance(int_val, types.NoneType):
+        return None
+
+    if not isinstance(int_val, int):
+        raise ValueError("Expected input int_val to be an integer")
+
+    return proto_enum_object.Name(int_val)
+
+
+def get_proto_message(bytes_string, proto_message_object):
+    """Parses serialized protocol buffer bytes data, or a list of such data,
+    into a proto message or a list of proto messages.
+
+    Args:
+        bytes_string (bytes or list[bytes]): bytes object or list of bytes objects.
+        proto_message_object (Message): Message object for parsing
+
+    Returns:
+        Message or list[Message]: the parsed proto message, or a list of parsed messages.
+
+    Raises:
+        ValueError: if the input proto_message_object is not of type Message,
+            or bytes_string is not bytes or a list of bytes
+    """
+    if isinstance(bytes_string, types.NoneType):
+        return None
+
+    if not isinstance(proto_message_object, Message):
+        raise ValueError("Input proto_message_object should be of type Message")
+
+    if not isinstance(bytes_string, (bytes, list)):
+        raise ValueError(
+            "Expected input bytes_string to be bytes or a list of bytes"
+        )
+
+    if isinstance(bytes_string, list):
+        return [_proto_message(item, proto_message_object) for item in bytes_string]
+
+    return _proto_message(bytes_string, proto_message_object)
+
+
+def get_proto_enum(int_value, proto_enum_object):
+    """Parses an int value, or a list of int values, into the corresponding
+    enum value name or list of enum value names.
+
+    Args:
+        int_value (int or list[int]): integer value or list of integer values.
+        proto_enum_object (EnumTypeWrapper): Enum object.
+
+    Returns:
+        str or list[str]: the name of the enum value, or a list of such names.
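A round-trip sketch for `Interval`, with values chosen to exercise the year/month split and the fractional-second handling (the arithmetic follows the code above), plus the enum helper driven by a well-known enum from `googleapis-common-protos`:

.. code-block:: python

    from google.rpc.code_pb2 import Code

    from google.cloud.spanner_v1.data_types import Interval, get_proto_enum

    iv = Interval(months=14, days=3, nanos=43926789000123)
    str(iv)                            # 'P1Y2M3DT12H12M6.789000123S'
    Interval.from_str(str(iv)) == iv   # True (dataclass equality)

    get_proto_enum(5, Code)            # 'NOT_FOUND'
    get_proto_enum([0, 5], Code)       # ['OK', 'NOT_FOUND']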
+
+    Raises:
+        ValueError: if the input proto_enum_object is not of type
+            EnumTypeWrapper, or int_value is not an integer or a list of integers
+    """
+    if isinstance(int_value, types.NoneType):
+        return None
+
+    if not isinstance(proto_enum_object, EnumTypeWrapper):
+        raise ValueError("Input proto_enum_object should be of type EnumTypeWrapper")
+
+    if not isinstance(int_value, (int, list)):
+        raise ValueError("Expected input int_value to be an integer or list of integers")
+
+    if isinstance(int_value, list):
+        return [_proto_enum(item, proto_enum_object) for item in int_value]
+
+    return _proto_enum(int_value, proto_enum_object)
diff --git a/google/cloud/spanner_v1/database.py b/google/cloud/spanner_v1/database.py
index 99b7244f9d..215cd5bed8 100644
--- a/google/cloud/spanner_v1/database.py
+++ b/google/cloud/spanner_v1/database.py
@@ -12,48 +12,73 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-"""User friendly container for Cloud Spanner Database."""
+"""User-friendly container for Cloud Spanner Database."""
 
 import copy
 import functools
+from typing import Optional
+
 import grpc
-import os
+import logging
 import re
 import threading
-import warnings
 
-from google.api_core.client_options import ClientOptions
 import google.auth.credentials
-from google.protobuf.struct_pb2 import Struct
+from google.api_core.retry import Retry
+from google.api_core.retry import if_exception_type
 
 from google.cloud.exceptions import NotFound
-from google.api_core.exceptions import PermissionDenied
-import six
-
-# pylint: disable=ungrouped-imports
-from google.cloud.spanner_admin_database_v1.gapic import enums
+from google.api_core.exceptions import Aborted
+from google.api_core import gapic_v1
+from google.iam.v1 import iam_policy_pb2
+from google.iam.v1 import options_pb2
+from google.protobuf.field_mask_pb2 import FieldMask
+
+from google.cloud.spanner_admin_database_v1 import CreateDatabaseRequest
+from google.cloud.spanner_admin_database_v1 import Database as DatabasePB
+from google.cloud.spanner_admin_database_v1 import ListDatabaseRolesRequest
+from google.cloud.spanner_admin_database_v1 import EncryptionConfig
+from google.cloud.spanner_admin_database_v1 import RestoreDatabaseEncryptionConfig
+from google.cloud.spanner_admin_database_v1 import RestoreDatabaseRequest
+from google.cloud.spanner_admin_database_v1 import UpdateDatabaseDdlRequest
+from google.cloud.spanner_admin_database_v1.types import DatabaseDialect
+from google.cloud.spanner_v1.transaction import BatchTransactionId
+from google.cloud.spanner_v1 import ExecuteSqlRequest
+from google.cloud.spanner_v1 import Type
+from google.cloud.spanner_v1 import TypeCode
+from google.cloud.spanner_v1 import TransactionSelector
+from google.cloud.spanner_v1 import TransactionOptions
+from google.cloud.spanner_v1 import DefaultTransactionOptions
+from google.cloud.spanner_v1 import RequestOptions
+from google.cloud.spanner_v1 import SpannerClient
+from google.cloud.spanner_v1._helpers import _merge_query_options
 from google.cloud.spanner_v1._helpers import (
-    _make_value_pb,
-    _merge_query_options,
     _metadata_with_prefix,
+    _metadata_with_leader_aware_routing,
+    _metadata_with_request_id,
 )
-from google.cloud.spanner_v1.backup import BackupInfo
 from google.cloud.spanner_v1.batch import Batch
-from google.cloud.spanner_v1.gapic.spanner_client import SpannerClient
-from google.cloud.spanner_v1.gapic.transports import spanner_grpc_transport
+from google.cloud.spanner_v1.batch import MutationGroups
 from google.cloud.spanner_v1.keyset import KeySet
+from 
google.cloud.spanner_v1.merged_result_set import MergedResultSet from google.cloud.spanner_v1.pool import BurstyPool -from google.cloud.spanner_v1.pool import SessionCheckout from google.cloud.spanner_v1.session import Session +from google.cloud.spanner_v1.database_sessions_manager import ( + DatabaseSessionsManager, + TransactionType, +) from google.cloud.spanner_v1.snapshot import _restart_on_unavailable from google.cloud.spanner_v1.snapshot import Snapshot from google.cloud.spanner_v1.streamed import StreamedResultSet -from google.cloud.spanner_v1.proto.transaction_pb2 import ( - TransactionSelector, - TransactionOptions, +from google.cloud.spanner_v1.services.spanner.transports.grpc import ( + SpannerGrpcTransport, ) -from google.cloud._helpers import _pb_timestamp_to_datetime - -# pylint: enable=ungrouped-imports +from google.cloud.spanner_v1.table import Table +from google.cloud.spanner_v1._opentelemetry_tracing import ( + add_span_event, + get_current_span, + trace_call, +) +from google.cloud.spanner_v1.metrics.metrics_capture import MetricsCapture SPANNER_DATA_SCOPE = "https://www.googleapis.com/auth/spanner.data" @@ -67,17 +92,12 @@ _DATABASE_METADATA_FILTER = "name:{0}/operations/" -_RESOURCE_ROUTING_PERMISSIONS_WARNING = ( - "The client library attempted to connect to an endpoint closer to your Cloud Spanner data " - "but was unable to do so. The client library will fall back and route requests to the endpoint " - "given in the client options, which may result in increased latency. " - "We recommend including the scope https://www.googleapis.com/auth/spanner.admin so that the " - "client library can get an instance-specific endpoint and efficiently route requests." -) - +_LIST_TABLES_QUERY = """SELECT TABLE_NAME +FROM INFORMATION_SCHEMA.TABLES +{} +""" -class ResourceRoutingPermissionsWarning(Warning): - pass +DEFAULT_RETRY_BACKOFF = Retry(initial=0.02, maximum=32, multiplier=1.3) class Database(object): @@ -105,11 +125,53 @@ class Database(object): :param pool: (Optional) session pool to be used by database. If not passed, the database will construct an instance of :class:`~google.cloud.spanner_v1.pool.BurstyPool`. + + :type logger: :class:`logging.Logger` + :param logger: (Optional) a custom logger that is used if `log_commit_stats` + is `True` to log commit statistics. If not passed, a logger + will be created when needed that will log the commit statistics + to stdout. + :type encryption_config: + :class:`~google.cloud.spanner_admin_database_v1.types.EncryptionConfig` + or :class:`~google.cloud.spanner_admin_database_v1.types.RestoreDatabaseEncryptionConfig` + or :class:`dict` + :param encryption_config: + (Optional) Encryption configuration for the database. + If a dict is provided, it must be of the same form as either of the protobuf + messages :class:`~google.cloud.spanner_admin_database_v1.types.EncryptionConfig` + or :class:`~google.cloud.spanner_admin_database_v1.types.RestoreDatabaseEncryptionConfig` + :type database_dialect: + :class:`~google.cloud.spanner_admin_database_v1.types.DatabaseDialect` + :param database_dialect: + (Optional) database dialect for the database + :type database_role: str or None + :param database_role: (Optional) user-assigned database_role for the session. + :type enable_drop_protection: boolean + :param enable_drop_protection: (Optional) Represents whether the database + has drop protection enabled or not. 
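A hedged constructor sketch for the expanded `Database` surface documented above; applications normally obtain databases via `instance.database(...)`, whose keyword arguments mirror this `__init__` (identifiers and DDL are placeholders):

.. code-block:: python

    from google.cloud.spanner_admin_database_v1.types import DatabaseDialect

    database = instance.database(
        "my-database",
        ddl_statements=["CREATE TABLE players (id INT64) PRIMARY KEY (id)"],
        database_dialect=DatabaseDialect.GOOGLE_STANDARD_SQL,
        database_role="analyst",
        enable_drop_protection=True,
    )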
+ :type proto_descriptors: bytes + :param proto_descriptors: (Optional) Proto descriptors used by CREATE/ALTER PROTO BUNDLE + statements in 'ddl_statements' above. """ - _spanner_api = None + _spanner_api: SpannerClient = None + + __transport_lock = threading.Lock() + __transports_to_channel_id = dict() - def __init__(self, database_id, instance, ddl_statements=(), pool=None): + def __init__( + self, + database_id, + instance, + ddl_statements=(), + pool=None, + logger=None, + encryption_config=None, + database_dialect=DatabaseDialect.DATABASE_DIALECT_UNSPECIFIED, + database_role=None, + enable_drop_protection=False, + proto_descriptors=None, + ): self.database_id = database_id self._instance = instance self._ddl_statements = _check_ddl_statements(ddl_statements) @@ -117,19 +179,39 @@ def __init__(self, database_id, instance, ddl_statements=(), pool=None): self._state = None self._create_time = None self._restore_info = None + self._version_retention_period = None + self._earliest_version_time = None + self._encryption_info = None + self._default_leader = None + self.log_commit_stats = False + self._logger = logger + self._encryption_config = encryption_config + self._database_dialect = database_dialect + self._database_role = database_role + self._route_to_leader_enabled = self._instance._client.route_to_leader_enabled + self._enable_drop_protection = enable_drop_protection + self._reconciling = False + self._directed_read_options = self._instance._client.directed_read_options + self.default_transaction_options: DefaultTransactionOptions = ( + self._instance._client.default_transaction_options + ) + self._proto_descriptors = proto_descriptors + self._channel_id = 0 # It'll be created when _spanner_api is created. if pool is None: - pool = BurstyPool() + pool = BurstyPool(database_role=database_role) self._pool = pool pool.bind(self) + self._sessions_manager = DatabaseSessionsManager(self, pool) + @classmethod def from_pb(cls, database_pb, instance, pool=None): """Creates an instance of this class from a protobuf. :type database_pb: - :class:`~google.spanner.v2.spanner_instance_admin_pb2.Instance` + :class:`~google.cloud.spanner_admin_instance_v1.types.Instance` :param database_pb: A instance protobuf object. :type instance: :class:`~google.cloud.spanner_v1.instance.Instance` @@ -190,7 +272,7 @@ def name(self): def state(self): """State of this database. - :rtype: :class:`~google.cloud.spanner_admin_database_v1.gapic.enums.Database.State` + :rtype: :class:`~google.cloud.spanner_admin_database_v1.types.Database.State` :returns: an enum describing the state of the database """ return self._state @@ -209,11 +291,55 @@ def create_time(self): def restore_info(self): """Restore info for this database. - :rtype: :class:`~google.cloud.spanner_v1.database.RestoreInfo` + :rtype: :class:`~google.cloud.spanner_v1.types.RestoreInfo` :returns: an object representing the restore info for this database """ return self._restore_info + @property + def version_retention_period(self): + """The period in which Cloud Spanner retains all versions of data + for the database. + + :rtype: str + :returns: a string representing the duration of the version retention period + """ + return self._version_retention_period + + @property + def earliest_version_time(self): + """The earliest time at which older versions of the data can be read. 
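Once `reload()` has populated them, the new point-in-time-recovery fields read straightforwardly (a sketch, assuming an existing `database` handle):

.. code-block:: python

    database.reload()
    print(database.version_retention_period)  # e.g. '1h'
    print(database.earliest_version_time)     # lower bound for stale reads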
+
+        :rtype: :class:`datetime.datetime`
+        :returns: a datetime object representing the earliest version time
+        """
+        return self._earliest_version_time
+
+    @property
+    def encryption_config(self):
+        """Encryption config for this database.
+
+        :rtype: :class:`~google.cloud.spanner_admin_database_v1.types.EncryptionConfig`
+        :returns: an object representing the encryption config for this database
+        """
+        return self._encryption_config
+
+    @property
+    def encryption_info(self):
+        """Encryption info for this database.
+
+        :rtype: a list of :class:`~google.cloud.spanner_admin_database_v1.types.EncryptionInfo`
+        :returns: a list of objects representing encryption info for this database
+        """
+        return self._encryption_info
+
+    @property
+    def default_leader(self):
+        """The read-write region which contains the database's leader replicas.
+
+        :rtype: str
+        :returns: a string representing the read-write region
+        """
+        return self._default_leader
+
     @property
     def ddl_statements(self):
         """DDL Statements used to define database schema.
@@ -226,6 +352,89 @@ def ddl_statements(self):
         """
         return self._ddl_statements
 
+    @property
+    def database_dialect(self):
+        """The SQL dialect of the database (GoogleSQL or PostgreSQL).
+
+        See
+        cloud.google.com/spanner/docs/data-definition-language
+
+        :rtype: :class:`google.cloud.spanner_admin_database_v1.types.DatabaseDialect`
+        :returns: the dialect of the database
+        """
+        if self._database_dialect == DatabaseDialect.DATABASE_DIALECT_UNSPECIFIED:
+            self.reload()
+        return self._database_dialect
+
+    @property
+    def default_schema_name(self):
+        """Default schema name for this database.
+
+        :rtype: str
+        :returns: "" for GoogleSQL and "public" for PostgreSQL
+        """
+        if self.database_dialect == DatabaseDialect.POSTGRESQL:
+            return "public"
+        return ""
+
+    @property
+    def database_role(self):
+        """User-assigned database_role for sessions created by the pool.
+
+        :rtype: str
+        :returns: a str with the name of the database role.
+        """
+        return self._database_role
+
+    @property
+    def reconciling(self):
+        """Whether the database is currently reconciling.
+
+        :rtype: boolean
+        :returns: a boolean representing whether the database is reconciling
+        """
+        return self._reconciling
+
+    @property
+    def enable_drop_protection(self):
+        """Whether the database has drop protection enabled.
+
+        :rtype: boolean
+        :returns: a boolean representing whether the database has drop
+            protection enabled
+        """
+        return self._enable_drop_protection
+
+    @enable_drop_protection.setter
+    def enable_drop_protection(self, value):
+        self._enable_drop_protection = value
+
+    @property
+    def proto_descriptors(self):
+        """Proto Descriptors for this database.
+
+        :rtype: bytes
+        :returns: bytes representing the proto descriptors for this database
+        """
+        return self._proto_descriptors
+
+    @property
+    def logger(self):
+        """Logger used by the database.
+
+        The default logger will log commit stats at the log level INFO using
+        `sys.stderr`.
+ + :rtype: :class:`logging.Logger` or `None` + :returns: the logger + """ + if self._logger is None: + self._logger = logging.getLogger(self.name) + self._logger.setLevel(logging.INFO) + + ch = logging.StreamHandler() + ch.setLevel(logging.INFO) + self._logger.addHandler(ch) + return self._logger + @property def spanner_api(self): """Helper for session-related API calls.""" @@ -233,55 +442,47 @@ def spanner_api(self): client_info = self._instance._client._client_info client_options = self._instance._client._client_options if self._instance.emulator_host is not None: - transport = spanner_grpc_transport.SpannerGrpcTransport( + transport = SpannerGrpcTransport( channel=grpc.insecure_channel(self._instance.emulator_host) ) self._spanner_api = SpannerClient( - client_info=client_info, - client_options=client_options, - transport=transport, + client_info=client_info, transport=transport ) return self._spanner_api credentials = self._instance._client.credentials if isinstance(credentials, google.auth.credentials.Scoped): credentials = credentials.with_scopes((SPANNER_DATA_SCOPE,)) - if ( - os.getenv("GOOGLE_CLOUD_SPANNER_ENABLE_RESOURCE_BASED_ROUTING") - == "true" - ): - endpoint_cache = self._instance._client._endpoint_cache - if self._instance.name in endpoint_cache: - client_options = ClientOptions( - api_endpoint=endpoint_cache[self._instance.name] - ) - else: - try: - api = self._instance._client.instance_admin_api - resp = api.get_instance( - self._instance.name, - field_mask={"paths": ["endpoint_uris"]}, - metadata=_metadata_with_prefix(self.name), - ) - endpoints = resp.endpoint_uris - if endpoints: - endpoint_cache[self._instance.name] = list(endpoints)[0] - client_options = ClientOptions( - api_endpoint=endpoint_cache[self._instance.name] - ) - # If there are no endpoints, use default endpoint. - except PermissionDenied: - warnings.warn( - _RESOURCE_ROUTING_PERMISSIONS_WARNING, - ResourceRoutingPermissionsWarning, - stacklevel=2, - ) self._spanner_api = SpannerClient( credentials=credentials, client_info=client_info, client_options=client_options, ) + + with self.__transport_lock: + transport = self._spanner_api._transport + channel_id = self.__transports_to_channel_id.get(transport, None) + if channel_id is None: + channel_id = len(self.__transports_to_channel_id) + 1 + self.__transports_to_channel_id[transport] = channel_id + self._channel_id = channel_id + return self._spanner_api + def metadata_with_request_id( + self, nth_request, nth_attempt, prior_metadata=[], span=None + ): + if span is None: + span = get_current_span() + + return _metadata_with_request_id( + self._nth_client_id, + self._channel_id, + nth_request, + nth_attempt, + prior_metadata, + span, + ) + def __eq__(self, other): if not isinstance(other, self.__class__): return NotImplemented @@ -295,7 +496,7 @@ def __ne__(self, other): def create(self): """Create this database within its instance - Inclues any configured schema assigned to :attr:`ddl_statements`. + Includes any configured schema assigned to :attr:`ddl_statements`. 
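A sketch of the commit-stats logging that the `logger` property above supports; `log_commit_stats` is the opt-in switch, and the `logger` argument is optional since a default stderr logger is built lazily:

.. code-block:: python

    import logging

    logger = logging.getLogger("spanner.commit_stats")
    logger.setLevel(logging.INFO)

    database = instance.database("my-database", logger=logger)
    database.log_commit_stats = True  # commit responses now carry stats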
See https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.database.v1#google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase @@ -309,13 +510,24 @@ def create(self): metadata = _metadata_with_prefix(self.name) db_name = self.database_id if "-" in db_name: - db_name = "`%s`" % (db_name,) - - future = api.create_database( + if self._database_dialect == DatabaseDialect.POSTGRESQL: + db_name = f'"{db_name}"' + else: + db_name = f"`{db_name}`" + if type(self._encryption_config) is dict: + self._encryption_config = EncryptionConfig(**self._encryption_config) + + request = CreateDatabaseRequest( parent=self._instance.name, create_statement="CREATE DATABASE %s" % (db_name,), extra_statements=list(self._ddl_statements), - metadata=metadata, + encryption_config=self._encryption_config, + database_dialect=self._database_dialect, + proto_descriptors=self._proto_descriptors, + ) + future = api.create_database( + request=request, + metadata=self.metadata_with_request_id(self._next_nth_request, 1, metadata), ) return future @@ -332,7 +544,12 @@ def exists(self): metadata = _metadata_with_prefix(self.name) try: - api.get_database_ddl(self.name, metadata=metadata) + api.get_database_ddl( + database=self.name, + metadata=self.metadata_with_request_id( + self._next_nth_request, 1, metadata + ), + ) except NotFound: return False return True @@ -349,25 +566,44 @@ def reload(self): """ api = self._instance._client.database_admin_api metadata = _metadata_with_prefix(self.name) - response = api.get_database_ddl(self.name, metadata=metadata) + response = api.get_database_ddl( + database=self.name, + metadata=self.metadata_with_request_id(self._next_nth_request, 1, metadata), + ) self._ddl_statements = tuple(response.statements) - response = api.get_database(self.name, metadata=metadata) - self._state = enums.Database.State(response.state) - self._create_time = _pb_timestamp_to_datetime(response.create_time) + self._proto_descriptors = response.proto_descriptors + response = api.get_database( + name=self.name, + metadata=self.metadata_with_request_id(self._next_nth_request, 1, metadata), + ) + self._state = DatabasePB.State(response.state) + self._create_time = response.create_time self._restore_info = response.restore_info - - def update_ddl(self, ddl_statements, operation_id=""): + self._version_retention_period = response.version_retention_period + self._earliest_version_time = response.earliest_version_time + self._encryption_config = response.encryption_config + self._encryption_info = response.encryption_info + self._default_leader = response.default_leader + # Only update if the data is specific to avoid losing specificity. + if response.database_dialect != DatabaseDialect.DATABASE_DIALECT_UNSPECIFIED: + self._database_dialect = response.database_dialect + self._enable_drop_protection = response.enable_drop_protection + self._reconciling = response.reconciling + + def update_ddl(self, ddl_statements, operation_id="", proto_descriptors=None): """Update DDL for this database. Apply any configured schema from :attr:`ddl_statements`. 
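A usage sketch for `update_ddl` with the new `proto_descriptors` parameter left at its default (the statement and operation ID are illustrative):

.. code-block:: python

    operation = database.update_ddl(
        ["ALTER TABLE players ADD COLUMN score INT64"],
        operation_id="add-score-column",
    )
    operation.result(timeout=300)  # long-running operation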
See - https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.database.v1#google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabase + https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.database.v1#google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabaseDdl :type ddl_statements: Sequence[str] :param ddl_statements: a list of DDL statements to use on this database :type operation_id: str :param operation_id: (optional) a string ID for the long-running operation + :type proto_descriptors: bytes + :param proto_descriptors: (optional) Proto descriptors used by CREATE/ALTER PROTO BUNDLE statements :rtype: :class:`google.api_core.operation.Operation` :returns: an operation instance @@ -377,9 +613,59 @@ def update_ddl(self, ddl_statements, operation_id=""): api = client.database_admin_api metadata = _metadata_with_prefix(self.name) + request = UpdateDatabaseDdlRequest( + database=self.name, + statements=ddl_statements, + operation_id=operation_id, + proto_descriptors=proto_descriptors, + ) + future = api.update_database_ddl( - self.name, ddl_statements, operation_id=operation_id, metadata=metadata + request=request, + metadata=self.metadata_with_request_id(self._next_nth_request, 1, metadata), + ) + return future + + def update(self, fields): + """Update this database. + + See + https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.database.v1#google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabase + + .. note:: + + Updates the specified fields of a Cloud Spanner database. Currently, + only the `enable_drop_protection` field supports updates. To change + this value before updating, set it via + + .. code:: python + + database.enable_drop_protection = True + + before calling :meth:`update`. + + :type fields: Sequence[str] + :param fields: a list of fields to update + + :rtype: :class:`google.api_core.operation.Operation` + :returns: an operation instance + :raises NotFound: if the database does not exist + """ + api = self._instance._client.database_admin_api + database_pb = DatabasePB( + name=self.name, enable_drop_protection=self._enable_drop_protection ) + + # Only support updating drop protection for now. + field_mask = FieldMask(paths=fields) + metadata = _metadata_with_prefix(self.name) + + future = api.update_database( + database=database_pb, + update_mask=field_mask, + metadata=self.metadata_with_request_id(self._next_nth_request, 1, metadata), + ) + return future def drop(self): @@ -390,10 +676,19 @@ def drop(self): """ api = self._instance._client.database_admin_api metadata = _metadata_with_prefix(self.name) - api.drop_database(self.name, metadata=metadata) + api.drop_database( + database=self.name, + metadata=self.metadata_with_request_id(self._next_nth_request, 1, metadata), + ) def execute_partitioned_dml( - self, dml, params=None, param_types=None, query_options=None + self, + dml, + params=None, + param_types=None, + query_options=None, + request_options=None, + exclude_txn_from_change_streams=False, ): """Execute a partitionable DML statement. @@ -410,70 +705,157 @@ def execute_partitioned_dml( required if parameters are passed. :type query_options: - :class:`~google.cloud.spanner_v1.proto.ExecuteSqlRequest.QueryOptions` + :class:`~google.cloud.spanner_v1.types.ExecuteSqlRequest.QueryOptions` or :class:`dict` :param query_options: (Optional) Query optimizer configuration to use for the given query. 
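The `update` docstring above spells out the only supported field; a matching sketch:

.. code-block:: python

    database.enable_drop_protection = True
    operation = database.update(["enable_drop_protection"])
    operation.result(timeout=120)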
If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.spanner_v1.types.QueryOptions` + :type request_options: + :class:`google.cloud.spanner_v1.types.RequestOptions` + :param request_options: + (Optional) Common options for this request. + If a dict is provided, it must be of the same form as the protobuf + message :class:`~google.cloud.spanner_v1.types.RequestOptions`. + Please note, the `transactionTag` setting will be ignored as it is + not supported for partitioned DML. + + :type exclude_txn_from_change_streams: bool + :param exclude_txn_from_change_streams: + (Optional) If true, instructs the transaction to be excluded from being recorded in change streams + with the DDL option `allow_txn_exclusion=true`. This does not exclude the transaction from + being recorded in the change streams with the DDL option `allow_txn_exclusion` being false or + unset. + :rtype: int :returns: Count of rows affected by the DML statement. """ query_options = _merge_query_options( self._instance._client._query_options, query_options ) + if request_options is None: + request_options = RequestOptions() + elif type(request_options) is dict: + request_options = RequestOptions(request_options) + request_options.transaction_tag = None + if params is not None: - if param_types is None: - raise ValueError("Specify 'param_types' when passing 'params'.") - params_pb = Struct( - fields={key: _make_value_pb(value) for key, value in params.items()} - ) + from google.cloud.spanner_v1.transaction import Transaction + + params_pb = Transaction._make_params_pb(params, param_types) else: - params_pb = None + params_pb = {} api = self.spanner_api txn_options = TransactionOptions( - partitioned_dml=TransactionOptions.PartitionedDml() + partitioned_dml=TransactionOptions.PartitionedDml(), + exclude_txn_from_change_streams=exclude_txn_from_change_streams, ) metadata = _metadata_with_prefix(self.name) + if self._route_to_leader_enabled: + metadata.append( + _metadata_with_leader_aware_routing(self._route_to_leader_enabled) + ) + + def execute_pdml(): + with trace_call( + "CloudSpanner.Database.execute_partitioned_pdml", + observability_options=self.observability_options, + ) as span, MetricsCapture(): + transaction_type = TransactionType.PARTITIONED + session = self._sessions_manager.get_session(transaction_type) + + try: + add_span_event(span, "Starting BeginTransaction") + txn = api.begin_transaction( + session=session.name, + options=txn_options, + metadata=self.metadata_with_request_id( + self._next_nth_request, + 1, + metadata, + span, + ), + ) - with SessionCheckout(self._pool) as session: + txn_selector = TransactionSelector(id=txn.id) - txn = api.begin_transaction(session.name, txn_options, metadata=metadata) + request = ExecuteSqlRequest( + session=session.name, + sql=dml, + params=params_pb, + param_types=param_types, + query_options=query_options, + request_options=request_options, + ) - txn_selector = TransactionSelector(id=txn.id) + method = functools.partial( + api.execute_streaming_sql, + metadata=metadata, + ) - restart = functools.partial( - api.execute_streaming_sql, - session.name, - dml, - transaction=txn_selector, - params=params_pb, - param_types=param_types, - query_options=query_options, - metadata=metadata, - ) + iterator = _restart_on_unavailable( + method=method, + request=request, + trace_name="CloudSpanner.ExecuteStreamingSql", + session=session, + metadata=metadata, + transaction_selector=txn_selector, + observability_options=self.observability_options, + 
request_id_manager=self, + ) - iterator = _restart_on_unavailable(restart) + result_set = StreamedResultSet(iterator) + list(result_set) # consume all partials - result_set = StreamedResultSet(iterator) - list(result_set) # consume all partials + return result_set.stats.row_count_lower_bound + finally: + self._sessions_manager.put_session(session) - return result_set.stats.row_count_lower_bound + return _retry_on_aborted(execute_pdml, DEFAULT_RETRY_BACKOFF)() - def session(self, labels=None): + @property + def _next_nth_request(self): + if self._instance and self._instance._client: + return self._instance._client._next_nth_request + return 1 + + @property + def _nth_client_id(self): + if self._instance and self._instance._client: + return self._instance._client._nth_client_id + return 0 + + def session(self, labels=None, database_role=None): """Factory to create a session for this database. + Deprecated. Sessions should be checked out indirectly using context + managers or :meth:`~google.cloud.spanner_v1.database.Database.run_in_transaction`, + rather than built directly from the database. + :type labels: dict (str -> str) or None :param labels: (Optional) user-assigned labels for the session. + :type database_role: str + :param database_role: (Optional) user-assigned database_role for the session. + :rtype: :class:`~google.cloud.spanner_v1.session.Session` :returns: a session bound to this database. """ - return Session(self, labels=labels) + # If role is specified in param, then that role is used + # instead. + role = database_role or self._database_role + is_multiplexed = False + if self.sessions_manager._use_multiplexed( + transaction_type=TransactionType.READ_ONLY + ): + is_multiplexed = True + return Session( + self, labels=labels, database_role=role, is_multiplexed=is_multiplexed + ) def snapshot(self, **kw): """Return an object which wraps a snapshot. @@ -494,18 +876,82 @@ def snapshot(self, **kw): """ return SnapshotCheckout(self, **kw) - def batch(self): + def batch( + self, + request_options=None, + max_commit_delay=None, + exclude_txn_from_change_streams=False, + isolation_level=TransactionOptions.IsolationLevel.ISOLATION_LEVEL_UNSPECIFIED, + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.READ_LOCK_MODE_UNSPECIFIED, + **kw, + ): """Return an object which wraps a batch. The wrapper *must* be used as a context manager, with the batch as the value returned by the wrapper. + :type request_options: + :class:`google.cloud.spanner_v1.types.RequestOptions` + :param request_options: + (Optional) Common options for the commit request. + If a dict is provided, it must be of the same form as the protobuf + message :class:`~google.cloud.spanner_v1.types.RequestOptions`. + + :type max_commit_delay: :class:`datetime.timedelta` + :param max_commit_delay: + (Optional) The amount of latency this request is willing to incur + in order to improve throughput. Value must be between 0ms and + 500ms. + + :type exclude_txn_from_change_streams: bool + :param exclude_txn_from_change_streams: + (Optional) If true, instructs the transaction to be excluded from being recorded in change streams + with the DDL option `allow_txn_exclusion=true`. This does not exclude the transaction from + being recorded in the change streams with the DDL option `allow_txn_exclusion` being false or + unset. + + :type isolation_level: + :class:`google.cloud.spanner_v1.types.TransactionOptions.IsolationLevel` + :param isolation_level: + (Optional) Sets the isolation level for this transaction. 
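A hedged sketch of the partitioned-DML entry point completed above (table, column, and parameter names are hypothetical; the return value is a lower bound on rows modified):

.. code-block:: python

    from google.cloud.spanner_v1 import param_types

    row_count = database.execute_partitioned_dml(
        "UPDATE players SET active = FALSE WHERE team = @team",
        params={"team": "retired"},
        param_types={"team": param_types.STRING},
        exclude_txn_from_change_streams=True,
    )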
This overrides any default isolation level set for the client. + + :type read_lock_mode: + :class:`google.cloud.spanner_v1.types.TransactionOptions.ReadWrite.ReadLockMode` + :param read_lock_mode: + (Optional) Sets the read lock mode for this transaction. This overrides any default read lock mode set for the client. + :rtype: :class:`~google.cloud.spanner_v1.database.BatchCheckout` :returns: new wrapper """ - return BatchCheckout(self) - def batch_snapshot(self, read_timestamp=None, exact_staleness=None): + return BatchCheckout( + self, + request_options, + max_commit_delay, + exclude_txn_from_change_streams, + isolation_level, + read_lock_mode, + **kw, + ) + + def mutation_groups(self): + """Return an object which wraps a mutation_group. + + The wrapper *must* be used as a context manager, with the mutation group + as the value returned by the wrapper. + + :rtype: :class:`~google.cloud.spanner_v1.database.MutationGroupsCheckout` + :returns: new wrapper + """ + return MutationGroupsCheckout(self) + + def batch_snapshot( + self, + read_timestamp=None, + exact_staleness=None, + session_id=None, + transaction_id=None, + ): """Return an object which wraps a batch read / query. :type read_timestamp: :class:`datetime.datetime` @@ -515,11 +961,21 @@ def batch_snapshot(self, read_timestamp=None, exact_staleness=None): :param exact_staleness: Execute all reads at a timestamp that is ``exact_staleness`` old. + :type session_id: str + :param session_id: id of the session used in transaction + + :type transaction_id: str + :param transaction_id: id of the transaction + :rtype: :class:`~google.cloud.spanner_v1.database.BatchSnapshot` :returns: new wrapper """ return BatchSnapshot( - self, read_timestamp=read_timestamp, exact_staleness=exact_staleness + self, + read_timestamp=read_timestamp, + exact_staleness=exact_staleness, + session_id=session_id, + transaction_id=transaction_id, ) def run_in_transaction(self, func, *args, **kw): @@ -535,36 +991,56 @@ def run_in_transaction(self, func, *args, **kw): :type kw: dict :param kw: (Optional) keyword arguments to be passed to ``func``. - If passed, "timeout_secs" will be removed and used to + If passed, + "timeout_secs" will be removed and used to override the default retry timeout which defines maximum timestamp to continue retrying the transaction. + "max_commit_delay" will be removed and used to set the + max_commit_delay for the request. Value must be between + 0ms and 500ms. + "exclude_txn_from_change_streams" if true, instructs the transaction to be excluded + from being recorded in change streams with the DDL option `allow_txn_exclusion=true`. + This does not exclude the transaction from being recorded in the change streams with + the DDL option `allow_txn_exclusion` being false or unset. + "isolation_level" sets the isolation level for the transaction. + "read_lock_mode" sets the read lock mode for the transaction. :rtype: Any :returns: The return value of ``func``. :raises Exception: - reraises any non-ABORT execptions raised by ``func``. + reraises any non-ABORT exceptions raised by ``func``. """ - # Sanity check: Is there a transaction already running? - # If there is, then raise a red flag. Otherwise, mark that this one - # is running. - if getattr(self._local, "transaction_running", False): - raise RuntimeError("Spanner does not support nested transactions.") - self._local.transaction_running = True - - # Check out a session and run the function in a transaction; once - # done, flip the sanity check bit back. 
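And the classic retry loop, for contrast with partitioned DML: a sketch with a hypothetical schema, where `timeout_secs` caps the Aborted-retry window per the docstring above:

.. code-block:: python

    def do_update(transaction):
        transaction.update(
            "players", columns=("id", "active"), values=[(1, False)]
        )

    database.run_in_transaction(do_update, timeout_secs=30)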
- try: - with SessionCheckout(self._pool) as session: + observability_options = getattr(self, "observability_options", None) + with trace_call( + "CloudSpanner.Database.run_in_transaction", + observability_options=observability_options, + ), MetricsCapture(): + # Sanity check: Is there a transaction already running? + # If there is, then raise a red flag. Otherwise, mark that this one + # is running. + if getattr(self._local, "transaction_running", False): + raise RuntimeError("Spanner does not support nested transactions.") + + self._local.transaction_running = True + + # Check out a session and run the function in a transaction; once + # done, flip the sanity check bit back and return the session. + transaction_type = TransactionType.READ_WRITE + session = self._sessions_manager.get_session(transaction_type) + + try: return session.run_in_transaction(func, *args, **kw) - finally: - self._local.transaction_running = False + + finally: + self._local.transaction_running = False + self._sessions_manager.put_session(session) def restore(self, source): """Restore from a backup to this database. - :type backup: :class:`~google.cloud.spanner_v1.backup.Backup` - :param backup: the path of the backup being restored from. + :type source: :class:`~google.cloud.spanner_v1.backup.Backup` + :param source: the path of the source being restored from. :rtype: :class:`~google.api_core.operation.Operation` :returns: a future used to poll the status of the create request @@ -576,10 +1052,28 @@ def restore(self, source): """ if source is None: raise ValueError("Restore source not specified") + if type(self._encryption_config) is dict: + self._encryption_config = RestoreDatabaseEncryptionConfig( + **self._encryption_config + ) + if ( + self.encryption_config + and self.encryption_config.kms_key_name + and self.encryption_config.encryption_type + != RestoreDatabaseEncryptionConfig.EncryptionType.CUSTOMER_MANAGED_ENCRYPTION + ): + raise ValueError("kms_key_name only used with CUSTOMER_MANAGED_ENCRYPTION") api = self._instance._client.database_admin_api metadata = _metadata_with_prefix(self.name) + request = RestoreDatabaseRequest( + parent=self._instance.name, + database_id=self.database_id, + backup=source.name, + encryption_config=self._encryption_config or None, + ) future = api.restore_database( - self._instance.name, self.database_id, backup=source.name, metadata=metadata + request=request, + metadata=self.metadata_with_request_id(self._next_nth_request, 1, metadata), ) return future @@ -590,8 +1084,8 @@ def is_ready(self): :returns: True if the database state is READY_OPTIMIZING or READY, else False. """ return ( - self.state == enums.Database.State.READY_OPTIMIZING - or self.state == enums.Database.State.READY + self.state == DatabasePB.State.READY_OPTIMIZING + or self.state == DatabasePB.State.READY ) def is_optimized(self): @@ -600,7 +1094,7 @@ def is_optimized(self): :rtype: bool :returns: True if the database state is READY, else False. """ - return self.state == enums.Database.State.READY + return self.state == DatabasePB.State.READY def list_database_operations(self, filter_="", page_size=None): """List database operations for the database. @@ -627,6 +1121,170 @@ def list_database_operations(self, filter_="", page_size=None): filter_=database_filter, page_size=page_size ) + def list_database_roles(self, page_size=None): + """Lists Cloud Spanner database roles. + + :type page_size: int + :param page_size: + Optional. The maximum number of database roles in each page of results + from this request. 
Non-positive values are ignored. Defaults to a + sensible value set by the API. + + :rtype: Iterable + :returns: + Iterable of :class:`~google.cloud.spanner_admin_database_v1.types.spanner_database_admin.DatabaseRole` + resources within the current database. + """ + api = self._instance._client.database_admin_api + metadata = _metadata_with_prefix(self.name) + + request = ListDatabaseRolesRequest( + parent=self.name, + page_size=page_size, + ) + return api.list_database_roles( + request=request, + metadata=self.metadata_with_request_id(self._next_nth_request, 1, metadata), + ) + + def table(self, table_id): + """Factory to create a table object within this database. + + Note: This method does not create a table in Cloud Spanner, but it can + be used to check if a table exists. + + .. code-block:: python + + my_table = database.table("my_table") + if my_table.exists(): + print("Table with ID 'my_table' exists.") + else: + print("Table with ID 'my_table' does not exist.") + + :type table_id: str + :param table_id: The ID of the table. + + :rtype: :class:`~google.cloud.spanner_v1.table.Table` + :returns: a table owned by this database. + """ + return Table(table_id, self) + + def list_tables(self, schema="_default"): + """List tables within the database. + + :type schema: str + :param schema: The schema to search for tables, or None for all schemas. Use the special string "_default" to + search for tables in the default schema of the database. + + :rtype: Iterable + :returns: + Iterable of :class:`~google.cloud.spanner_v1.table.Table` + resources within the current database. + """ + if "_default" == schema: + schema = self.default_schema_name + + with self.snapshot() as snapshot: + if schema is None: + results = snapshot.execute_sql( + sql=_LIST_TABLES_QUERY.format(""), + ) + else: + if self._database_dialect == DatabaseDialect.POSTGRESQL: + where_clause = "WHERE TABLE_SCHEMA = $1" + param_name = "p1" + else: + where_clause = ( + "WHERE TABLE_SCHEMA = @schema AND SPANNER_STATE = 'COMMITTED'" + ) + param_name = "schema" + results = snapshot.execute_sql( + sql=_LIST_TABLES_QUERY.format(where_clause), + params={param_name: schema}, + param_types={param_name: Type(code=TypeCode.STRING)}, + ) + for row in results: + yield self.table(row[0]) + + def get_iam_policy(self, policy_version=None): + """Gets the access control policy for a database resource. + + :type policy_version: int + :param policy_version: + (Optional) the maximum policy version that will be + used to format the policy. Valid values are 0, 1, and 3. + + :rtype: :class:`~google.iam.v1.policy_pb2.Policy` + :returns: + returns an Identity and Access Management (IAM) policy. It is used to + specify access control policies for Cloud Platform + resources. + """ + api = self._instance._client.database_admin_api + metadata = _metadata_with_prefix(self.name) + + request = iam_policy_pb2.GetIamPolicyRequest( + resource=self.name, + options=options_pb2.GetPolicyOptions( + requested_policy_version=policy_version + ), + ) + response = api.get_iam_policy( + request=request, + metadata=self.metadata_with_request_id(self._next_nth_request, 1, metadata), + ) + return response + + def set_iam_policy(self, policy): + """Sets the access control policy on a database resource. + Replaces any existing policy. + + :type policy: :class:`~google.iam.v1.policy_pb2.Policy` + :param policy: + the complete policy to be applied to the resource.
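A hedged sketch of the read-modify-write round trip that ``get_iam_policy`` and ``set_iam_policy`` enable; the role and member below are placeholders:

.. code-block:: python

    # Sketch only: fetch the current policy, add a binding, write it back.
    policy = database.get_iam_policy(policy_version=3)
    policy.bindings.add(
        role="roles/spanner.fineGrainedAccessUser",  # placeholder role
        members=["user:alice@example.com"],  # placeholder member
    )
    database.set_iam_policy(policy)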
+ + :rtype: :class:`~google.iam.v1.policy_pb2.Policy` + :returns: + returns the new Identity and Access Management (IAM) policy. + """ + api = self._instance._client.database_admin_api + metadata = _metadata_with_prefix(self.name) + + request = iam_policy_pb2.SetIamPolicyRequest( + resource=self.name, + policy=policy, + ) + response = api.set_iam_policy( + request=request, + metadata=self.metadata_with_request_id(self._next_nth_request, 1, metadata), + ) + return response + + @property + def observability_options(self): + """ + Returns the observability options that you set when creating + the SpannerClient. + """ + if not (self._instance and self._instance._client): + return None + + opts = getattr(self._instance._client, "observability_options", None) + if not opts: + opts = dict() + + opts["db_name"] = self.name + return opts + + @property + def sessions_manager(self) -> DatabaseSessionsManager: + """Returns the database sessions manager. + + :rtype: :class:`~google.cloud.spanner_v1.database_sessions_manager.DatabaseSessionsManager` + :returns: The sessions manager for this database. + """ + return self._sessions_manager + class BatchCheckout(object): """Context manager for using a batch from a database. @@ -637,27 +1295,129 @@ class BatchCheckout(object): Caller must *not* use the batch to perform API requests outside the scope of the context manager. - :type database: :class:`~google.cloud.spanner.database.Database` + :type database: :class:`~google.cloud.spanner_v1.database.Database` :param database: database to use + + :type request_options: + :class:`google.cloud.spanner_v1.types.RequestOptions` + :param request_options: + (Optional) Common options for the commit request. + If a dict is provided, it must be of the same form as the protobuf + message :class:`~google.cloud.spanner_v1.types.RequestOptions`. + + :type max_commit_delay: :class:`datetime.timedelta` + :param max_commit_delay: + (Optional) The amount of latency this request is willing to incur + in order to improve throughput. """ - def __init__(self, database): - self._database = database - self._session = self._batch = None + def __init__( + self, + database, + request_options=None, + max_commit_delay=None, + exclude_txn_from_change_streams=False, + isolation_level=TransactionOptions.IsolationLevel.ISOLATION_LEVEL_UNSPECIFIED, + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.READ_LOCK_MODE_UNSPECIFIED, + **kw, + ): + self._database: Database = database + self._session: Optional[Session] = None + self._batch: Optional[Batch] = None + + if request_options is None: + self._request_options = RequestOptions() + elif type(request_options) is dict: + self._request_options = RequestOptions(request_options) + else: + self._request_options = request_options + self._max_commit_delay = max_commit_delay + self._exclude_txn_from_change_streams = exclude_txn_from_change_streams + self._isolation_level = isolation_level + self._read_lock_mode = read_lock_mode + self._kw = kw def __enter__(self): """Begin ``with`` block.""" - session = self._session = self._database._pool.get() - batch = self._batch = Batch(session) + + # Batch transactions are performed as blind writes, + # which are treated as read-only transactions. 
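As a hedged usage sketch for ``BatchCheckout`` (table and values are placeholders; ``database.batch()`` is the factory shown above):

.. code-block:: python

    import datetime

    # Sketch only: mutations buffered on the batch are committed once the
    # ``with`` block exits without an exception.
    with database.batch(
        max_commit_delay=datetime.timedelta(milliseconds=100)
    ) as batch:
        batch.insert(
            table="citizens",
            columns=("email", "first_name"),
            values=[("phred@example.com", "Phred")],
        )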
+ transaction_type = TransactionType.READ_ONLY + self._session = self._database.sessions_manager.get_session(transaction_type) + + add_span_event( + span=get_current_span(), + event_name="Using session", + event_attributes={"id": self._session.session_id}, + ) + + batch = self._batch = Batch(session=self._session) + if self._request_options.transaction_tag: + batch.transaction_tag = self._request_options.transaction_tag + return batch def __exit__(self, exc_type, exc_val, exc_tb): """End ``with`` block.""" try: if exc_type is None: - self._batch.commit() + self._batch.commit( + return_commit_stats=self._database.log_commit_stats, + request_options=self._request_options, + max_commit_delay=self._max_commit_delay, + exclude_txn_from_change_streams=self._exclude_txn_from_change_streams, + isolation_level=self._isolation_level, + read_lock_mode=self._read_lock_mode, + **self._kw, + ) finally: - self._database._pool.put(self._session) + if self._database.log_commit_stats and self._batch.commit_stats: + self._database.logger.info( + "CommitStats: {}".format(self._batch.commit_stats), + extra={"commit_stats": self._batch.commit_stats}, + ) + self._database.sessions_manager.put_session(self._session) + current_span = get_current_span() + add_span_event( + current_span, + "Returned session to pool", + {"id": self._session.session_id}, + ) + + +class MutationGroupsCheckout(object): + """Context manager for using mutation groups from a database. + + Inside the context manager, checks out a session from the database, + creates mutation groups from it, making the groups available. + + Caller must *not* use the object to perform API requests outside the scope + of the context manager. + + :type database: :class:`~google.cloud.spanner_v1.database.Database` + :param database: database to use + """ + + def __init__(self, database): + self._database: Database = database + self._session: Optional[Session] = None + + def __enter__(self): + """Begin ``with`` block.""" + transaction_type = TransactionType.READ_WRITE + self._session = self._database.sessions_manager.get_session(transaction_type) + + return MutationGroups(session=self._session) + + def __exit__(self, exc_type, exc_val, exc_tb): + """End ``with`` block.""" + if isinstance(exc_val, NotFound): + # If NotFound exception occurs inside the with block + # then we validate if the session still exists. + if not self._session.exists(): + self._session = self._database._pool._new_session() + self._session.create() + self._database.sessions_manager.put_session(self._session) class SnapshotCheckout(object): @@ -669,7 +1429,7 @@ class SnapshotCheckout(object): Caller must *not* use the snapshot to perform API requests outside the scope of the context manager. 
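For context, ``SnapshotCheckout`` is normally driven through ``Database.snapshot()``; a minimal sketch:

.. code-block:: python

    # Sketch only: a session is checked out for the duration of the
    # ``with`` block and returned to the sessions manager afterwards.
    with database.snapshot(multi_use=True) as snapshot:
        for row in snapshot.execute_sql("SELECT 1"):
            print(row)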
- :type database: :class:`~google.cloud.spanner.database.Database` + :type database: :class:`~google.cloud.spanner_v1.database.Database` :param database: database to use :type kw: dict @@ -679,24 +1439,32 @@ class SnapshotCheckout(object): """ def __init__(self, database, **kw): - self._database = database - self._session = None - self._kw = kw + self._database: Database = database + self._session: Optional[Session] = None + self._kw: dict = kw def __enter__(self): """Begin ``with`` block.""" - session = self._session = self._database._pool.get() - return Snapshot(session, **self._kw) + transaction_type = TransactionType.READ_ONLY + self._session = self._database.sessions_manager.get_session(transaction_type) + + return Snapshot(session=self._session, **self._kw) def __exit__(self, exc_type, exc_val, exc_tb): """End ``with`` block.""" - self._database._pool.put(self._session) + if isinstance(exc_val, NotFound): + # If NotFound exception occurs inside the with block + # then we validate if the session still exists. + if not self._session.exists(): + self._session = self._database._pool._new_session() + self._session.create() + self._database.sessions_manager.put_session(self._session) class BatchSnapshot(object): """Wrapper for generating and processing read / query batches. - :type database: :class:`~google.cloud.spanner.database.Database` + :type database: :class:`~google.cloud.spanner_v1.database.Database` :param database: database to use :type read_timestamp: :class:`datetime.datetime` @@ -707,10 +1475,22 @@ class BatchSnapshot(object): ``exact_staleness`` old. """ - def __init__(self, database, read_timestamp=None, exact_staleness=None): - self._database = database - self._session = None - self._snapshot = None + def __init__( + self, + database, + read_timestamp=None, + exact_staleness=None, + session_id=None, + transaction_id=None, + ): + self._database: Database = database + + self._session_id: Optional[str] = session_id + self._transaction_id: Optional[bytes] = transaction_id + + self._session: Optional[Session] = None + self._snapshot: Optional[Snapshot] = None + self._read_timestamp = read_timestamp self._exact_staleness = exact_staleness @@ -718,7 +1498,7 @@ def __init__(self, database, read_timestamp=None, exact_staleness=None): def from_dict(cls, database, mapping): """Reconstruct an instance from a mapping. - :type database: :class:`~google.cloud.spanner.database.Database` + :type database: :class:`~google.cloud.spanner_v1.database.Database` :param database: database to use :type mapping: mapping @@ -726,11 +1506,15 @@ def from_dict(cls, database, mapping): :rtype: :class:`BatchSnapshot` """ + instance = cls(database) - session = instance._session = database.session() - session._session_id = mapping["session_id"] + + session = instance._session = Session(database=database) + instance._session_id = session._session_id = mapping["session_id"] + snapshot = instance._snapshot = session.snapshot() - snapshot._transaction_id = mapping["transaction_id"] + instance._transaction_id = snapshot._transaction_id = mapping["transaction_id"] + return instance def to_dict(self): @@ -748,6 +1532,10 @@ def to_dict(self): "transaction_id": snapshot._transaction_id, } + @property + def observability_options(self): + return getattr(self._database, "observability_options", {}) + def _get_session(self): """Create session as needed. @@ -757,21 +1545,51 @@ def _get_session(self): all partitions have been processed. 
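The ``to_dict`` / ``from_dict`` pair above lets one process hand a batch snapshot to another; a hedged sketch, assuming partitioning has already created the underlying session and transaction:

.. code-block:: python

    # Sketch only: serialize in one process, rehydrate in a worker.
    snapshot = database.batch_snapshot()
    batches = list(snapshot.generate_query_batches(sql="SELECT 1"))
    state = snapshot.to_dict()  # {"session_id": ..., "transaction_id": ...}

    # ... ship ``state`` and ``batches`` to a worker process ...

    worker_snapshot = BatchSnapshot.from_dict(database, state)
    for batch in batches:
        for row in worker_snapshot.process_query_batch(batch):
            print(row)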
""" if self._session is None: - session = self._session = self._database.session() - session.create() + database = self._database + + # If the session ID is not specified, check out a new session for + # partitioned transactions from the database session manager; otherwise, + # the session has already been checked out, so just create a session to + # represent it. + if self._session_id is None: + transaction_type = TransactionType.PARTITIONED + session = database.sessions_manager.get_session(transaction_type) + self._session_id = session.session_id + + else: + session = Session(database=database) + session._session_id = self._session_id + + self._session = session + return self._session def _get_snapshot(self): """Create snapshot if needed.""" + if self._snapshot is None: self._snapshot = self._get_session().snapshot( read_timestamp=self._read_timestamp, exact_staleness=self._exact_staleness, multi_use=True, + transaction_id=self._transaction_id, ) - self._snapshot.begin() + + if self._transaction_id is None: + self._snapshot.begin() + return self._snapshot + def get_batch_transaction_id(self): + snapshot = self._snapshot + if snapshot is None: + raise ValueError("Read-only transaction not begun") + return BatchTransactionId( + snapshot._transaction_id, + snapshot._session.session_id, + snapshot._read_timestamp, + ) + def read(self, *args, **kw): """Convenience method: perform read operation via snapshot. @@ -794,6 +1612,11 @@ def generate_read_batches( index="", partition_size_bytes=None, max_partitions=None, + data_boost_enabled=False, + directed_read_options=None, + *, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, ): """Start a partitioned batch read operation. @@ -825,30 +1648,62 @@ def generate_read_batches( service uses this as a hint, the actual number of partitions may differ. + :type data_boost_enabled: + :param data_boost_enabled: + (Optional) If this is for a partitioned read and this field is + set ``true``, the request will be executed via offline access. + + :type directed_read_options: :class:`~google.cloud.spanner_v1.DirectedReadOptions` + or :class:`dict` + :param directed_read_options: (Optional) Request level option used to set the directed_read_options + for ReadRequests that indicates which replicas + or regions should be used for non-transactional reads. + + :type retry: :class:`~google.api_core.retry.Retry` + :param retry: (Optional) The retry settings for this request. + + :type timeout: float + :param timeout: (Optional) The timeout for this request. + :rtype: iterable of dict :returns: - mappings of information used peform actual partitioned reads via + mappings of information used perform actual partitioned reads via :meth:`process_read_batch`. 
""" - partitions = self._get_snapshot().partition_read( - table=table, - columns=columns, - keyset=keyset, - index=index, - partition_size_bytes=partition_size_bytes, - max_partitions=max_partitions, - ) - - read_info = { - "table": table, - "columns": columns, - "keyset": keyset._to_dict(), - "index": index, - } - for partition in partitions: - yield {"partition": partition, "read": read_info.copy()} + with trace_call( + f"CloudSpanner.{type(self).__name__}.generate_read_batches", + extra_attributes=dict(table=table, columns=columns), + observability_options=self.observability_options, + ), MetricsCapture(): + partitions = self._get_snapshot().partition_read( + table=table, + columns=columns, + keyset=keyset, + index=index, + partition_size_bytes=partition_size_bytes, + max_partitions=max_partitions, + retry=retry, + timeout=timeout, + ) - def process_read_batch(self, batch): + read_info = { + "table": table, + "columns": columns, + "keyset": keyset._to_dict(), + "index": index, + "data_boost_enabled": data_boost_enabled, + "directed_read_options": directed_read_options, + } + for partition in partitions: + yield {"partition": partition, "read": read_info.copy()} + + def process_read_batch( + self, + batch, + *, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + ): """Process a single, partitioned read. :type batch: mapping @@ -856,13 +1711,27 @@ def process_read_batch(self, batch): one of the mappings returned from an earlier call to :meth:`generate_read_batches`. + :type retry: :class:`~google.api_core.retry.Retry` + :param retry: (Optional) The retry settings for this request. + + :type timeout: float + :param timeout: (Optional) The timeout for this request. + + :rtype: :class:`~google.cloud.spanner_v1.streamed.StreamedResultSet` :returns: a result set instance which can be used to consume rows. """ - kwargs = copy.deepcopy(batch["read"]) - keyset_dict = kwargs.pop("keyset") - kwargs["keyset"] = KeySet._from_dict(keyset_dict) - return self._get_snapshot().read(partition=batch["partition"], **kwargs) + observability_options = self.observability_options + with trace_call( + f"CloudSpanner.{type(self).__name__}.process_read_batch", + observability_options=observability_options, + ), MetricsCapture(): + kwargs = copy.deepcopy(batch["read"]) + keyset_dict = kwargs.pop("keyset") + kwargs["keyset"] = KeySet._from_dict(keyset_dict) + return self._get_snapshot().read( + partition=batch["partition"], **kwargs, retry=retry, timeout=timeout + ) def generate_query_batches( self, @@ -872,12 +1741,17 @@ def generate_query_batches( partition_size_bytes=None, max_partitions=None, query_options=None, + data_boost_enabled=False, + directed_read_options=None, + *, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, ): """Start a partitioned query operation. Uses the ``PartitionQuery`` API request to start a partitioned query operation. Returns a list of batch information needed to - peform the actual queries. + perform the actual queries. :type sql: str :param sql: SQL query statement @@ -896,11 +1770,6 @@ def generate_query_batches( (Optional) desired size for each partition generated. The service uses this as a hint, the actual partition size may differ. - :type partition_size_bytes: int - :param partition_size_bytes: - (Optional) desired size for each partition generated. The service - uses this as a hint, the actual partition size may differ. - :type max_partitions: int :param max_partitions: (Optional) desired maximum number of partitions generated. 
The @@ -908,42 +1777,76 @@ differ. :type query_options: - :class:`~google.cloud.spanner_v1.proto.ExecuteSqlRequest.QueryOptions` + :class:`~google.cloud.spanner_v1.types.ExecuteSqlRequest.QueryOptions` or :class:`dict` :param query_options: (Optional) Query optimizer configuration to use for the given query. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.spanner_v1.types.QueryOptions` + :type data_boost_enabled: bool + :param data_boost_enabled: + (Optional) If this is for a partitioned query and this field is + set ``true``, the request will be executed via offline access. + + :type directed_read_options: :class:`~google.cloud.spanner_v1.DirectedReadOptions` + or :class:`dict` + :param directed_read_options: (Optional) Request level option used to set the directed_read_options + for ExecuteSqlRequests that indicates which replicas + or regions should be used for non-transactional queries. + + :type retry: :class:`~google.api_core.retry.Retry` + :param retry: (Optional) The retry settings for this request. + + :type timeout: float + :param timeout: (Optional) The timeout for this request. + :rtype: iterable of dict :returns: - mappings of information used peform actual partitioned reads via + mappings of information used to perform actual partitioned reads via :meth:`process_read_batch`. """ - partitions = self._get_snapshot().partition_query( - sql=sql, - params=params, - param_types=param_types, - partition_size_bytes=partition_size_bytes, - max_partitions=max_partitions, - ) - - query_info = {"sql": sql} - if params: - query_info["params"] = params - query_info["param_types"] = param_types + with trace_call( + f"CloudSpanner.{type(self).__name__}.generate_query_batches", + extra_attributes=dict(sql=sql), + observability_options=self.observability_options, + ), MetricsCapture(): + partitions = self._get_snapshot().partition_query( + sql=sql, + params=params, + param_types=param_types, + partition_size_bytes=partition_size_bytes, + max_partitions=max_partitions, + retry=retry, + timeout=timeout, + ) - # Query-level options have higher precedence than client-level and - # environment-level options - default_query_options = self._database._instance._client._query_options - query_info["query_options"] = _merge_query_options( - default_query_options, query_options - ) + query_info = { + "sql": sql, + "data_boost_enabled": data_boost_enabled, + "directed_read_options": directed_read_options, + } + if params: + query_info["params"] = params + query_info["param_types"] = param_types + + # Query-level options have higher precedence than client-level and + # environment-level options + default_query_options = self._database._instance._client._query_options + query_info["query_options"] = _merge_query_options( + default_query_options, query_options + ) - for partition in partitions: - yield {"partition": partition, "query": query_info} + for partition in partitions: + yield {"partition": partition, "query": query_info} - def process_query_batch(self, batch): + def process_query_batch( + self, + batch, + *, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + ): """Process a single, partitioned query. :type batch: mapping :param batch: one of the mappings returned from an earlier call to :meth:`generate_query_batches`. + :type retry: :class:`~google.api_core.retry.Retry` + :param retry: (Optional) The retry settings for this request.
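A hedged sketch of the partitioned-query counterpart (SQL, parameters, and table are placeholders):

.. code-block:: python

    from google.cloud.spanner_v1 import param_types

    # Sketch only: each batch mapping carries the SQL, parameters, and
    # merged query options alongside its partition token.
    snapshot = database.batch_snapshot()
    for batch in snapshot.generate_query_batches(
        sql="SELECT id, name FROM my_table WHERE active = @active",
        params={"active": True},
        param_types={"active": param_types.BOOL},
    ):
        for row in snapshot.process_query_batch(batch):
            print(row)
    snapshot.close()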
+ + :type timeout: float + :param timeout: (Optional) The timeout for this request. + :rtype: :class:`~google.cloud.spanner_v1.streamed.StreamedResultSet` :returns: a result set instance which can be used to consume rows. """ - return self._get_snapshot().execute_sql( - partition=batch["partition"], **batch["query"] - ) + with trace_call( + f"CloudSpanner.{type(self).__name__}.process_query_batch", + observability_options=self.observability_options, + ), MetricsCapture(): + return self._get_snapshot().execute_sql( + partition=batch["partition"], + **batch["query"], + retry=retry, + timeout=timeout, + ) + + def run_partitioned_query( + self, + sql, + params=None, + param_types=None, + partition_size_bytes=None, + max_partitions=None, + query_options=None, + data_boost_enabled=False, + ): + """Start a partitioned query operation to get a list of partitions and + then execute each partition on a separate thread + + :type sql: str + :param sql: SQL query statement + + :type params: dict, {str -> column value} + :param params: values for parameter replacement. Keys must match + the names used in ``sql``. + + :type param_types: dict[str -> Union[dict, .types.Type]] + :param param_types: + (Optional) maps explicit types for one or more param values; + required if parameters are passed. + + :type partition_size_bytes: int + :param partition_size_bytes: + (Optional) desired size for each partition generated. The service + uses this as a hint, the actual partition size may differ. + + :type max_partitions: int + :param max_partitions: + (Optional) desired maximum number of partitions generated. The + service uses this as a hint, the actual number of partitions may + differ. + + :type query_options: + :class:`~google.cloud.spanner_v1.types.ExecuteSqlRequest.QueryOptions` + or :class:`dict` + :param query_options: + (Optional) Query optimizer configuration to use for the given query. + If a dict is provided, it must be of the same form as the protobuf + message :class:`~google.cloud.spanner_v1.types.QueryOptions` + + :type data_boost_enabled: bool + :param data_boost_enabled: + (Optional) If this is for a partitioned query and this field is + set ``true``, the request will be executed using data boost. + Please see https://cloud.google.com/spanner/docs/databoost/databoost-overview + + :rtype: :class:`~google.cloud.spanner_v1.merged_result_set.MergedResultSet` + :returns: a result set instance which can be used to consume rows. + """ + with trace_call( + f"CloudSpanner.{type(self).__name__}.run_partitioned_query", + extra_attributes=dict(sql=sql), + observability_options=self.observability_options, + ), MetricsCapture(): + partitions = list( + self.generate_query_batches( + sql, + params, + param_types, + partition_size_bytes, + max_partitions, + query_options, + data_boost_enabled, + ) + ) + return MergedResultSet(self, partitions, 0) def process(self, batch): """Process a single, partitioned query or read. @@ -987,7 +1974,8 @@ def close(self): from all the partitions. """ if self._session is not None: - self._session.delete() + if not self._session.is_multiplexed: + self._session.delete() def _check_ddl_statements(value): @@ -997,7 +1985,7 @@ https://cloud.google.com/spanner/docs/data-definition-language :type value: list of string - :param value: DDL statements, excluding the 'CREATE DATABSE' statement + :param value: DDL statements, excluding the 'CREATE DATABASE' statement :rtype: tuple :returns: tuple of validated DDL statement strings.
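Tying these together, ``run_partitioned_query`` above wraps the generate/process pair in a single call; a minimal sketch (the SQL is a placeholder):

.. code-block:: python

    # Sketch only: partitions are executed on background threads and
    # merged into a single result set.
    snapshot = database.batch_snapshot()
    results = snapshot.run_partitioned_query(
        sql="SELECT id FROM my_table",
        data_boost_enabled=True,
    )
    for row in results:
        print(row)
    snapshot.close()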
@@ -1005,7 +1993,7 @@ def _check_ddl_statements(value): if elements in ``value`` are not strings, or if ``value`` contains a ``CREATE DATABASE`` statement. """ - if not all(isinstance(line, six.string_types) for line in value): + if not all(isinstance(line, str) for line in value): raise ValueError("Pass a list of strings") if any("create database" in line.lower() for line in value): @@ -1014,11 +2002,17 @@ def _check_ddl_statements(value): return tuple(value) -class RestoreInfo(object): - def __init__(self, source_type, backup_info): - self.source_type = enums.RestoreSourceType(source_type) - self.backup_info = BackupInfo.from_pb(backup_info) +def _retry_on_aborted(func, retry_config): + """Helper for :meth:`Database.execute_partitioned_dml`. - @classmethod - def from_pb(cls, pb): - return cls(pb.source_type, pb.backup_info) + Wrap function in a Retry that will retry on Aborted exceptions + with the retry config specified. + + :type func: callable + :param func: the function to be retried on Aborted exceptions + + :type retry_config: Retry + :param retry_config: retry object with the settings to be used + """ + retry = retry_config.with_predicate(if_exception_type(Aborted)) + return retry(func) diff --git a/google/cloud/spanner_v1/database_sessions_manager.py b/google/cloud/spanner_v1/database_sessions_manager.py new file mode 100644 index 0000000000..aba32f21bd --- /dev/null +++ b/google/cloud/spanner_v1/database_sessions_manager.py @@ -0,0 +1,277 @@ +# Copyright 2025 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +from enum import Enum +from os import getenv +from datetime import timedelta +from threading import Event, Lock, Thread +from time import sleep, time +from typing import Optional +from weakref import ref + +from google.cloud.spanner_v1.session import Session +from google.cloud.spanner_v1._opentelemetry_tracing import ( + get_current_span, + add_span_event, +) + + +class TransactionType(Enum): + """Transaction types for session options.""" + + READ_ONLY = "read-only" + PARTITIONED = "partitioned" + READ_WRITE = "read/write" + + +class DatabaseSessionsManager(object): + """Manages sessions for a Cloud Spanner database. + + Sessions can be checked out from the database session manager for a specific + transaction type using :meth:`get_session`, and returned to the session manager + using :meth:`put_session`. + + The sessions returned by the session manager depend on the configured environment variables + and the provided session pool (see :class:`~google.cloud.spanner_v1.pool.AbstractSessionPool`). + + :type database: :class:`~google.cloud.spanner_v1.database.Database` + :param database: The database to manage sessions for. + + :type pool: :class:`~google.cloud.spanner_v1.pool.AbstractSessionPool` + :param pool: The pool to get non-multiplexed sessions from. 
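As a hedged illustration of the environment-variable switches the manager consults (each defaults to enabled):

.. code-block:: python

    import os

    # Sketch only: the variables default to "true"; any value other than
    # "false" (case-insensitive) leaves multiplexed sessions enabled.
    os.environ["GOOGLE_CLOUD_SPANNER_MULTIPLEXED_SESSIONS_FOR_RW"] = "false"

    # Read/write transactions now check sessions out of the pool, while
    # read-only and partitioned work may still use a multiplexed session.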
+ """ + + # Environment variables for multiplexed sessions + _ENV_VAR_MULTIPLEXED = "GOOGLE_CLOUD_SPANNER_MULTIPLEXED_SESSIONS" + _ENV_VAR_MULTIPLEXED_PARTITIONED = ( + "GOOGLE_CLOUD_SPANNER_MULTIPLEXED_SESSIONS_PARTITIONED_OPS" + ) + _ENV_VAR_MULTIPLEXED_READ_WRITE = "GOOGLE_CLOUD_SPANNER_MULTIPLEXED_SESSIONS_FOR_RW" + + # Intervals for the maintenance thread to check and refresh the multiplexed session. + _MAINTENANCE_THREAD_POLLING_INTERVAL = timedelta(minutes=10) + _MAINTENANCE_THREAD_REFRESH_INTERVAL = timedelta(days=7) + + def __init__(self, database, pool): + self._database = database + self._pool = pool + + # Declare multiplexed session attributes. When a multiplexed session for the + # database session manager is created, a maintenance thread is initialized to + # periodically delete and recreate the multiplexed session so that it remains + # valid. Because of this concurrency, we need to use a lock whenever we access + # the multiplexed session to avoid any race conditions. + self._multiplexed_session: Optional[Session] = None + self._multiplexed_session_thread: Optional[Thread] = None + self._multiplexed_session_lock: Lock = Lock() + + # Event to terminate the maintenance thread. + # Only used for testing purposes. + self._multiplexed_session_terminate_event: Event = Event() + + def get_session(self, transaction_type: TransactionType) -> Session: + """Returns a session for the given transaction type from the database session manager. + + :rtype: :class:`~google.cloud.spanner_v1.session.Session` + :returns: a session for the given transaction type. + """ + + session = ( + self._get_multiplexed_session() + if self._use_multiplexed(transaction_type) + else self._pool.get() + ) + + add_span_event( + get_current_span(), + "Using session", + {"id": session.session_id, "multiplexed": session.is_multiplexed}, + ) + + return session + + def put_session(self, session: Session) -> None: + """Returns the session to the database session manager. + + :type session: :class:`~google.cloud.spanner_v1.session.Session` + :param session: The session to return to the database session manager. + """ + + add_span_event( + get_current_span(), + "Returning session", + {"id": session.session_id, "multiplexed": session.is_multiplexed}, + ) + + # No action is needed for multiplexed sessions: the session + # pool is only used for managing non-multiplexed sessions, + # since they can only process one transaction at a time. + if not session.is_multiplexed: + self._pool.put(session) + + def _get_multiplexed_session(self) -> Session: + """Returns a multiplexed session from the database session manager. + + If the multiplexed session is not defined, creates a new multiplexed + session and starts a maintenance thread to periodically delete and + recreate it so that it remains valid. Otherwise, simply returns the + current multiplexed session. + + :rtype: :class:`~google.cloud.spanner_v1.session.Session` + :returns: a multiplexed session. + """ + + with self._multiplexed_session_lock: + if self._multiplexed_session is None: + self._multiplexed_session = self._build_multiplexed_session() + + self._multiplexed_session_thread = self._build_maintenance_thread() + self._multiplexed_session_thread.start() + + return self._multiplexed_session + + def _build_multiplexed_session(self) -> Session: + """Builds and returns a new multiplexed session for the database session manager. + + :rtype: :class:`~google.cloud.spanner_v1.session.Session` + :returns: a new multiplexed session. 
+ """ + + session = Session( + database=self._database, + database_role=self._database.database_role, + is_multiplexed=True, + ) + session.create() + + self._database.logger.info("Created multiplexed session.") + + return session + + def _build_maintenance_thread(self) -> Thread: + """Builds and returns a multiplexed session maintenance thread for + the database session manager. This thread will periodically delete + and recreate the multiplexed session to ensure that it is always valid. + + :rtype: :class:`threading.Thread` + :returns: a multiplexed session maintenance thread. + """ + + # Use a weak reference to the database session manager to avoid + # creating a circular reference that would prevent the database + # session manager from being garbage collected. + session_manager_ref = ref(self) + + return Thread( + target=self._maintain_multiplexed_session, + name=f"maintenance-multiplexed-session-{self._multiplexed_session.name}", + args=[session_manager_ref], + daemon=True, + ) + + @staticmethod + def _maintain_multiplexed_session(session_manager_ref) -> None: + """Maintains the multiplexed session for the database session manager. + + This method will delete and recreate the referenced database session manager's + multiplexed session to ensure that it is always valid. The method will run until + the database session manager is deleted or the multiplexed session is deleted. + + :type session_manager_ref: :class:`_weakref.ReferenceType` + :param session_manager_ref: A weak reference to the database session manager. + """ + + manager = session_manager_ref() + if manager is None: + return + + polling_interval_seconds = ( + manager._MAINTENANCE_THREAD_POLLING_INTERVAL.total_seconds() + ) + refresh_interval_seconds = ( + manager._MAINTENANCE_THREAD_REFRESH_INTERVAL.total_seconds() + ) + + session_created_time = time() + + while True: + # Terminate the thread is the database session manager has been deleted. + manager = session_manager_ref() + if manager is None: + return + + # Terminate the thread if corresponding event is set. + if manager._multiplexed_session_terminate_event.is_set(): + return + + # Wait for until the refresh interval has elapsed. + if time() - session_created_time < refresh_interval_seconds: + sleep(polling_interval_seconds) + continue + + with manager._multiplexed_session_lock: + manager._multiplexed_session.delete() + manager._multiplexed_session = manager._build_multiplexed_session() + + session_created_time = time() + + @classmethod + def _use_multiplexed(cls, transaction_type: TransactionType) -> bool: + """Returns whether to use multiplexed sessions for the given transaction type. + + Multiplexed sessions are enabled for read-only transactions if: + * _ENV_VAR_MULTIPLEXED != 'false'. + + Multiplexed sessions are enabled for partitioned transactions if: + * _ENV_VAR_MULTIPLEXED_PARTITIONED != 'false'. + + Multiplexed sessions are enabled for read/write transactions if: + * _ENV_VAR_MULTIPLEXED_READ_WRITE != 'false'. + + :type transaction_type: :class:`TransactionType` + :param transaction_type: the type of transaction + + :rtype: bool + :returns: True if multiplexed sessions should be used for the given transaction + type, False otherwise. + + :raises ValueError: if the transaction type is not supported. 
+ """ + + if transaction_type is TransactionType.READ_ONLY: + return cls._getenv(cls._ENV_VAR_MULTIPLEXED) + + elif transaction_type is TransactionType.PARTITIONED: + return cls._getenv(cls._ENV_VAR_MULTIPLEXED_PARTITIONED) + + elif transaction_type is TransactionType.READ_WRITE: + return cls._getenv(cls._ENV_VAR_MULTIPLEXED_READ_WRITE) + + raise ValueError(f"Transaction type {transaction_type} is not supported.") + + @classmethod + def _getenv(cls, env_var_name: str) -> bool: + """Returns the value of the given environment variable as a boolean. + + True unless explicitly 'false' (case-insensitive). + All other values (including unset) are considered true. + + :type env_var_name: str + :param env_var_name: the name of the boolean environment variable + + :rtype: bool + :returns: True unless the environment variable is set to 'false', False otherwise. + """ + + env_var_value = getenv(env_var_name, "true").lower().strip() + return env_var_value != "false" diff --git a/google/cloud/spanner_v1/gapic/enums.py b/google/cloud/spanner_v1/gapic/enums.py deleted file mode 100644 index 445abc8429..0000000000 --- a/google/cloud/spanner_v1/gapic/enums.py +++ /dev/null @@ -1,118 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Wrappers for protocol buffer enum types.""" - -import enum - - -class NullValue(enum.IntEnum): - """ - ``NullValue`` is a singleton enumeration to represent the null value for - the ``Value`` type union. - - The JSON representation for ``NullValue`` is JSON ``null``. - - Attributes: - NULL_VALUE (int): Null value. - """ - - NULL_VALUE = 0 - - -class TypeCode(enum.IntEnum): - """ - ``TypeCode`` is used as part of ``Type`` to indicate the type of a Cloud - Spanner value. - - Each legal value of a type can be encoded to or decoded from a JSON - value, using the encodings described below. All Cloud Spanner values can - be ``null``, regardless of type; ``null``\ s are always encoded as a - JSON ``null``. - - Attributes: - TYPE_CODE_UNSPECIFIED (int): Not specified. - BOOL (int): Encoded as JSON ``true`` or ``false``. - INT64 (int): Encoded as ``string``, in decimal format. - FLOAT64 (int): Encoded as ``number``, or the strings ``"NaN"``, ``"Infinity"``, or - ``"-Infinity"``. - TIMESTAMP (int): Encoded as ``string`` in RFC 3339 timestamp format. The time zone must - be present, and must be ``"Z"``. - - If the schema has the column option ``allow_commit_timestamp=true``, the - placeholder string ``"spanner.commit_timestamp()"`` can be used to - instruct the system to insert the commit timestamp associated with the - transaction commit. - DATE (int): Encoded as ``string`` in RFC 3339 date format. - STRING (int): Encoded as ``string``. - BYTES (int): Encoded as a base64-encoded ``string``, as described in RFC 4648, - section 4. - ARRAY (int): Encoded as ``list``, where the list elements are represented according - to ``array_element_type``. 
- STRUCT (int): Encoded as ``list``, where list element ``i`` is represented according - to [struct\_type.fields[i]][google.spanner.v1.StructType.fields]. - """ - - TYPE_CODE_UNSPECIFIED = 0 - BOOL = 1 - INT64 = 2 - FLOAT64 = 3 - TIMESTAMP = 4 - DATE = 5 - STRING = 6 - BYTES = 7 - ARRAY = 8 - STRUCT = 9 - - -class ExecuteSqlRequest(object): - class QueryMode(enum.IntEnum): - """ - Mode in which the statement must be processed. - - Attributes: - NORMAL (int): The default mode. Only the statement results are returned. - PLAN (int): This mode returns only the query plan, without any results or - execution statistics information. - PROFILE (int): This mode returns both the query plan and the execution statistics along - with the results. - """ - - NORMAL = 0 - PLAN = 1 - PROFILE = 2 - - -class PlanNode(object): - class Kind(enum.IntEnum): - """ - The kind of ``PlanNode``. Distinguishes between the two different kinds - of nodes that can appear in a query plan. - - Attributes: - KIND_UNSPECIFIED (int): Not specified. - RELATIONAL (int): Denotes a Relational operator node in the expression tree. Relational - operators represent iterative processing of rows during query execution. - For example, a ``TableScan`` operation that reads rows from a table. - SCALAR (int): Denotes a Scalar node in the expression tree. Scalar nodes represent - non-iterable entities in the query plan. For example, constants or - arithmetic operators appearing inside predicate expressions or references - to column names. - """ - - KIND_UNSPECIFIED = 0 - RELATIONAL = 1 - SCALAR = 2 diff --git a/google/cloud/spanner_v1/gapic/spanner_client.py b/google/cloud/spanner_v1/gapic/spanner_client.py deleted file mode 100644 index 20b6522780..0000000000 --- a/google/cloud/spanner_v1/gapic/spanner_client.py +++ /dev/null @@ -1,1910 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""Accesses the google.spanner.v1 Spanner API.""" - -import functools -import pkg_resources -import warnings - -from google.oauth2 import service_account -import google.api_core.client_options -import google.api_core.gapic_v1.client_info -import google.api_core.gapic_v1.config -import google.api_core.gapic_v1.method -import google.api_core.gapic_v1.routing_header -import google.api_core.grpc_helpers -import google.api_core.page_iterator -import google.api_core.path_template -import google.api_core.protobuf_helpers -import grpc - -from google.cloud.spanner_v1.gapic import enums -from google.cloud.spanner_v1.gapic import spanner_client_config -from google.cloud.spanner_v1.gapic.transports import spanner_grpc_transport -from google.cloud.spanner_v1.proto import keys_pb2 -from google.cloud.spanner_v1.proto import mutation_pb2 -from google.cloud.spanner_v1.proto import result_set_pb2 -from google.cloud.spanner_v1.proto import spanner_pb2 -from google.cloud.spanner_v1.proto import spanner_pb2_grpc -from google.cloud.spanner_v1.proto import transaction_pb2 -from google.protobuf import empty_pb2 -from google.protobuf import struct_pb2 - - -_GAPIC_LIBRARY_VERSION = pkg_resources.get_distribution("google-cloud-spanner").version - - -class SpannerClient(object): - """ - Cloud Spanner API - - The Cloud Spanner API can be used to manage sessions and execute - transactions on data stored in Cloud Spanner databases. - """ - - SERVICE_ADDRESS = "spanner.googleapis.com:443" - """The default address of the service.""" - - # The name of the interface for this client. This is the key used to - # find the method configuration in the client_config dictionary. - _INTERFACE_NAME = "google.spanner.v1.Spanner" - - @classmethod - def from_service_account_file(cls, filename, *args, **kwargs): - """Creates an instance of this client using the provided credentials - file. - - Args: - filename (str): The path to the service account private key json - file. - args: Additional arguments to pass to the constructor. - kwargs: Additional arguments to pass to the constructor. - - Returns: - SpannerClient: The constructed client. - """ - credentials = service_account.Credentials.from_service_account_file(filename) - kwargs["credentials"] = credentials - return cls(*args, **kwargs) - - from_service_account_json = from_service_account_file - - @classmethod - def database_path(cls, project, instance, database): - """Return a fully-qualified database string.""" - return google.api_core.path_template.expand( - "projects/{project}/instances/{instance}/databases/{database}", - project=project, - instance=instance, - database=database, - ) - - @classmethod - def session_path(cls, project, instance, database, session): - """Return a fully-qualified session string.""" - return google.api_core.path_template.expand( - "projects/{project}/instances/{instance}/databases/{database}/sessions/{session}", - project=project, - instance=instance, - database=database, - session=session, - ) - - def __init__( - self, - transport=None, - channel=None, - credentials=None, - client_config=None, - client_info=None, - client_options=None, - ): - """Constructor. - - Args: - transport (Union[~.SpannerGrpcTransport, - Callable[[~.Credentials, type], ~.SpannerGrpcTransport]): A transport - instance, responsible for actually making the API calls. - The default transport uses the gRPC protocol. - This argument may also be a callable which returns a - transport instance. 
Callables will be sent the credentials - as the first argument and the default transport class as - the second argument. - channel (grpc.Channel): DEPRECATED. A ``Channel`` instance - through which to make calls. This argument is mutually exclusive - with ``credentials``; providing both will raise an exception. - credentials (google.auth.credentials.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If none - are specified, the client will attempt to ascertain the - credentials from the environment. - This argument is mutually exclusive with providing a - transport instance to ``transport``; doing so will raise - an exception. - client_config (dict): DEPRECATED. A dictionary of call options for - each method. If not specified, the default configuration is used. - client_info (google.api_core.gapic_v1.client_info.ClientInfo): - The client info used to send a user-agent string along with - API requests. If ``None``, then default info will be used. - Generally, you only need to set this if you're developing - your own client library. - client_options (Union[dict, google.api_core.client_options.ClientOptions]): - Client options used to set user options on the client. API Endpoint - should be set through client_options. - """ - # Raise deprecation warnings for things we want to go away. - if client_config is not None: - warnings.warn( - "The `client_config` argument is deprecated.", - PendingDeprecationWarning, - stacklevel=2, - ) - else: - client_config = spanner_client_config.config - - if channel: - warnings.warn( - "The `channel` argument is deprecated; use " "`transport` instead.", - PendingDeprecationWarning, - stacklevel=2, - ) - - api_endpoint = self.SERVICE_ADDRESS - if client_options: - if type(client_options) == dict: - client_options = google.api_core.client_options.from_dict( - client_options - ) - if client_options.api_endpoint: - api_endpoint = client_options.api_endpoint - - # Instantiate the transport. - # The transport is responsible for handling serialization and - # deserialization and actually sending data to the service. - if transport: - if callable(transport): - self.transport = transport( - credentials=credentials, - default_class=spanner_grpc_transport.SpannerGrpcTransport, - address=api_endpoint, - ) - else: - if credentials: - raise ValueError( - "Received both a transport instance and " - "credentials; these are mutually exclusive." - ) - self.transport = transport - else: - self.transport = spanner_grpc_transport.SpannerGrpcTransport( - address=api_endpoint, channel=channel, credentials=credentials - ) - - if client_info is None: - client_info = google.api_core.gapic_v1.client_info.ClientInfo( - gapic_version=_GAPIC_LIBRARY_VERSION - ) - else: - client_info.gapic_version = _GAPIC_LIBRARY_VERSION - self._client_info = client_info - - # Parse out the default settings for retry and timeout for each RPC - # from the client configuration. - # (Ordinarily, these are the defaults specified in the `*_config.py` - # file next to this one.) - self._method_configs = google.api_core.gapic_v1.config.parse_method_configs( - client_config["interfaces"][self._INTERFACE_NAME] - ) - - # Save a dictionary of cached API call functions. - # These are the actual callables which invoke the proper - # transport methods, wrapped with `wrap_method` to add retry, - # timeout, and the like. 
- self._inner_api_calls = {} - - # Service calls - def create_session( - self, - database, - session=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Creates a new session. A session can be used to perform transactions - that read and/or modify data in a Cloud Spanner database. Sessions are - meant to be reused for many consecutive transactions. - - Sessions can only execute one transaction at a time. To execute multiple - concurrent read-write/write-only transactions, create multiple sessions. - Note that standalone reads and queries use a transaction internally, and - count toward the one transaction limit. - - Active sessions use additional server resources, so it is a good idea to - delete idle and unneeded sessions. Aside from explicit deletes, Cloud - Spanner may delete sessions for which no operations are sent for more - than an hour. If a session is deleted, requests to it return - ``NOT_FOUND``. - - Idle sessions can be kept alive by sending a trivial SQL query - periodically, e.g., ``"SELECT 1"``. - - Example: - >>> from google.cloud import spanner_v1 - >>> - >>> client = spanner_v1.SpannerClient() - >>> - >>> database = client.database_path('[PROJECT]', '[INSTANCE]', '[DATABASE]') - >>> - >>> response = client.create_session(database) - - Args: - database (str): Required. The database in which the new session is created. - session (Union[dict, ~google.cloud.spanner_v1.types.Session]): The session to create. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.Session` - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_v1.types.Session` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. 
- if "create_session" not in self._inner_api_calls: - self._inner_api_calls[ - "create_session" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.create_session, - default_retry=self._method_configs["CreateSession"].retry, - default_timeout=self._method_configs["CreateSession"].timeout, - client_info=self._client_info, - ) - - request = spanner_pb2.CreateSessionRequest(database=database, session=session) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("database", database)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["create_session"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def batch_create_sessions( - self, - database, - session_count, - session_template=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Creates multiple new sessions. - - This API can be used to initialize a session cache on the clients. - See https://goo.gl/TgSFN2 for best practices on session cache management. - - Example: - >>> from google.cloud import spanner_v1 - >>> - >>> client = spanner_v1.SpannerClient() - >>> - >>> database = client.database_path('[PROJECT]', '[INSTANCE]', '[DATABASE]') - >>> - >>> # TODO: Initialize `session_count`: - >>> session_count = 0 - >>> - >>> response = client.batch_create_sessions(database, session_count) - - Args: - database (str): Required. The database in which the new sessions are created. - session_count (int): Required. The number of sessions to be created in this batch call. The - API may return fewer than the requested number of sessions. If a - specific number of sessions are desired, the client can make additional - calls to BatchCreateSessions (adjusting ``session_count`` as necessary). - session_template (Union[dict, ~google.cloud.spanner_v1.types.Session]): Parameters to be applied to each created session. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.Session` - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_v1.types.BatchCreateSessionsResponse` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. 
- if "batch_create_sessions" not in self._inner_api_calls: - self._inner_api_calls[ - "batch_create_sessions" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.batch_create_sessions, - default_retry=self._method_configs["BatchCreateSessions"].retry, - default_timeout=self._method_configs["BatchCreateSessions"].timeout, - client_info=self._client_info, - ) - - request = spanner_pb2.BatchCreateSessionsRequest( - database=database, - session_count=session_count, - session_template=session_template, - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("database", database)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["batch_create_sessions"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def get_session( - self, - name, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Gets a session. Returns ``NOT_FOUND`` if the session does not exist. - This is mainly useful for determining whether a session is still alive. - - Example: - >>> from google.cloud import spanner_v1 - >>> - >>> client = spanner_v1.SpannerClient() - >>> - >>> name = client.session_path('[PROJECT]', '[INSTANCE]', '[DATABASE]', '[SESSION]') - >>> - >>> response = client.get_session(name) - - Args: - name (str): Required. The name of the session to retrieve. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_v1.types.Session` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "get_session" not in self._inner_api_calls: - self._inner_api_calls[ - "get_session" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.get_session, - default_retry=self._method_configs["GetSession"].retry, - default_timeout=self._method_configs["GetSession"].timeout, - client_info=self._client_info, - ) - - request = spanner_pb2.GetSessionRequest(name=name) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("name", name)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["get_session"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def list_sessions( - self, - database, - page_size=None, - filter_=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Lists all sessions in a given database. 
- - Example: - >>> from google.cloud import spanner_v1 - >>> - >>> client = spanner_v1.SpannerClient() - >>> - >>> database = client.database_path('[PROJECT]', '[INSTANCE]', '[DATABASE]') - >>> - >>> # Iterate over all results - >>> for element in client.list_sessions(database): - ... # process element - ... pass - >>> - >>> - >>> # Alternatively: - >>> - >>> # Iterate over results one page at a time - >>> for page in client.list_sessions(database).pages: - ... for element in page: - ... # process element - ... pass - - Args: - database (str): Required. The database in which to list sessions. - page_size (int): The maximum number of resources contained in the - underlying API response. If page streaming is performed per- - resource, this parameter does not affect the return value. If page - streaming is performed per-page, this determines the maximum number - of resources in a page. - filter_ (str): An expression for filtering the results of the request. Filter rules are - case insensitive. The fields eligible for filtering are: - - - ``labels.key`` where key is the name of a label - - Some examples of using filters are: - - - ``labels.env:*`` --> The session has the label "env". - - ``labels.env:dev`` --> The session has the label "env" and the value - of the label contains the string "dev". - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.api_core.page_iterator.PageIterator` instance. - An iterable of :class:`~google.cloud.spanner_v1.types.Session` instances. - You can also iterate over the pages of the response - using its `pages` property. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. 
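# Illustrative sketch, not part of the deleted module: listing sessions
# with a label filter. The returned iterator pages transparently; its
# ``pages`` property gives page-at-a-time access, as the docstring above
# shows. The ``env:dev`` label and resource IDs are placeholders.
from google.cloud import spanner_v1

client = spanner_v1.SpannerClient()
database = client.database_path("my-project", "my-instance", "my-database")

for session in client.list_sessions(database, filter_="labels.env:dev"):
    print(session.name)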
- if "list_sessions" not in self._inner_api_calls: - self._inner_api_calls[ - "list_sessions" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.list_sessions, - default_retry=self._method_configs["ListSessions"].retry, - default_timeout=self._method_configs["ListSessions"].timeout, - client_info=self._client_info, - ) - - request = spanner_pb2.ListSessionsRequest( - database=database, page_size=page_size, filter=filter_ - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("database", database)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - iterator = google.api_core.page_iterator.GRPCIterator( - client=None, - method=functools.partial( - self._inner_api_calls["list_sessions"], - retry=retry, - timeout=timeout, - metadata=metadata, - ), - request=request, - items_field="sessions", - request_token_field="page_token", - response_token_field="next_page_token", - ) - return iterator - - def delete_session( - self, - name, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Ends a session, releasing server resources associated with it. This will - asynchronously trigger cancellation of any operations that are running with - this session. - - Example: - >>> from google.cloud import spanner_v1 - >>> - >>> client = spanner_v1.SpannerClient() - >>> - >>> name = client.session_path('[PROJECT]', '[INSTANCE]', '[DATABASE]', '[SESSION]') - >>> - >>> client.delete_session(name) - - Args: - name (str): Required. The name of the session to delete. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. 
- if "delete_session" not in self._inner_api_calls: - self._inner_api_calls[ - "delete_session" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.delete_session, - default_retry=self._method_configs["DeleteSession"].retry, - default_timeout=self._method_configs["DeleteSession"].timeout, - client_info=self._client_info, - ) - - request = spanner_pb2.DeleteSessionRequest(name=name) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("name", name)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - self._inner_api_calls["delete_session"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def execute_sql( - self, - session, - sql, - transaction=None, - params=None, - param_types=None, - resume_token=None, - query_mode=None, - partition_token=None, - seqno=None, - query_options=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Executes an SQL statement, returning all results in a single reply. This - method cannot be used to return a result set larger than 10 MiB; if the - query yields more data than that, the query fails with a - ``FAILED_PRECONDITION`` error. - - Operations inside read-write transactions might return ``ABORTED``. If - this occurs, the application should restart the transaction from the - beginning. See ``Transaction`` for more details. - - Larger result sets can be fetched in streaming fashion by calling - ``ExecuteStreamingSql`` instead. - - Example: - >>> from google.cloud import spanner_v1 - >>> - >>> client = spanner_v1.SpannerClient() - >>> - >>> session = client.session_path('[PROJECT]', '[INSTANCE]', '[DATABASE]', '[SESSION]') - >>> - >>> # TODO: Initialize `sql`: - >>> sql = '' - >>> - >>> response = client.execute_sql(session, sql) - - Args: - session (str): Required. The session in which the SQL query should be performed. - sql (str): Required. The SQL string. - transaction (Union[dict, ~google.cloud.spanner_v1.types.TransactionSelector]): The transaction to use. - - For queries, if none is provided, the default is a temporary read-only - transaction with strong concurrency. - - Standard DML statements require a read-write transaction. To protect - against replays, single-use transactions are not supported. The caller - must either supply an existing transaction ID or begin a new transaction. - - Partitioned DML requires an existing Partitioned DML transaction ID. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.TransactionSelector` - params (Union[dict, ~google.cloud.spanner_v1.types.Struct]): Parameter names and values that bind to placeholders in the SQL string. - - A parameter placeholder consists of the ``@`` character followed by the - parameter name (for example, ``@firstName``). Parameter names can - contain letters, numbers, and underscores. - - Parameters can appear anywhere that a literal value is expected. The - same parameter name can be used more than once, for example: - - ``"WHERE id > @msg_id AND id < @msg_id + 100"`` - - It is an error to execute a SQL statement with unbound parameters. 
- - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.Struct` - param_types (dict[str -> Union[dict, ~google.cloud.spanner_v1.types.Type]]): It is not always possible for Cloud Spanner to infer the right SQL type - from a JSON value. For example, values of type ``BYTES`` and values of - type ``STRING`` both appear in ``params`` as JSON strings. - - In these cases, ``param_types`` can be used to specify the exact SQL - type for some or all of the SQL statement parameters. See the definition - of ``Type`` for more information about SQL types. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.Type` - resume_token (bytes): If this request is resuming a previously interrupted SQL statement - execution, ``resume_token`` should be copied from the last - ``PartialResultSet`` yielded before the interruption. Doing this enables - the new SQL statement execution to resume where the last one left off. - The rest of the request parameters must exactly match the request that - yielded this token. - query_mode (~google.cloud.spanner_v1.types.QueryMode): Used to control the amount of debugging information returned in - ``ResultSetStats``. If ``partition_token`` is set, ``query_mode`` can - only be set to ``QueryMode.NORMAL``. - partition_token (bytes): If present, results will be restricted to the specified partition - previously created using PartitionQuery(). There must be an exact match - for the values of fields common to this message and the - PartitionQueryRequest message used to create this partition\_token. - seqno (long): A per-transaction sequence number used to identify this request. This field - makes each request idempotent such that if the request is received multiple - times, at most one will succeed. - - The sequence number must be monotonically increasing within the - transaction. If a request arrives for the first time with an out-of-order - sequence number, the transaction may be aborted. Replays of previously - handled requests will yield the same response as the first execution. - - Required for DML statements. Ignored for queries. - query_options (Union[dict, ~google.cloud.spanner_v1.types.QueryOptions]): Query optimizer configuration to use for the given query. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.QueryOptions` - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_v1.types.ResultSet` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. 
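# Illustrative sketch, not part of the deleted module: a parameterized
# query against the raw GAPIC client. ``params`` travels as a protobuf
# Struct, and INT64 values are JSON-encoded as strings, which is why an
# explicit ``param_types`` entry is needed (per the docstring above).
# Table, column, and resource names are placeholders.
from google.protobuf.struct_pb2 import Struct

from google.cloud import spanner_v1
from google.cloud.spanner_v1.proto import type_pb2

client = spanner_v1.SpannerClient()
session = client.session_path(
    "my-project", "my-instance", "my-database", "my-session"
)

params = Struct()
params.update({"msg_id": "42"})  # INT64 is transmitted as a decimal string

result_set = client.execute_sql(
    session,
    "SELECT id, body FROM messages WHERE id > @msg_id",
    params=params,
    param_types={"msg_id": type_pb2.Type(code=type_pb2.INT64)},
)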
- if "execute_sql" not in self._inner_api_calls: - self._inner_api_calls[ - "execute_sql" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.execute_sql, - default_retry=self._method_configs["ExecuteSql"].retry, - default_timeout=self._method_configs["ExecuteSql"].timeout, - client_info=self._client_info, - ) - - request = spanner_pb2.ExecuteSqlRequest( - session=session, - sql=sql, - transaction=transaction, - params=params, - param_types=param_types, - resume_token=resume_token, - query_mode=query_mode, - partition_token=partition_token, - seqno=seqno, - query_options=query_options, - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("session", session)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["execute_sql"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def execute_streaming_sql( - self, - session, - sql, - transaction=None, - params=None, - param_types=None, - resume_token=None, - query_mode=None, - partition_token=None, - seqno=None, - query_options=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Like ``ExecuteSql``, except returns the result set as a stream. Unlike - ``ExecuteSql``, there is no limit on the size of the returned result - set. However, no individual row in the result set can exceed 100 MiB, - and no column value can exceed 10 MiB. - - Example: - >>> from google.cloud import spanner_v1 - >>> - >>> client = spanner_v1.SpannerClient() - >>> - >>> session = client.session_path('[PROJECT]', '[INSTANCE]', '[DATABASE]', '[SESSION]') - >>> - >>> # TODO: Initialize `sql`: - >>> sql = '' - >>> - >>> for element in client.execute_streaming_sql(session, sql): - ... # process element - ... pass - - Args: - session (str): Required. The session in which the SQL query should be performed. - sql (str): Required. The SQL string. - transaction (Union[dict, ~google.cloud.spanner_v1.types.TransactionSelector]): The transaction to use. - - For queries, if none is provided, the default is a temporary read-only - transaction with strong concurrency. - - Standard DML statements require a read-write transaction. To protect - against replays, single-use transactions are not supported. The caller - must either supply an existing transaction ID or begin a new transaction. - - Partitioned DML requires an existing Partitioned DML transaction ID. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.TransactionSelector` - params (Union[dict, ~google.cloud.spanner_v1.types.Struct]): Parameter names and values that bind to placeholders in the SQL string. - - A parameter placeholder consists of the ``@`` character followed by the - parameter name (for example, ``@firstName``). Parameter names can - contain letters, numbers, and underscores. - - Parameters can appear anywhere that a literal value is expected. The - same parameter name can be used more than once, for example: - - ``"WHERE id > @msg_id AND id < @msg_id + 100"`` - - It is an error to execute a SQL statement with unbound parameters. 
- - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.Struct` - param_types (dict[str -> Union[dict, ~google.cloud.spanner_v1.types.Type]]): It is not always possible for Cloud Spanner to infer the right SQL type - from a JSON value. For example, values of type ``BYTES`` and values of - type ``STRING`` both appear in ``params`` as JSON strings. - - In these cases, ``param_types`` can be used to specify the exact SQL - type for some or all of the SQL statement parameters. See the definition - of ``Type`` for more information about SQL types. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.Type` - resume_token (bytes): If this request is resuming a previously interrupted SQL statement - execution, ``resume_token`` should be copied from the last - ``PartialResultSet`` yielded before the interruption. Doing this enables - the new SQL statement execution to resume where the last one left off. - The rest of the request parameters must exactly match the request that - yielded this token. - query_mode (~google.cloud.spanner_v1.types.QueryMode): Used to control the amount of debugging information returned in - ``ResultSetStats``. If ``partition_token`` is set, ``query_mode`` can - only be set to ``QueryMode.NORMAL``. - partition_token (bytes): If present, results will be restricted to the specified partition - previously created using PartitionQuery(). There must be an exact match - for the values of fields common to this message and the - PartitionQueryRequest message used to create this partition\_token. - seqno (long): A per-transaction sequence number used to identify this request. This field - makes each request idempotent such that if the request is received multiple - times, at most one will succeed. - - The sequence number must be monotonically increasing within the - transaction. If a request arrives for the first time with an out-of-order - sequence number, the transaction may be aborted. Replays of previously - handled requests will yield the same response as the first execution. - - Required for DML statements. Ignored for queries. - query_options (Union[dict, ~google.cloud.spanner_v1.types.QueryOptions]): Query optimizer configuration to use for the given query. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.QueryOptions` - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - Iterable[~google.cloud.spanner_v1.types.PartialResultSet]. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. 
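# Illustrative sketch, not part of the deleted module: consuming a
# streamed result. Each yielded element is a PartialResultSet whose
# ``values`` must be stitched together across messages
# (``chunked_value`` marks a value split across two messages); the
# higher-level StreamedResultSet in this library performs that
# stitching. Names are placeholders.
from google.cloud import spanner_v1

client = spanner_v1.SpannerClient()
session = client.session_path(
    "my-project", "my-instance", "my-database", "my-session"
)

for partial in client.execute_streaming_sql(session, "SELECT * FROM big_table"):
    for value in partial.values:
        pass  # accumulate values here; respect partial.chunked_value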
- if "execute_streaming_sql" not in self._inner_api_calls: - self._inner_api_calls[ - "execute_streaming_sql" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.execute_streaming_sql, - default_retry=self._method_configs["ExecuteStreamingSql"].retry, - default_timeout=self._method_configs["ExecuteStreamingSql"].timeout, - client_info=self._client_info, - ) - - request = spanner_pb2.ExecuteSqlRequest( - session=session, - sql=sql, - transaction=transaction, - params=params, - param_types=param_types, - resume_token=resume_token, - query_mode=query_mode, - partition_token=partition_token, - seqno=seqno, - query_options=query_options, - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("session", session)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["execute_streaming_sql"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def execute_batch_dml( - self, - session, - transaction, - statements, - seqno, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Executes a batch of SQL DML statements. This method allows many - statements to be run with lower latency than submitting them - sequentially with ``ExecuteSql``. - - Statements are executed in sequential order. A request can succeed even - if a statement fails. The ``ExecuteBatchDmlResponse.status`` field in - the response provides information about the statement that failed. - Clients must inspect this field to determine whether an error occurred. - - Execution stops after the first failed statement; the remaining - statements are not executed. - - Example: - >>> from google.cloud import spanner_v1 - >>> - >>> client = spanner_v1.SpannerClient() - >>> - >>> session = client.session_path('[PROJECT]', '[INSTANCE]', '[DATABASE]', '[SESSION]') - >>> - >>> # TODO: Initialize `transaction`: - >>> transaction = {} - >>> - >>> # TODO: Initialize `statements`: - >>> statements = [] - >>> - >>> # TODO: Initialize `seqno`: - >>> seqno = 0 - >>> - >>> response = client.execute_batch_dml(session, transaction, statements, seqno) - - Args: - session (str): Required. The session in which the DML statements should be performed. - transaction (Union[dict, ~google.cloud.spanner_v1.types.TransactionSelector]): Required. The transaction to use. Must be a read-write transaction. - - To protect against replays, single-use transactions are not supported. The - caller must either supply an existing transaction ID or begin a new - transaction. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.TransactionSelector` - statements (list[Union[dict, ~google.cloud.spanner_v1.types.Statement]]): Required. The list of statements to execute in this batch. Statements - are executed serially, such that the effects of statement ``i`` are - visible to statement ``i+1``. Each statement must be a DML statement. - Execution stops at the first failed statement; the remaining statements - are not executed. - - Callers must provide at least one statement. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.Statement` - seqno (long): Required. A per-transaction sequence number used to identify this request. 
This field - makes each request idempotent such that if the request is received multiple - times, at most one will succeed. - - The sequence number must be monotonically increasing within the - transaction. If a request arrives for the first time with an out-of-order - sequence number, the transaction may be aborted. Replays of previously - handled requests will yield the same response as the first execution. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_v1.types.ExecuteBatchDmlResponse` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "execute_batch_dml" not in self._inner_api_calls: - self._inner_api_calls[ - "execute_batch_dml" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.execute_batch_dml, - default_retry=self._method_configs["ExecuteBatchDml"].retry, - default_timeout=self._method_configs["ExecuteBatchDml"].timeout, - client_info=self._client_info, - ) - - request = spanner_pb2.ExecuteBatchDmlRequest( - session=session, transaction=transaction, statements=statements, seqno=seqno - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("session", session)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["execute_batch_dml"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def read( - self, - session, - table, - columns, - key_set, - transaction=None, - index=None, - limit=None, - resume_token=None, - partition_token=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Reads rows from the database using key lookups and scans, as a simple - key/value style alternative to ``ExecuteSql``. This method cannot be - used to return a result set larger than 10 MiB; if the read matches more - data than that, the read fails with a ``FAILED_PRECONDITION`` error. - - Reads inside read-write transactions might return ``ABORTED``. If this - occurs, the application should restart the transaction from the - beginning. See ``Transaction`` for more details. - - Larger result sets can be yielded in streaming fashion by calling - ``StreamingRead`` instead. - - Example: - >>> from google.cloud import spanner_v1 - >>> - >>> client = spanner_v1.SpannerClient() - >>> - >>> session = client.session_path('[PROJECT]', '[INSTANCE]', '[DATABASE]', '[SESSION]') - >>> - >>> # TODO: Initialize `table`: - >>> table = '' - >>> - >>> # TODO: Initialize `columns`: - >>> columns = [] - >>> - >>> # TODO: Initialize `key_set`: - >>> key_set = {} - >>> - >>> response = client.read(session, table, columns, key_set) - - Args: - session (str): Required. 
The session in which the read should be performed. - table (str): Required. The name of the table in the database to be read. - columns (list[str]): Required. The columns of ``table`` to be returned for each row matching - this request. - key_set (Union[dict, ~google.cloud.spanner_v1.types.KeySet]): Required. ``key_set`` identifies the rows to be yielded. ``key_set`` - names the primary keys of the rows in ``table`` to be yielded, unless - ``index`` is present. If ``index`` is present, then ``key_set`` instead - names index keys in ``index``. - - If the ``partition_token`` field is empty, rows are yielded in table - primary key order (if ``index`` is empty) or index key order (if - ``index`` is non-empty). If the ``partition_token`` field is not empty, - rows will be yielded in an unspecified order. - - It is not an error for the ``key_set`` to name rows that do not exist in - the database. Read yields nothing for nonexistent rows. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.KeySet` - transaction (Union[dict, ~google.cloud.spanner_v1.types.TransactionSelector]): The transaction to use. If none is provided, the default is a - temporary read-only transaction with strong concurrency. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.TransactionSelector` - index (str): If non-empty, the name of an index on ``table``. This index is used - instead of the table primary key when interpreting ``key_set`` and - sorting result rows. See ``key_set`` for further information. - limit (long): If greater than zero, only the first ``limit`` rows are yielded. If - ``limit`` is zero, the default is no limit. A limit cannot be specified - if ``partition_token`` is set. - resume_token (bytes): If this request is resuming a previously interrupted read, - ``resume_token`` should be copied from the last ``PartialResultSet`` - yielded before the interruption. Doing this enables the new read to - resume where the last read left off. The rest of the request parameters - must exactly match the request that yielded this token. - partition_token (bytes): If present, results will be restricted to the specified partition - previously created using PartitionRead(). There must be an exact match - for the values of fields common to this message and the - PartitionReadRequest message used to create this partition\_token. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_v1.types.ResultSet` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. 
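# Illustrative sketch, not part of the deleted module: a simple
# key/value-style read. A KeySet in dict form mirrors the proto: here
# ``all`` selects every row, while specific rows would go in the
# repeated ``keys`` field instead. Table, columns, and resource IDs are
# placeholders.
from google.cloud import spanner_v1

client = spanner_v1.SpannerClient()
session = client.session_path(
    "my-project", "my-instance", "my-database", "my-session"
)

result_set = client.read(
    session,
    table="users",
    columns=["user_id", "email"],
    key_set={"all": True},
)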
- if "read" not in self._inner_api_calls: - self._inner_api_calls["read"] = google.api_core.gapic_v1.method.wrap_method( - self.transport.read, - default_retry=self._method_configs["Read"].retry, - default_timeout=self._method_configs["Read"].timeout, - client_info=self._client_info, - ) - - request = spanner_pb2.ReadRequest( - session=session, - table=table, - columns=columns, - key_set=key_set, - transaction=transaction, - index=index, - limit=limit, - resume_token=resume_token, - partition_token=partition_token, - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("session", session)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["read"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def streaming_read( - self, - session, - table, - columns, - key_set, - transaction=None, - index=None, - limit=None, - resume_token=None, - partition_token=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Like ``Read``, except returns the result set as a stream. Unlike - ``Read``, there is no limit on the size of the returned result set. - However, no individual row in the result set can exceed 100 MiB, and no - column value can exceed 10 MiB. - - Example: - >>> from google.cloud import spanner_v1 - >>> - >>> client = spanner_v1.SpannerClient() - >>> - >>> session = client.session_path('[PROJECT]', '[INSTANCE]', '[DATABASE]', '[SESSION]') - >>> - >>> # TODO: Initialize `table`: - >>> table = '' - >>> - >>> # TODO: Initialize `columns`: - >>> columns = [] - >>> - >>> # TODO: Initialize `key_set`: - >>> key_set = {} - >>> - >>> for element in client.streaming_read(session, table, columns, key_set): - ... # process element - ... pass - - Args: - session (str): Required. The session in which the read should be performed. - table (str): Required. The name of the table in the database to be read. - columns (list[str]): Required. The columns of ``table`` to be returned for each row matching - this request. - key_set (Union[dict, ~google.cloud.spanner_v1.types.KeySet]): Required. ``key_set`` identifies the rows to be yielded. ``key_set`` - names the primary keys of the rows in ``table`` to be yielded, unless - ``index`` is present. If ``index`` is present, then ``key_set`` instead - names index keys in ``index``. - - If the ``partition_token`` field is empty, rows are yielded in table - primary key order (if ``index`` is empty) or index key order (if - ``index`` is non-empty). If the ``partition_token`` field is not empty, - rows will be yielded in an unspecified order. - - It is not an error for the ``key_set`` to name rows that do not exist in - the database. Read yields nothing for nonexistent rows. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.KeySet` - transaction (Union[dict, ~google.cloud.spanner_v1.types.TransactionSelector]): The transaction to use. If none is provided, the default is a - temporary read-only transaction with strong concurrency. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.TransactionSelector` - index (str): If non-empty, the name of an index on ``table``. 
This index is used - instead of the table primary key when interpreting ``key_set`` and - sorting result rows. See ``key_set`` for further information. - limit (long): If greater than zero, only the first ``limit`` rows are yielded. If - ``limit`` is zero, the default is no limit. A limit cannot be specified - if ``partition_token`` is set. - resume_token (bytes): If this request is resuming a previously interrupted read, - ``resume_token`` should be copied from the last ``PartialResultSet`` - yielded before the interruption. Doing this enables the new read to - resume where the last read left off. The rest of the request parameters - must exactly match the request that yielded this token. - partition_token (bytes): If present, results will be restricted to the specified partition - previously created using PartitionRead(). There must be an exact match - for the values of fields common to this message and the - PartitionReadRequest message used to create this partition\_token. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - Iterable[~google.cloud.spanner_v1.types.PartialResultSet]. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "streaming_read" not in self._inner_api_calls: - self._inner_api_calls[ - "streaming_read" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.streaming_read, - default_retry=self._method_configs["StreamingRead"].retry, - default_timeout=self._method_configs["StreamingRead"].timeout, - client_info=self._client_info, - ) - - request = spanner_pb2.ReadRequest( - session=session, - table=table, - columns=columns, - key_set=key_set, - transaction=transaction, - index=index, - limit=limit, - resume_token=resume_token, - partition_token=partition_token, - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("session", session)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["streaming_read"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def begin_transaction( - self, - session, - options_, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Begins a new transaction. This step can often be skipped: ``Read``, - ``ExecuteSql`` and ``Commit`` can begin a new transaction as a - side-effect. - - Example: - >>> from google.cloud import spanner_v1 - >>> - >>> client = spanner_v1.SpannerClient() - >>> - >>> session = client.session_path('[PROJECT]', '[INSTANCE]', '[DATABASE]', '[SESSION]') - >>> - >>> # TODO: Initialize `options_`: - >>> options_ = {} - >>> - >>> response = client.begin_transaction(session, options_) - - Args: - session (str): Required. 
The session in which the transaction runs. - options_ (Union[dict, ~google.cloud.spanner_v1.types.TransactionOptions]): Required. Options for the new transaction. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.TransactionOptions` - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_v1.types.Transaction` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "begin_transaction" not in self._inner_api_calls: - self._inner_api_calls[ - "begin_transaction" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.begin_transaction, - default_retry=self._method_configs["BeginTransaction"].retry, - default_timeout=self._method_configs["BeginTransaction"].timeout, - client_info=self._client_info, - ) - - request = spanner_pb2.BeginTransactionRequest(session=session, options=options_) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("session", session)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["begin_transaction"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def commit( - self, - session, - transaction_id=None, - single_use_transaction=None, - mutations=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Commits a transaction. The request includes the mutations to be applied - to rows in the database. - - ``Commit`` might return an ``ABORTED`` error. This can occur at any - time; commonly, the cause is conflicts with concurrent transactions. - However, it can also happen for a variety of other reasons. If - ``Commit`` returns ``ABORTED``, the caller should re-attempt the - transaction from the beginning, re-using the same session. - - Example: - >>> from google.cloud import spanner_v1 - >>> - >>> client = spanner_v1.SpannerClient() - >>> - >>> session = client.session_path('[PROJECT]', '[INSTANCE]', '[DATABASE]', '[SESSION]') - >>> - >>> response = client.commit(session) - - Args: - session (str): Required. The session in which the transaction to be committed is running. - transaction_id (bytes): Commit a previously-started transaction. - single_use_transaction (Union[dict, ~google.cloud.spanner_v1.types.TransactionOptions]): Execute mutations in a temporary transaction. Note that unlike commit of - a previously-started transaction, commit with a temporary transaction is - non-idempotent. 
That is, if the ``CommitRequest`` is sent to Cloud - Spanner more than once (for instance, due to retries in the application, - or in the transport library), it is possible that the mutations are - executed more than once. If this is undesirable, use - ``BeginTransaction`` and ``Commit`` instead. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.TransactionOptions` - mutations (list[Union[dict, ~google.cloud.spanner_v1.types.Mutation]]): The mutations to be executed when this transaction commits. All - mutations are applied atomically, in the order they appear in - this list. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.Mutation` - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_v1.types.CommitResponse` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "commit" not in self._inner_api_calls: - self._inner_api_calls[ - "commit" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.commit, - default_retry=self._method_configs["Commit"].retry, - default_timeout=self._method_configs["Commit"].timeout, - client_info=self._client_info, - ) - - # Sanity check: We have some fields which are mutually exclusive; - # raise ValueError if more than one is sent. - google.api_core.protobuf_helpers.check_oneof( - transaction_id=transaction_id, single_use_transaction=single_use_transaction - ) - - request = spanner_pb2.CommitRequest( - session=session, - transaction_id=transaction_id, - single_use_transaction=single_use_transaction, - mutations=mutations, - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("session", session)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["commit"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def rollback( - self, - session, - transaction_id, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Rolls back a transaction, releasing any locks it holds. It is a good - idea to call this for any transaction that includes one or more ``Read`` - or ``ExecuteSql`` requests and ultimately decides not to commit. - - ``Rollback`` returns ``OK`` if it successfully aborts the transaction, - the transaction was already aborted, or the transaction is not found. - ``Rollback`` never returns ``ABORTED``. 
- - Example: - >>> from google.cloud import spanner_v1 - >>> - >>> client = spanner_v1.SpannerClient() - >>> - >>> session = client.session_path('[PROJECT]', '[INSTANCE]', '[DATABASE]', '[SESSION]') - >>> - >>> # TODO: Initialize `transaction_id`: - >>> transaction_id = b'' - >>> - >>> client.rollback(session, transaction_id) - - Args: - session (str): Required. The session in which the transaction to roll back is running. - transaction_id (bytes): Required. The transaction to roll back. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "rollback" not in self._inner_api_calls: - self._inner_api_calls[ - "rollback" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.rollback, - default_retry=self._method_configs["Rollback"].retry, - default_timeout=self._method_configs["Rollback"].timeout, - client_info=self._client_info, - ) - - request = spanner_pb2.RollbackRequest( - session=session, transaction_id=transaction_id - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("session", session)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - self._inner_api_calls["rollback"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def partition_query( - self, - session, - sql, - transaction=None, - params=None, - param_types=None, - partition_options=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Creates a set of partition tokens that can be used to execute a query - operation in parallel. Each of the returned partition tokens can be used - by ``ExecuteStreamingSql`` to specify a subset of the query result to - read. The same session and read-only transaction must be used by the - PartitionQueryRequest used to create the partition tokens and the - ExecuteSqlRequests that use the partition tokens. - - Partition tokens become invalid when the session used to create them is - deleted, is idle for too long, begins a new transaction, or becomes too - old. When any of these happen, it is not possible to resume the query, - and the whole operation must be restarted from the beginning. - - Example: - >>> from google.cloud import spanner_v1 - >>> - >>> client = spanner_v1.SpannerClient() - >>> - >>> session = client.session_path('[PROJECT]', '[INSTANCE]', '[DATABASE]', '[SESSION]') - >>> - >>> # TODO: Initialize `sql`: - >>> sql = '' - >>> - >>> response = client.partition_query(session, sql) - - Args: - session (str): Required. The session used to create the partitions. - sql (str): Required. The query request to generate partitions for. 
The request will - fail if the query is not root partitionable. The query plan of a root - partitionable query has a single distributed union operator. A - distributed union operator conceptually divides one or more tables into - multiple splits, remotely evaluates a subquery independently on each - split, and then unions all results. - - This must not contain DML commands, such as INSERT, UPDATE, or DELETE. - Use ``ExecuteStreamingSql`` with a PartitionedDml transaction for large, - partition-friendly DML operations. - transaction (Union[dict, ~google.cloud.spanner_v1.types.TransactionSelector]): Read only snapshot transactions are supported, read/write and single use - transactions are not. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.TransactionSelector` - params (Union[dict, ~google.cloud.spanner_v1.types.Struct]): Parameter names and values that bind to placeholders in the SQL string. - - A parameter placeholder consists of the ``@`` character followed by the - parameter name (for example, ``@firstName``). Parameter names can - contain letters, numbers, and underscores. - - Parameters can appear anywhere that a literal value is expected. The - same parameter name can be used more than once, for example: - - ``"WHERE id > @msg_id AND id < @msg_id + 100"`` - - It is an error to execute a SQL statement with unbound parameters. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.Struct` - param_types (dict[str -> Union[dict, ~google.cloud.spanner_v1.types.Type]]): It is not always possible for Cloud Spanner to infer the right SQL type - from a JSON value. For example, values of type ``BYTES`` and values of - type ``STRING`` both appear in ``params`` as JSON strings. - - In these cases, ``param_types`` can be used to specify the exact SQL - type for some or all of the SQL query parameters. See the definition of - ``Type`` for more information about SQL types. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.Type` - partition_options (Union[dict, ~google.cloud.spanner_v1.types.PartitionOptions]): Additional options that affect how many partitions are created. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.PartitionOptions` - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_v1.types.PartitionResponse` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. 
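# Illustrative sketch, not part of the deleted module: fanning out a
# root-partitionable query. Begin a read-only transaction, create
# partition tokens, then run each partition (typically on separate
# workers) through ExecuteStreamingSql with the same transaction ID.
# The query text and resource names are placeholders.
from google.cloud import spanner_v1

client = spanner_v1.SpannerClient()
session = client.session_path(
    "my-project", "my-instance", "my-database", "my-session"
)
sql = "SELECT * FROM big_table"

txn = client.begin_transaction(session, {"read_only": {"strong": True}})
response = client.partition_query(session, sql, transaction={"id": txn.id})

for partition in response.partitions:
    for partial in client.execute_streaming_sql(
        session,
        sql,
        transaction={"id": txn.id},
        partition_token=partition.partition_token,
    ):
        pass  # process PartialResultSet chunks for this partition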
- if "partition_query" not in self._inner_api_calls: - self._inner_api_calls[ - "partition_query" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.partition_query, - default_retry=self._method_configs["PartitionQuery"].retry, - default_timeout=self._method_configs["PartitionQuery"].timeout, - client_info=self._client_info, - ) - - request = spanner_pb2.PartitionQueryRequest( - session=session, - sql=sql, - transaction=transaction, - params=params, - param_types=param_types, - partition_options=partition_options, - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("session", session)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["partition_query"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - - def partition_read( - self, - session, - table, - key_set, - transaction=None, - index=None, - columns=None, - partition_options=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Creates a set of partition tokens that can be used to execute a read - operation in parallel. Each of the returned partition tokens can be used - by ``StreamingRead`` to specify a subset of the read result to read. The - same session and read-only transaction must be used by the - PartitionReadRequest used to create the partition tokens and the - ReadRequests that use the partition tokens. There are no ordering - guarantees on rows returned among the returned partition tokens, or even - within each individual StreamingRead call issued with a - partition\_token. - - Partition tokens become invalid when the session used to create them is - deleted, is idle for too long, begins a new transaction, or becomes too - old. When any of these happen, it is not possible to resume the read, - and the whole operation must be restarted from the beginning. - - Example: - >>> from google.cloud import spanner_v1 - >>> - >>> client = spanner_v1.SpannerClient() - >>> - >>> session = client.session_path('[PROJECT]', '[INSTANCE]', '[DATABASE]', '[SESSION]') - >>> - >>> # TODO: Initialize `table`: - >>> table = '' - >>> - >>> # TODO: Initialize `key_set`: - >>> key_set = {} - >>> - >>> response = client.partition_read(session, table, key_set) - - Args: - session (str): Required. The session used to create the partitions. - table (str): Required. The name of the table in the database to be read. - key_set (Union[dict, ~google.cloud.spanner_v1.types.KeySet]): Required. ``key_set`` identifies the rows to be yielded. ``key_set`` - names the primary keys of the rows in ``table`` to be yielded, unless - ``index`` is present. If ``index`` is present, then ``key_set`` instead - names index keys in ``index``. - - It is not an error for the ``key_set`` to name rows that do not exist in - the database. Read yields nothing for nonexistent rows. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.KeySet` - transaction (Union[dict, ~google.cloud.spanner_v1.types.TransactionSelector]): Read only snapshot transactions are supported, read/write and single use - transactions are not. 
- - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.TransactionSelector` - index (str): If non-empty, the name of an index on ``table``. This index is used - instead of the table primary key when interpreting ``key_set`` and - sorting result rows. See ``key_set`` for further information. - columns (list[str]): The columns of ``table`` to be returned for each row matching this - request. - partition_options (Union[dict, ~google.cloud.spanner_v1.types.PartitionOptions]): Additional options that affect how many partitions are created. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.spanner_v1.types.PartitionOptions` - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.spanner_v1.types.PartitionResponse` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "partition_read" not in self._inner_api_calls: - self._inner_api_calls[ - "partition_read" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.partition_read, - default_retry=self._method_configs["PartitionRead"].retry, - default_timeout=self._method_configs["PartitionRead"].timeout, - client_info=self._client_info, - ) - - request = spanner_pb2.PartitionReadRequest( - session=session, - table=table, - key_set=key_set, - transaction=transaction, - index=index, - columns=columns, - partition_options=partition_options, - ) - if metadata is None: - metadata = [] - metadata = list(metadata) - try: - routing_header = [("session", session)] - except AttributeError: - pass - else: - routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( - routing_header - ) - metadata.append(routing_metadata) - - return self._inner_api_calls["partition_read"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) diff --git a/google/cloud/spanner_v1/gapic/spanner_client_config.py b/google/cloud/spanner_v1/gapic/spanner_client_config.py deleted file mode 100644 index 44b81c5fb9..0000000000 --- a/google/cloud/spanner_v1/gapic/spanner_client_config.py +++ /dev/null @@ -1,117 +0,0 @@ -config = { - "interfaces": { - "google.spanner.v1.Spanner": { - "retry_codes": { - "idempotent": ["UNAVAILABLE"], - "non_idempotent": [], - "long_running": ["UNAVAILABLE"], - }, - "retry_params": { - "default": { - "initial_retry_delay_millis": 250, - "retry_delay_multiplier": 1.3, - "max_retry_delay_millis": 32000, - "initial_rpc_timeout_millis": 3600000, - "rpc_timeout_multiplier": 1.0, - "max_rpc_timeout_millis": 3600000, - "total_timeout_millis": 3600000, - }, - "streaming": { - "initial_retry_delay_millis": 250, - "retry_delay_multiplier": 1.3, - "max_retry_delay_millis": 32000, - "initial_rpc_timeout_millis": 3600000, - "rpc_timeout_multiplier": 1.0, - "max_rpc_timeout_millis": 3600000, 
- "total_timeout_millis": 3600000, - }, - "long_running": { - "initial_retry_delay_millis": 250, - "retry_delay_multiplier": 1.3, - "max_retry_delay_millis": 32000, - "initial_rpc_timeout_millis": 3600000, - "rpc_timeout_multiplier": 1.0, - "max_rpc_timeout_millis": 3600000, - "total_timeout_millis": 3600000, - }, - }, - "methods": { - "CreateSession": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "BatchCreateSessions": { - "timeout_millis": 60000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "GetSession": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "ListSessions": { - "timeout_millis": 3600000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "DeleteSession": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "ExecuteSql": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "ExecuteStreamingSql": { - "timeout_millis": 3600000, - "retry_codes_name": "non_idempotent", - "retry_params_name": "streaming", - }, - "ExecuteBatchDml": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "Read": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "StreamingRead": { - "timeout_millis": 3600000, - "retry_codes_name": "non_idempotent", - "retry_params_name": "streaming", - }, - "BeginTransaction": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "Commit": { - "timeout_millis": 3600000, - "retry_codes_name": "long_running", - "retry_params_name": "long_running", - }, - "Rollback": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "PartitionQuery": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - "PartitionRead": { - "timeout_millis": 30000, - "retry_codes_name": "idempotent", - "retry_params_name": "default", - }, - }, - } - } -} diff --git a/google/cloud/spanner_v1/gapic/transports/spanner.grpc.config b/google/cloud/spanner_v1/gapic/transports/spanner.grpc.config deleted file mode 100755 index c34397a1c8..0000000000 --- a/google/cloud/spanner_v1/gapic/transports/spanner.grpc.config +++ /dev/null @@ -1,88 +0,0 @@ -channel_pool: { - max_size: 10 - max_concurrent_streams_low_watermark: 100 -} -method: { - name: "/google.spanner.v1.Spanner/CreateSession" - affinity: { - command: BIND - affinity_key: "name" - } -} -method: { - name: "/google.spanner.v1.Spanner/GetSession" - affinity: { - command: BOUND - affinity_key: "name" - } -} -method: { - name: "/google.spanner.v1.Spanner/DeleteSession" - affinity: { - command: UNBIND - affinity_key: "name" - } -} -method: { - name: "/google.spanner.v1.Spanner/ExecuteSql" - affinity: { - command: BOUND - affinity_key: "session" - } -} -method: { - name: "/google.spanner.v1.Spanner/ExecuteStreamingSql" - affinity: { - command: BOUND - affinity_key: "session" - } -} -method: { - name: "/google.spanner.v1.Spanner/Read" - affinity: { - command: BOUND - affinity_key: "session" - } -} -method: { - name: "/google.spanner.v1.Spanner/StreamingRead" - affinity: { - command: BOUND - affinity_key: "session" - } -} -method: { - name: "/google.spanner.v1.Spanner/BeginTransaction" - affinity: { - command: BOUND - 
affinity_key: "session" - } -} -method: { - name: "/google.spanner.v1.Spanner/Commit" - affinity: { - command: BOUND - affinity_key: "session" - } -} -method: { - name: "/google.spanner.v1.Spanner/Rollback" - affinity: { - command: BOUND - affinity_key: "session" - } -} -method: { - name: "/google.spanner.v1.Spanner/PartitionQuery" - affinity: { - command: BOUND - affinity_key: "session" - } -} -method: { - name: "/google.spanner.v1.Spanner/PartitionRead" - affinity: { - command: BOUND - affinity_key: "session" - } -} diff --git a/google/cloud/spanner_v1/gapic/transports/spanner_grpc_transport.py b/google/cloud/spanner_v1/gapic/transports/spanner_grpc_transport.py deleted file mode 100644 index 72b7beeda6..0000000000 --- a/google/cloud/spanner_v1/gapic/transports/spanner_grpc_transport.py +++ /dev/null @@ -1,415 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import pkg_resources -import grpc_gcp - -import google.api_core.grpc_helpers - -from google.cloud.spanner_v1.proto import spanner_pb2_grpc - - -_GRPC_KEEPALIVE_MS = 2 * 60 * 1000 -_SPANNER_GRPC_CONFIG = "spanner.grpc.config" - - -class SpannerGrpcTransport(object): - """gRPC transport class providing stubs for - google.spanner.v1 Spanner API. - - The transport provides access to the raw gRPC stubs, - which can be used to take advantage of advanced - features of gRPC. - """ - - # The scopes needed to make gRPC calls to all of the methods defined - # in this service. - _OAUTH_SCOPES = ( - "https://www.googleapis.com/auth/cloud-platform", - "https://www.googleapis.com/auth/spanner.data", - ) - - def __init__( - self, channel=None, credentials=None, address="spanner.googleapis.com:443" - ): - """Instantiate the transport class. - - Args: - channel (grpc.Channel): A ``Channel`` instance through - which to make calls. This argument is mutually exclusive - with ``credentials``; providing both will raise an exception. - credentials (google.auth.credentials.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If none - are specified, the client will attempt to ascertain the - credentials from the environment. - address (str): The address where the service is hosted. - """ - # If both `channel` and `credentials` are specified, raise an - # exception (channels come with credentials baked in already). - if channel is not None and credentials is not None: - raise ValueError( - "The `channel` and `credentials` arguments are mutually " "exclusive." - ) - - # Create the channel. - if channel is None: - channel = self.create_channel( - address=address, - credentials=credentials, - options={ - "grpc.max_send_message_length": -1, - "grpc.max_receive_message_length": -1, - "grpc.keepalive_time_ms": _GRPC_KEEPALIVE_MS, - }.items(), - ) - - self._channel = channel - - # gRPC uses objects called "stubs" that are bound to the - # channel and provide a basic method for each RPC. 
- self._stubs = {"spanner_stub": spanner_pb2_grpc.SpannerStub(channel)} - - @classmethod - def create_channel( - cls, address="spanner.googleapis.com:443", credentials=None, **kwargs - ): - """Create and return a gRPC channel object. - - Args: - address (str): The host for the channel to use. - credentials (~.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If - none are specified, the client will attempt to ascertain - the credentials from the environment. - kwargs (dict): Keyword arguments, which are passed to the - channel creation. - - Returns: - grpc.Channel: A gRPC channel object. - """ - grpc_gcp_config = grpc_gcp.api_config_from_text_pb( - pkg_resources.resource_string(__name__, _SPANNER_GRPC_CONFIG) - ) - options = [(grpc_gcp.API_CONFIG_CHANNEL_ARG, grpc_gcp_config)] - if "options" in kwargs: - options.extend(kwargs["options"]) - kwargs["options"] = options - return google.api_core.grpc_helpers.create_channel( - address, credentials=credentials, scopes=cls._OAUTH_SCOPES, **kwargs - ) - - @property - def channel(self): - """The gRPC channel used by the transport. - - Returns: - grpc.Channel: A gRPC channel object. - """ - return self._channel - - @property - def create_session(self): - """Return the gRPC stub for :meth:`SpannerClient.create_session`. - - Creates a new session. A session can be used to perform transactions - that read and/or modify data in a Cloud Spanner database. Sessions are - meant to be reused for many consecutive transactions. - - Sessions can only execute one transaction at a time. To execute multiple - concurrent read-write/write-only transactions, create multiple sessions. - Note that standalone reads and queries use a transaction internally, and - count toward the one transaction limit. - - Active sessions use additional server resources, so it is a good idea to - delete idle and unneeded sessions. Aside from explicit deletes, Cloud - Spanner may delete sessions for which no operations are sent for more - than an hour. If a session is deleted, requests to it return - ``NOT_FOUND``. - - Idle sessions can be kept alive by sending a trivial SQL query - periodically, e.g., ``"SELECT 1"``. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["spanner_stub"].CreateSession - - @property - def batch_create_sessions(self): - """Return the gRPC stub for :meth:`SpannerClient.batch_create_sessions`. - - Creates multiple new sessions. - - This API can be used to initialize a session cache on the clients. - See https://goo.gl/TgSFN2 for best practices on session cache management. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["spanner_stub"].BatchCreateSessions - - @property - def get_session(self): - """Return the gRPC stub for :meth:`SpannerClient.get_session`. - - Gets a session. Returns ``NOT_FOUND`` if the session does not exist. - This is mainly useful for determining whether a session is still alive. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["spanner_stub"].GetSession - - @property - def list_sessions(self): - """Return the gRPC stub for :meth:`SpannerClient.list_sessions`. - - Lists all sessions in a given database. 
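The keep-alive suggested in the ``CreateSession`` documentation above is literally a trivial query. A minimal sketch through the high-level client, with placeholder instance and database IDs:

```python
from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("my-database")

# Any cheap statement works as a keep-alive; "SELECT 1" is the
# conventional choice called out in the CreateSession docs.
with database.snapshot() as snapshot:
    snapshot.execute_sql("SELECT 1").one()
```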
- - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["spanner_stub"].ListSessions - - @property - def delete_session(self): - """Return the gRPC stub for :meth:`SpannerClient.delete_session`. - - Ends a session, releasing server resources associated with it. This will - asynchronously trigger cancellation of any operations that are running with - this session. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["spanner_stub"].DeleteSession - - @property - def execute_sql(self): - """Return the gRPC stub for :meth:`SpannerClient.execute_sql`. - - Executes an SQL statement, returning all results in a single reply. This - method cannot be used to return a result set larger than 10 MiB; if the - query yields more data than that, the query fails with a - ``FAILED_PRECONDITION`` error. - - Operations inside read-write transactions might return ``ABORTED``. If - this occurs, the application should restart the transaction from the - beginning. See ``Transaction`` for more details. - - Larger result sets can be fetched in streaming fashion by calling - ``ExecuteStreamingSql`` instead. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["spanner_stub"].ExecuteSql - - @property - def execute_streaming_sql(self): - """Return the gRPC stub for :meth:`SpannerClient.execute_streaming_sql`. - - Like ``ExecuteSql``, except returns the result set as a stream. Unlike - ``ExecuteSql``, there is no limit on the size of the returned result - set. However, no individual row in the result set can exceed 100 MiB, - and no column value can exceed 10 MiB. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["spanner_stub"].ExecuteStreamingSql - - @property - def execute_batch_dml(self): - """Return the gRPC stub for :meth:`SpannerClient.execute_batch_dml`. - - Executes a batch of SQL DML statements. This method allows many - statements to be run with lower latency than submitting them - sequentially with ``ExecuteSql``. - - Statements are executed in sequential order. A request can succeed even - if a statement fails. The ``ExecuteBatchDmlResponse.status`` field in - the response provides information about the statement that failed. - Clients must inspect this field to determine whether an error occurred. - - Execution stops after the first failed statement; the remaining - statements are not executed. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["spanner_stub"].ExecuteBatchDml - - @property - def read(self): - """Return the gRPC stub for :meth:`SpannerClient.read`. - - Reads rows from the database using key lookups and scans, as a simple - key/value style alternative to ``ExecuteSql``. This method cannot be - used to return a result set larger than 10 MiB; if the read matches more - data than that, the read fails with a ``FAILED_PRECONDITION`` error. - - Reads inside read-write transactions might return ``ABORTED``. If this - occurs, the application should restart the transaction from the - beginning. 
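Because ``ExecuteBatchDml`` stops at the first failed statement without raising, callers must inspect the returned status themselves. A sketch using the high-level wrapper, assuming a ``database`` object and a hypothetical ``players`` table:

```python
from google.rpc.code_pb2 import OK

def insert_players(transaction):
    # Statements run in order; on failure, row_counts covers only the
    # statements that succeeded and status describes the failing one.
    status, row_counts = transaction.batch_update(
        [
            "INSERT INTO players (id, name) VALUES (1, 'Ada')",
            "INSERT INTO players (id, name) VALUES (2, 'Grace')",
        ]
    )
    if status.code != OK:
        raise RuntimeError(
            "statement %d failed: %s" % (len(row_counts), status.message)
        )

database.run_in_transaction(insert_players)
```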
See ``Transaction`` for more details. - - Larger result sets can be yielded in streaming fashion by calling - ``StreamingRead`` instead. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["spanner_stub"].Read - - @property - def streaming_read(self): - """Return the gRPC stub for :meth:`SpannerClient.streaming_read`. - - Like ``Read``, except returns the result set as a stream. Unlike - ``Read``, there is no limit on the size of the returned result set. - However, no individual row in the result set can exceed 100 MiB, and no - column value can exceed 10 MiB. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["spanner_stub"].StreamingRead - - @property - def begin_transaction(self): - """Return the gRPC stub for :meth:`SpannerClient.begin_transaction`. - - Begins a new transaction. This step can often be skipped: ``Read``, - ``ExecuteSql`` and ``Commit`` can begin a new transaction as a - side-effect. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["spanner_stub"].BeginTransaction - - @property - def commit(self): - """Return the gRPC stub for :meth:`SpannerClient.commit`. - - Commits a transaction. The request includes the mutations to be applied - to rows in the database. - - ``Commit`` might return an ``ABORTED`` error. This can occur at any - time; commonly, the cause is conflicts with concurrent transactions. - However, it can also happen for a variety of other reasons. If - ``Commit`` returns ``ABORTED``, the caller should re-attempt the - transaction from the beginning, re-using the same session. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["spanner_stub"].Commit - - @property - def rollback(self): - """Return the gRPC stub for :meth:`SpannerClient.rollback`. - - Rolls back a transaction, releasing any locks it holds. It is a good - idea to call this for any transaction that includes one or more ``Read`` - or ``ExecuteSql`` requests and ultimately decides not to commit. - - ``Rollback`` returns ``OK`` if it successfully aborts the transaction, - the transaction was already aborted, or the transaction is not found. - ``Rollback`` never returns ``ABORTED``. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["spanner_stub"].Rollback - - @property - def partition_query(self): - """Return the gRPC stub for :meth:`SpannerClient.partition_query`. - - Creates a set of partition tokens that can be used to execute a query - operation in parallel. Each of the returned partition tokens can be used - by ``ExecuteStreamingSql`` to specify a subset of the query result to - read. The same session and read-only transaction must be used by the - PartitionQueryRequest used to create the partition tokens and the - ExecuteSqlRequests that use the partition tokens. - - Partition tokens become invalid when the session used to create them is - deleted, is idle for too long, begins a new transaction, or becomes too - old. 
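The usual way to honor the "re-attempt from the beginning, re-using the same session" contract for ``ABORTED`` commits is to let the client drive the retry loop. A sketch, assuming a ``database`` object and a hypothetical ``accounts`` table:

```python
from google.cloud.spanner_v1 import param_types

def debit(transaction):
    # If Commit returns ABORTED, run_in_transaction re-invokes this
    # function in a fresh transaction on the same session.
    transaction.execute_update(
        "UPDATE accounts SET balance = balance - @amount WHERE id = @id",
        params={"amount": 10, "id": 1},
        param_types={"amount": param_types.INT64, "id": param_types.INT64},
    )

database.run_in_transaction(debit)
```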
When any of these happen, it is not possible to resume the query, - and the whole operation must be restarted from the beginning. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["spanner_stub"].PartitionQuery - - @property - def partition_read(self): - """Return the gRPC stub for :meth:`SpannerClient.partition_read`. - - Creates a set of partition tokens that can be used to execute a read - operation in parallel. Each of the returned partition tokens can be used - by ``StreamingRead`` to specify a subset of the read result to read. The - same session and read-only transaction must be used by the - PartitionReadRequest used to create the partition tokens and the - ReadRequests that use the partition tokens. There are no ordering - guarantees on rows returned among the returned partition tokens, or even - within each individual StreamingRead call issued with a - partition\_token. - - Partition tokens become invalid when the session used to create them is - deleted, is idle for too long, begins a new transaction, or becomes too - old. When any of these happen, it is not possible to resume the read, - and the whole operation must be restarted from the beginning. - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. - """ - return self._stubs["spanner_stub"].PartitionRead diff --git a/google/cloud/spanner_v1/gapic_metadata.json b/google/cloud/spanner_v1/gapic_metadata.json new file mode 100644 index 0000000000..f5957c633a --- /dev/null +++ b/google/cloud/spanner_v1/gapic_metadata.json @@ -0,0 +1,268 @@ + { + "comment": "This file maps proto services/RPCs to the corresponding library clients/methods", + "language": "python", + "libraryPackage": "google.cloud.spanner_v1", + "protoPackage": "google.spanner.v1", + "schema": "1.0", + "services": { + "Spanner": { + "clients": { + "grpc": { + "libraryClient": "SpannerClient", + "rpcs": { + "BatchCreateSessions": { + "methods": [ + "batch_create_sessions" + ] + }, + "BatchWrite": { + "methods": [ + "batch_write" + ] + }, + "BeginTransaction": { + "methods": [ + "begin_transaction" + ] + }, + "Commit": { + "methods": [ + "commit" + ] + }, + "CreateSession": { + "methods": [ + "create_session" + ] + }, + "DeleteSession": { + "methods": [ + "delete_session" + ] + }, + "ExecuteBatchDml": { + "methods": [ + "execute_batch_dml" + ] + }, + "ExecuteSql": { + "methods": [ + "execute_sql" + ] + }, + "ExecuteStreamingSql": { + "methods": [ + "execute_streaming_sql" + ] + }, + "GetSession": { + "methods": [ + "get_session" + ] + }, + "ListSessions": { + "methods": [ + "list_sessions" + ] + }, + "PartitionQuery": { + "methods": [ + "partition_query" + ] + }, + "PartitionRead": { + "methods": [ + "partition_read" + ] + }, + "Read": { + "methods": [ + "read" + ] + }, + "Rollback": { + "methods": [ + "rollback" + ] + }, + "StreamingRead": { + "methods": [ + "streaming_read" + ] + } + } + }, + "grpc-async": { + "libraryClient": "SpannerAsyncClient", + "rpcs": { + "BatchCreateSessions": { + "methods": [ + "batch_create_sessions" + ] + }, + "BatchWrite": { + "methods": [ + "batch_write" + ] + }, + "BeginTransaction": { + "methods": [ + "begin_transaction" + ] + }, + "Commit": { + "methods": [ + "commit" + ] + }, + "CreateSession": { + "methods": [ + "create_session" + ] + }, + "DeleteSession": { + "methods": [ + "delete_session" + ] + }, + "ExecuteBatchDml": { + 
"methods": [ + "execute_batch_dml" + ] + }, + "ExecuteSql": { + "methods": [ + "execute_sql" + ] + }, + "ExecuteStreamingSql": { + "methods": [ + "execute_streaming_sql" + ] + }, + "GetSession": { + "methods": [ + "get_session" + ] + }, + "ListSessions": { + "methods": [ + "list_sessions" + ] + }, + "PartitionQuery": { + "methods": [ + "partition_query" + ] + }, + "PartitionRead": { + "methods": [ + "partition_read" + ] + }, + "Read": { + "methods": [ + "read" + ] + }, + "Rollback": { + "methods": [ + "rollback" + ] + }, + "StreamingRead": { + "methods": [ + "streaming_read" + ] + } + } + }, + "rest": { + "libraryClient": "SpannerClient", + "rpcs": { + "BatchCreateSessions": { + "methods": [ + "batch_create_sessions" + ] + }, + "BatchWrite": { + "methods": [ + "batch_write" + ] + }, + "BeginTransaction": { + "methods": [ + "begin_transaction" + ] + }, + "Commit": { + "methods": [ + "commit" + ] + }, + "CreateSession": { + "methods": [ + "create_session" + ] + }, + "DeleteSession": { + "methods": [ + "delete_session" + ] + }, + "ExecuteBatchDml": { + "methods": [ + "execute_batch_dml" + ] + }, + "ExecuteSql": { + "methods": [ + "execute_sql" + ] + }, + "ExecuteStreamingSql": { + "methods": [ + "execute_streaming_sql" + ] + }, + "GetSession": { + "methods": [ + "get_session" + ] + }, + "ListSessions": { + "methods": [ + "list_sessions" + ] + }, + "PartitionQuery": { + "methods": [ + "partition_query" + ] + }, + "PartitionRead": { + "methods": [ + "partition_read" + ] + }, + "Read": { + "methods": [ + "read" + ] + }, + "Rollback": { + "methods": [ + "rollback" + ] + }, + "StreamingRead": { + "methods": [ + "streaming_read" + ] + } + } + } + } + } + } +} diff --git a/google/cloud/spanner_v1/gapic_version.py b/google/cloud/spanner_v1/gapic_version.py new file mode 100644 index 0000000000..fa3f4c040d --- /dev/null +++ b/google/cloud/spanner_v1/gapic_version.py @@ -0,0 +1,16 @@ +# -*- coding: utf-8 -*- +# Copyright 2022 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +__version__ = "3.58.0" # {x-release-please-version} diff --git a/google/cloud/spanner_v1/instance.py b/google/cloud/spanner_v1/instance.py index f0809e7d81..a67e0e630b 100644 --- a/google/cloud/spanner_v1/instance.py +++ b/google/cloud/spanner_v1/instance.py @@ -15,55 +15,57 @@ """User friendly container for Cloud Spanner Instance.""" import google.api_core.operation +from google.api_core.exceptions import InvalidArgument import re +import typing -from google.cloud.spanner_admin_instance_v1.proto import ( - spanner_instance_admin_pb2 as admin_v1_pb2, -) -from google.cloud.spanner_admin_database_v1.proto import ( - backup_pb2, - spanner_database_admin_pb2, -) from google.protobuf.empty_pb2 import Empty from google.protobuf.field_mask_pb2 import FieldMask - -# pylint: disable=ungrouped-imports from google.cloud.exceptions import NotFound + +from google.cloud.spanner_admin_instance_v1 import Instance as InstancePB +from google.cloud.spanner_admin_database_v1.types import backup +from google.cloud.spanner_admin_database_v1.types import spanner_database_admin +from google.cloud.spanner_admin_database_v1 import DatabaseDialect +from google.cloud.spanner_admin_database_v1 import ListBackupsRequest +from google.cloud.spanner_admin_database_v1 import ListBackupOperationsRequest +from google.cloud.spanner_admin_database_v1 import ListDatabasesRequest +from google.cloud.spanner_admin_database_v1 import ListDatabaseOperationsRequest from google.cloud.spanner_v1._helpers import _metadata_with_prefix from google.cloud.spanner_v1.backup import Backup from google.cloud.spanner_v1.database import Database -from google.cloud.spanner_v1.pool import BurstyPool - -# pylint: enable=ungrouped-imports - +from google.cloud.spanner_v1.testing.database_test import TestDatabase _INSTANCE_NAME_RE = re.compile( r"^projects/(?P[^/]+)/" r"instances/(?P[a-z][-a-z0-9]*)$" ) DEFAULT_NODE_COUNT = 1 - -_OPERATION_METADATA_MESSAGES = ( - backup_pb2.Backup, - backup_pb2.CreateBackupMetadata, - spanner_database_admin_pb2.CreateDatabaseMetadata, - spanner_database_admin_pb2.Database, - spanner_database_admin_pb2.OptimizeRestoredDatabaseMetadata, - spanner_database_admin_pb2.RestoreDatabaseMetadata, - spanner_database_admin_pb2.UpdateDatabaseDdlMetadata, +PROCESSING_UNITS_PER_NODE = 1000 + +_OPERATION_METADATA_MESSAGES: typing.Tuple = ( + backup.Backup, + backup.CreateBackupMetadata, + backup.CopyBackupMetadata, + spanner_database_admin.CreateDatabaseMetadata, + spanner_database_admin.Database, + spanner_database_admin.OptimizeRestoredDatabaseMetadata, + spanner_database_admin.RestoreDatabaseMetadata, + spanner_database_admin.UpdateDatabaseDdlMetadata, ) _OPERATION_METADATA_TYPES = { - "type.googleapis.com/{}".format(message.DESCRIPTOR.full_name): message + "type.googleapis.com/{}".format(message._meta.full_name): message for message in _OPERATION_METADATA_MESSAGES } _OPERATION_RESPONSE_TYPES = { - backup_pb2.CreateBackupMetadata: backup_pb2.Backup, - spanner_database_admin_pb2.CreateDatabaseMetadata: spanner_database_admin_pb2.Database, - spanner_database_admin_pb2.OptimizeRestoredDatabaseMetadata: spanner_database_admin_pb2.Database, - spanner_database_admin_pb2.RestoreDatabaseMetadata: spanner_database_admin_pb2.Database, - spanner_database_admin_pb2.UpdateDatabaseDdlMetadata: Empty, + backup.CreateBackupMetadata: backup.Backup, + backup.CopyBackupMetadata: backup.Backup, + spanner_database_admin.CreateDatabaseMetadata: spanner_database_admin.Database, + spanner_database_admin.OptimizeRestoredDatabaseMetadata: 
spanner_database_admin.Database, + spanner_database_admin.RestoreDatabaseMetadata: spanner_database_admin.Database, + spanner_database_admin.UpdateDatabaseDdlMetadata: Empty, } @@ -96,11 +98,18 @@ class Instance(object): :type node_count: int :param node_count: (Optional) Number of nodes allocated to the instance. + :type processing_units: int + :param processing_units: (Optional) The number of processing units + allocated to this instance. + :type display_name: str :param display_name: (Optional) The display name for the instance in the Cloud Console UI. (Must be between 4 and 30 characters.) If this value is not set in the constructor, will fall back to the instance ID. + + :type labels: dict (str -> str) or None + :param labels: (Optional) User-assigned labels for this instance. """ def __init__( @@ -108,16 +117,34 @@ def __init__( instance_id, client, configuration_name=None, - node_count=DEFAULT_NODE_COUNT, + node_count=None, display_name=None, emulator_host=None, + labels=None, + processing_units=None, ): self.instance_id = instance_id self._client = client self.configuration_name = configuration_name - self.node_count = node_count + if node_count is not None and processing_units is not None: + if processing_units != node_count * PROCESSING_UNITS_PER_NODE: + raise InvalidArgument( + "Only one of node count and processing units can be set." + ) + if node_count is None and processing_units is None: + self._node_count = DEFAULT_NODE_COUNT + self._processing_units = DEFAULT_NODE_COUNT * PROCESSING_UNITS_PER_NODE + elif node_count is not None: + self._node_count = node_count + self._processing_units = node_count * PROCESSING_UNITS_PER_NODE + else: + self._processing_units = processing_units + self._node_count = processing_units // PROCESSING_UNITS_PER_NODE self.display_name = display_name or instance_id self.emulator_host = emulator_host + if labels is None: + labels = {} + self.labels = labels def _update_from_pb(self, instance_pb): """Refresh self from the server-provided protobuf. @@ -128,7 +155,9 @@ def _update_from_pb(self, instance_pb): raise ValueError("Instance protobuf does not contain display_name") self.display_name = instance_pb.display_name self.configuration_name = instance_pb.config - self.node_count = instance_pb.node_count + self._node_count = instance_pb.node_count + self._processing_units = instance_pb.processing_units + self.labels = instance_pb.labels @classmethod def from_pb(cls, instance_pb, client): @@ -183,6 +212,44 @@ def name(self): """ return self._client.project_name + "/instances/" + self.instance_id + @property + def processing_units(self): + """Processing units used in requests. + + :rtype: int + :returns: The number of processing units allocated to this instance. + """ + return self._processing_units + + @processing_units.setter + def processing_units(self, value): + """Sets the processing units for requests. Affects node_count. + + :param value: The number of processing units allocated to this instance. + """ + self._processing_units = value + self._node_count = value // PROCESSING_UNITS_PER_NODE + + @property + def node_count(self): + """Node count used in requests. + + :rtype: int + :returns: + The number of nodes in the instance's cluster; + used to set up the instance's cluster. + """ + return self._node_count + + @node_count.setter + def node_count(self, value): + """Sets the node count for requests. Affects processing_units. + + :param value: The number of nodes in the instance's cluster. 
+ """ + self._node_count = value + self._processing_units = value * PROCESSING_UNITS_PER_NODE + def __eq__(self, other): if not isinstance(other, self.__class__): return NotImplemented @@ -211,7 +278,8 @@ def copy(self): self.instance_id, new_client, self.configuration_name, - node_count=self.node_count, + node_count=self._node_count, + processing_units=self._processing_units, display_name=self.display_name, ) @@ -239,11 +307,12 @@ def create(self): :raises Conflict: if the instance already exists """ api = self._client.instance_admin_api - instance_pb = admin_v1_pb2.Instance( + instance_pb = InstancePB( name=self.name, config=self.configuration_name, display_name=self.display_name, - node_count=self.node_count, + processing_units=self._processing_units, + labels=self.labels, ) metadata = _metadata_with_prefix(self.name) @@ -269,7 +338,7 @@ def exists(self): metadata = _metadata_with_prefix(self.name) try: - api.get_instance(self.name, metadata=metadata) + api.get_instance(name=self.name, metadata=metadata) except NotFound: return False @@ -286,7 +355,7 @@ def reload(self): api = self._client.instance_admin_api metadata = _metadata_with_prefix(self.name) - instance_pb = api.get_instance(self.name, metadata=metadata) + instance_pb = api.get_instance(name=self.name, metadata=metadata) self._update_from_pb(instance_pb) @@ -298,8 +367,8 @@ def update(self): .. note:: - Updates the ``display_name`` and ``node_count``. To change those - values before updating, set them via + Updates the ``display_name``, ``node_count``, ``processing_units`` + and ``labels``. To change those values before updating, set them via .. code:: python @@ -313,13 +382,19 @@ def update(self): :raises NotFound: if the instance does not exist """ api = self._client.instance_admin_api - instance_pb = admin_v1_pb2.Instance( + instance_pb = InstancePB( name=self.name, config=self.configuration_name, display_name=self.display_name, - node_count=self.node_count, + node_count=self._node_count, + processing_units=self._processing_units, + labels=self.labels, + ) + + # Always update only processing_units, not nodes + field_mask = FieldMask( + paths=["config", "display_name", "processing_units", "labels"] ) - field_mask = FieldMask(paths=["config", "display_name", "node_count"]) metadata = _metadata_with_prefix(self.name) future = api.update_instance( @@ -340,34 +415,105 @@ def delete(self): Soon afterward: - * The instance and all databases within the instance will be deleteed. + * The instance and all databases within the instance will be deleted. All data in the databases will be permanently deleted. """ api = self._client.instance_admin_api metadata = _metadata_with_prefix(self.name) - api.delete_instance(self.name, metadata=metadata) + api.delete_instance(name=self.name, metadata=metadata) - def database(self, database_id, ddl_statements=(), pool=None): + def database( + self, + database_id, + ddl_statements=(), + pool=None, + logger=None, + encryption_config=None, + database_dialect=DatabaseDialect.DATABASE_DIALECT_UNSPECIFIED, + database_role=None, + enable_drop_protection=False, + # should be only set for tests if tests want to use interceptors + enable_interceptors_in_tests=False, + proto_descriptors=None, + ): """Factory to create a database within this instance. :type database_id: str - :param database_id: The ID of the instance. + :param database_id: The ID of the database. :type ddl_statements: list of string :param ddl_statements: (Optional) DDL statements, excluding the - 'CREATE DATABSE' statement. 
+ 'CREATE DATABASE' statement. :type pool: concrete subclass of :class:`~google.cloud.spanner_v1.pool.AbstractSessionPool`. :param pool: (Optional) session pool to be used by database. + :type logger: :class:`logging.Logger` + :param logger: (Optional) a custom logger that is used if `log_commit_stats` + is `True` to log commit statistics. If not passed, a logger + will be created when needed that will log the commit statistics + to stdout. + + :type encryption_config: + :class:`~google.cloud.spanner_admin_database_v1.types.EncryptionConfig` + or :class:`~google.cloud.spanner_admin_database_v1.types.RestoreDatabaseEncryptionConfig` + or :class:`dict` + :param encryption_config: + (Optional) Encryption configuration for the database. + If a dict is provided, it must be of the same form as either of the protobuf + messages :class:`~google.cloud.spanner_admin_database_v1.types.EncryptionConfig` + or :class:`~google.cloud.spanner_admin_database_v1.types.RestoreDatabaseEncryptionConfig` + + :type database_dialect: + :class:`~google.cloud.spanner_admin_database_v1.types.DatabaseDialect` + :param database_dialect: + (Optional) database dialect for the database + + :type enable_drop_protection: boolean + :param enable_drop_protection: (Optional) Represents whether the database + has drop protection enabled or not. + + :type enable_interceptors_in_tests: boolean + :param enable_interceptors_in_tests: (Optional) should only be set to True + for tests if the tests want to use interceptors. + + :type proto_descriptors: bytes + :param proto_descriptors: (Optional) Proto descriptors used by CREATE/ALTER PROTO BUNDLE + statements in 'ddl_statements' above. + :rtype: :class:`~google.cloud.spanner_v1.database.Database` :returns: a database owned by this instance. """ - return Database(database_id, self, ddl_statements=ddl_statements, pool=pool) - def list_databases(self, page_size=None, page_token=None): + if not enable_interceptors_in_tests: + return Database( + database_id, + self, + ddl_statements=ddl_statements, + pool=pool, + logger=logger, + encryption_config=encryption_config, + database_dialect=database_dialect, + database_role=database_role, + enable_drop_protection=enable_drop_protection, + proto_descriptors=proto_descriptors, + ) + else: + return TestDatabase( + database_id, + self, + ddl_statements=ddl_statements, + pool=pool, + logger=logger, + encryption_config=encryption_config, + database_dialect=database_dialect, + database_role=database_role, + enable_drop_protection=enable_drop_protection, + ) + + def list_databases(self, page_size=None): """List databases for the instance. See @@ -379,42 +525,26 @@ def list_databases(self, page_size=None, page_token=None): from this request. Non-positive values are ignored. Defaults to a sensible value set by the API. - :type page_token: str - :param page_token: - Optional. If present, return the next batch of databases, using - the value, which must correspond to the ``nextPageToken`` value - returned in the previous response. Deprecated: use the ``pages`` - property of the returned iterator instead of manually passing - the token. - :rtype: :class:`~google.api._ore.page_iterator.Iterator` :returns: - Iterator of :class:`~google.cloud.spanner_v1.database.Database` + Iterator of :class:`~google.cloud.spanner_admin_database_v1.types.Database` resources within the current instance. 
""" metadata = _metadata_with_prefix(self.name) + request = ListDatabasesRequest(parent=self.name, page_size=page_size) page_iter = self._client.database_admin_api.list_databases( - self.name, page_size=page_size, metadata=metadata + request=request, metadata=metadata ) - page_iter.next_page_token = page_token - page_iter.item_to_value = self._item_to_database return page_iter - def _item_to_database(self, iterator, database_pb): - """Convert a database protobuf to the native object. - - :type iterator: :class:`~google.api_core.page_iterator.Iterator` - :param iterator: The iterator that is currently in use. - - :type database_pb: :class:`~google.spanner.admin.database.v1.Database` - :param database_pb: A database returned from the API. - - :rtype: :class:`~google.cloud.spanner_v1.database.Database` - :returns: The next database in the page. - """ - return Database.from_pb(database_pb, self, pool=BurstyPool()) - - def backup(self, backup_id, database="", expire_time=None): + def backup( + self, + backup_id, + database="", + expire_time=None, + version_time=None, + encryption_config=None, + ): """Factory to create a backup within this instance. :type backup_id: str @@ -429,13 +559,77 @@ def backup(self, backup_id, database="", expire_time=None): :param expire_time: Optional. The expire time that will be used when creating the backup. Required if the create method needs to be called. + + :type version_time: :class:`datetime.datetime` + :param version_time: + Optional. The version time that will be used to create the externally + consistent copy of the database. If not present, it is the same as + the `create_time` of the backup. + + :type encryption_config: + :class:`~google.cloud.spanner_admin_database_v1.types.CreateBackupEncryptionConfig` + or :class:`dict` + :param encryption_config: + (Optional) Encryption configuration for the backup. + If a dict is provided, it must be of the same form as the protobuf + message :class:`~google.cloud.spanner_admin_database_v1.types.CreateBackupEncryptionConfig` + + :rtype: :class:`~google.cloud.spanner_v1.backup.Backup` + :returns: a backup owned by this instance. """ try: return Backup( - backup_id, self, database=database.name, expire_time=expire_time + backup_id, + self, + database=database.name, + expire_time=expire_time, + version_time=version_time, + encryption_config=encryption_config, ) except AttributeError: - return Backup(backup_id, self, database=database, expire_time=expire_time) + return Backup( + backup_id, + self, + database=database, + expire_time=expire_time, + version_time=version_time, + encryption_config=encryption_config, + ) + + def copy_backup( + self, + backup_id, + source_backup, + expire_time=None, + encryption_config=None, + ): + """Factory to create a copy backup within this instance. + + :type backup_id: str + :param backup_id: The ID of the backup copy. + :type source_backup: str + :param source_backup_id: The full path of the source backup to be copied. + :type expire_time: :class:`datetime.datetime` + :param expire_time: + Optional. The expire time that will be used when creating the copy backup. + Required if the create method needs to be called. + :type encryption_config: + :class:`~google.cloud.spanner_admin_database_v1.types.CopyBackupEncryptionConfig` + or :class:`dict` + :param encryption_config: + (Optional) Encryption configuration for the backup. 
+ If a dict is provided, it must be of the same form as the protobuf + message :class:`~google.cloud.spanner_admin_database_v1.types.CopyBackupEncryptionConfig` + :rtype: :class:`~google.cloud.spanner_v1.backup.Backup` + :returns: a copy backup owned by this instance. + """ + return Backup( + backup_id, + self, + source_backup=source_backup, + expire_time=expire_time, + encryption_config=encryption_config, + ) def list_backups(self, filter_="", page_size=None): """List backups for the instance. @@ -452,30 +646,20 @@ def list_backups(self, filter_="", page_size=None): :rtype: :class:`~google.api_core.page_iterator.Iterator` :returns: - Iterator of :class:`~google.cloud.spanner_v1.backup.Backup` + Iterator of :class:`~google.cloud.spanner_admin_database_v1.types.Backup` resources within the current instance. """ metadata = _metadata_with_prefix(self.name) + request = ListBackupsRequest( + parent=self.name, + filter=filter_, + page_size=page_size, + ) page_iter = self._client.database_admin_api.list_backups( - self.name, filter_, page_size=page_size, metadata=metadata + request=request, metadata=metadata ) - page_iter.item_to_value = self._item_to_backup return page_iter - def _item_to_backup(self, iterator, backup_pb): - """Convert a backup protobuf to the native object. - - :type iterator: :class:`~google.api_core.page_iterator.Iterator` - :param iterator: The iterator that is currently in use. - - :type backup_pb: :class:`~google.spanner.admin.database.v1.Backup` - :param backup_pb: A backup returned from the API. - - :rtype: :class:`~google.cloud.spanner_v1.backup.Backup` - :returns: The next backup in the page. - """ - return Backup.from_pb(backup_pb, self) - def list_backup_operations(self, filter_="", page_size=None): """List backup operations for the instance. @@ -496,11 +680,15 @@ def list_backup_operations(self, filter_="", page_size=None): resources within the current instance. """ metadata = _metadata_with_prefix(self.name) + request = ListBackupOperationsRequest( + parent=self.name, + filter=filter_, + page_size=page_size, + ) page_iter = self._client.database_admin_api.list_backup_operations( - self.name, filter_, page_size=page_size, metadata=metadata + request=request, metadata=metadata ) - page_iter.item_to_value = self._item_to_operation - return page_iter + return map(self._item_to_operation, page_iter) def list_database_operations(self, filter_="", page_size=None): """List database operations for the instance. @@ -522,25 +710,24 @@ def list_database_operations(self, filter_="", page_size=None): resources within the current instance. """ metadata = _metadata_with_prefix(self.name) + request = ListDatabaseOperationsRequest( + parent=self.name, + filter=filter_, + page_size=page_size, + ) page_iter = self._client.database_admin_api.list_database_operations( - self.name, filter_, page_size=page_size, metadata=metadata + request=request, metadata=metadata ) - page_iter.item_to_value = self._item_to_operation - return page_iter + return map(self._item_to_operation, page_iter) - def _item_to_operation(self, iterator, operation_pb): + def _item_to_operation(self, operation_pb): """Convert an operation protobuf to the native object. - - :type iterator: :class:`~google.api_core.page_iterator.Iterator` - :param iterator: The iterator that is currently in use. - :type operation_pb: :class:`~google.longrunning.operations.Operation` :param operation_pb: An operation returned from the API. - :rtype: :class:`~google.api_core.operation.Operation` :returns: The next operation in the page. 
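Putting the two factories together: create a backup, copy it, then page through existing backups with a filter. The IDs, the filter expression, and the surrounding ``instance`` / ``database`` objects are placeholders for illustration:

```python
import datetime

expire = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=7)

backup = instance.backup("my-backup", database=database, expire_time=expire)
backup.create().result()  # long-running operation; block until done

copy = instance.copy_backup(
    "my-backup-copy", source_backup=backup.name, expire_time=expire
)
copy.create().result()

for backup_pb in instance.list_backups(filter_="state:READY"):
    print(backup_pb.name)
```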
""" - operations_client = self._client.database_admin_api.transport._operations_client + operations_client = self._client.database_admin_api.transport.operations_client metadata_type = _type_string_to_type_pb(operation_pb.metadata.type_url) response_type = _OPERATION_RESPONSE_TYPES[metadata_type] return google.api_core.operation.from_gapic( diff --git a/google/cloud/spanner_v1/keyset.py b/google/cloud/spanner_v1/keyset.py index fb45882bec..ab712219f0 100644 --- a/google/cloud/spanner_v1/keyset.py +++ b/google/cloud/spanner_v1/keyset.py @@ -14,8 +14,8 @@ """Wrap representation of Spanner keys / ranges.""" -from google.cloud.spanner_v1.proto.keys_pb2 import KeyRange as KeyRangePB -from google.cloud.spanner_v1.proto.keys_pb2 import KeySet as KeySetPB +from google.cloud.spanner_v1 import KeyRangePB +from google.cloud.spanner_v1 import KeySetPB from google.cloud.spanner_v1._helpers import _make_list_value_pb from google.cloud.spanner_v1._helpers import _make_list_value_pbs @@ -68,7 +68,7 @@ def __init__( def _to_pb(self): """Construct a KeyRange protobuf. - :rtype: :class:`~google.cloud.spanner_v1.proto.keys_pb2.KeyRange` + :rtype: :class:`~google.cloud.spanner_v1.types.KeyRange` :returns: protobuf corresponding to this instance. """ kwargs = {} @@ -88,7 +88,7 @@ def _to_pb(self): return KeyRangePB(**kwargs) def _to_dict(self): - """Return keyrange's state as a dict. + """Return the state of the keyrange as a dict. :rtype: dict :returns: state of this instance. @@ -139,11 +139,11 @@ def __init__(self, keys=(), ranges=(), all_=False): def _to_pb(self): """Construct a KeySet protobuf. - :rtype: :class:`~google.cloud.spanner_v1.proto.keys_pb2.KeySet` + :rtype: :class:`~google.cloud.spanner_v1.types.KeySet` :returns: protobuf corresponding to this instance. """ if self.all_: - return KeySetPB(all=True) + return KeySetPB(all_=True) kwargs = {} if self.keys: @@ -155,7 +155,7 @@ def _to_pb(self): return KeySetPB(**kwargs) def _to_dict(self): - """Return keyset's state as a dict. + """Return the state of the keyset as a dict. The result can be used to serialize the instance and reconstitute it later using :meth:`_from_dict`. diff --git a/google/cloud/spanner_v1/merged_result_set.py b/google/cloud/spanner_v1/merged_result_set.py new file mode 100644 index 0000000000..7af989d696 --- /dev/null +++ b/google/cloud/spanner_v1/merged_result_set.py @@ -0,0 +1,146 @@ +# Copyright 2024 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+from concurrent.futures import ThreadPoolExecutor +from dataclasses import dataclass +from queue import Queue +from typing import Any, TYPE_CHECKING +from threading import Lock, Event + +from google.cloud.spanner_v1._opentelemetry_tracing import trace_call +from google.cloud.spanner_v1.metrics.metrics_capture import MetricsCapture + +if TYPE_CHECKING: + from google.cloud.spanner_v1.database import BatchSnapshot + +QUEUE_SIZE_PER_WORKER = 32 +MAX_PARALLELISM = 16 + + +class PartitionExecutor: + """ + Executor that executes single partition on a separate thread and inserts + rows in the queue + """ + + def __init__(self, batch_snapshot, partition_id, merged_result_set): + self._batch_snapshot: BatchSnapshot = batch_snapshot + self._partition_id = partition_id + self._merged_result_set: MergedResultSet = merged_result_set + self._queue: Queue[PartitionExecutorResult] = merged_result_set._queue + + def run(self): + observability_options = getattr( + self._batch_snapshot, "observability_options", {} + ) + with trace_call( + "CloudSpanner.PartitionExecutor.run", + observability_options=observability_options, + ), MetricsCapture(): + self.__run() + + def __run(self): + results = None + try: + results = self._batch_snapshot.process_query_batch(self._partition_id) + for row in results: + if self._merged_result_set._metadata is None: + self._set_metadata(results) + self._queue.put(PartitionExecutorResult(data=row)) + # Special case: The result set did not return any rows. + # Push the metadata to the merged result set. + if self._merged_result_set._metadata is None: + self._set_metadata(results) + except Exception as ex: + if self._merged_result_set._metadata is None: + self._set_metadata(results, True) + self._queue.put(PartitionExecutorResult(exception=ex)) + finally: + # Emit a special 'is_last' result to ensure that the MergedResultSet + # is not blocked on a queue that never receives any more results. + self._queue.put(PartitionExecutorResult(is_last=True)) + + def _set_metadata(self, results, is_exception=False): + self._merged_result_set.metadata_lock.acquire() + try: + if not is_exception: + self._merged_result_set._metadata = results.metadata + finally: + self._merged_result_set.metadata_lock.release() + self._merged_result_set.metadata_event.set() + + +@dataclass +class PartitionExecutorResult: + data: Any = None + exception: Exception = None + is_last: bool = False + + +class MergedResultSet: + """ + Executes multiple partitions on different threads and then combines the + results from multiple queries using a synchronized queue. The order of the + records in the MergedResultSet is not guaranteed. 
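For contrast with the threaded merge implemented here, the single-threaded pattern it parallelizes looks roughly like the sketch below, assuming the documented batch-snapshot API and a ``database`` object:

```python
batch_snapshot = database.batch_snapshot()

# Each batch carries a partition token plus the original query;
# MergedResultSet fans these out to worker threads instead of this loop.
for batch in batch_snapshot.generate_query_batches("SELECT id, name FROM players"):
    for row in batch_snapshot.process_query_batch(batch):
        print(row)

batch_snapshot.close()
```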
+ """ + + def __init__(self, batch_snapshot, partition_ids, max_parallelism): + self._exception = None + self._metadata = None + self.metadata_event = Event() + self.metadata_lock = Lock() + + partition_ids_count = len(partition_ids) + self._finished_count_down_latch = partition_ids_count + parallelism = min(MAX_PARALLELISM, partition_ids_count) + if max_parallelism != 0: + parallelism = min(partition_ids_count, max_parallelism) + self._queue = Queue(maxsize=QUEUE_SIZE_PER_WORKER * parallelism) + + partition_executors = [] + for partition_id in partition_ids: + partition_executors.append( + PartitionExecutor(batch_snapshot, partition_id, self) + ) + executor = ThreadPoolExecutor(max_workers=parallelism) + for partition_executor in partition_executors: + executor.submit(partition_executor.run) + executor.shutdown(False) + + def __iter__(self): + return self + + def __next__(self): + if self._exception is not None: + raise self._exception + while True: + partition_result = self._queue.get() + if partition_result.is_last: + self._finished_count_down_latch -= 1 + if self._finished_count_down_latch == 0: + raise StopIteration + elif partition_result.exception is not None: + self._exception = partition_result.exception + raise self._exception + else: + return partition_result.data + + @property + def metadata(self): + self.metadata_event.wait() + return self._metadata + + @property + def stats(self): + # TODO: Implement + return None diff --git a/google/cloud/spanner_v1/metrics/README.md b/google/cloud/spanner_v1/metrics/README.md new file mode 100644 index 0000000000..9619715c85 --- /dev/null +++ b/google/cloud/spanner_v1/metrics/README.md @@ -0,0 +1,19 @@ +# Custom Metric Exporter +The custom metric exporter, as defined in [metrics_exporter.py](./metrics_exporter.py), is designed to work in conjunction with OpenTelemetry and the Spanner client. It converts data into its protobuf equivalent and sends it to Google Cloud Monitoring. + +## Filtering Criteria +The exporter filters metrics based on the following conditions, utilizing values defined in [constants.py](./constants.py): + +* Metrics with a scope set to `gax-python`. +* Metrics with one of the following predefined names: + * `attempt_latencies` + * `attempt_count` + * `operation_latencies` + * `operation_count` + * `gfe_latency` + * `gfe_missing_header_count` + +## Service Endpoint +The exporter sends metrics to the Google Cloud Monitoring [service endpoint](https://cloud.google.com/python/docs/reference/monitoring/latest/google.cloud.monitoring_v3.services.metric_service.MetricServiceClient#google_cloud_monitoring_v3_services_metric_service_MetricServiceClient_create_service_time_series), distinct from the regular client endpoint. This service endpoint operates under a different quota limit than the user endpoint and features an additional server-side filter that only permits a predefined set of metrics to pass through. + +When introducing new service metrics, it is essential to ensure they are allowed through by the server-side filter as well. diff --git a/google/cloud/spanner_v1/metrics/constants.py b/google/cloud/spanner_v1/metrics/constants.py new file mode 100644 index 0000000000..a47aecc9ed --- /dev/null +++ b/google/cloud/spanner_v1/metrics/constants.py @@ -0,0 +1,71 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +BUILT_IN_METRICS_METER_NAME = "gax-python" +NATIVE_METRICS_PREFIX = "spanner.googleapis.com/internal/client" +SPANNER_RESOURCE_TYPE = "spanner_instance_client" +SPANNER_SERVICE_NAME = "spanner-python" +GOOGLE_CLOUD_RESOURCE_KEY = "google-cloud-resource-prefix" +GOOGLE_CLOUD_REGION_KEY = "cloud.region" +GOOGLE_CLOUD_REGION_GLOBAL = "global" +SPANNER_METHOD_PREFIX = "/google.spanner.v1." +ENABLE_SPANNER_METRICS_ENV_VAR = "SPANNER_ENABLE_BUILTIN_METRICS" + +# Monitored resource labels +MONITORED_RES_LABEL_KEY_PROJECT = "project_id" +MONITORED_RES_LABEL_KEY_INSTANCE = "instance_id" +MONITORED_RES_LABEL_KEY_INSTANCE_CONFIG = "instance_config" +MONITORED_RES_LABEL_KEY_LOCATION = "location" +MONITORED_RES_LABEL_KEY_CLIENT_HASH = "client_hash" +MONITORED_RESOURCE_LABELS = [ + MONITORED_RES_LABEL_KEY_PROJECT, + MONITORED_RES_LABEL_KEY_INSTANCE, + MONITORED_RES_LABEL_KEY_INSTANCE_CONFIG, + MONITORED_RES_LABEL_KEY_LOCATION, + MONITORED_RES_LABEL_KEY_CLIENT_HASH, +] + +# Metric labels +METRIC_LABEL_KEY_CLIENT_UID = "client_uid" +METRIC_LABEL_KEY_CLIENT_NAME = "client_name" +METRIC_LABEL_KEY_DATABASE = "database" +METRIC_LABEL_KEY_METHOD = "method" +METRIC_LABEL_KEY_STATUS = "status" +METRIC_LABEL_KEY_DIRECT_PATH_ENABLED = "directpath_enabled" +METRIC_LABEL_KEY_DIRECT_PATH_USED = "directpath_used" +METRIC_LABELS = [ + METRIC_LABEL_KEY_CLIENT_UID, + METRIC_LABEL_KEY_CLIENT_NAME, + METRIC_LABEL_KEY_DATABASE, + METRIC_LABEL_KEY_METHOD, + METRIC_LABEL_KEY_STATUS, + METRIC_LABEL_KEY_DIRECT_PATH_ENABLED, + METRIC_LABEL_KEY_DIRECT_PATH_USED, +] + +# Metric names +METRIC_NAME_OPERATION_LATENCIES = "operation_latencies" +METRIC_NAME_ATTEMPT_LATENCIES = "attempt_latencies" +METRIC_NAME_OPERATION_COUNT = "operation_count" +METRIC_NAME_ATTEMPT_COUNT = "attempt_count" +METRIC_NAME_GFE_LATENCY = "gfe_latency" +METRIC_NAME_GFE_MISSING_HEADER_COUNT = "gfe_missing_header_count" +METRIC_NAMES = [ + METRIC_NAME_OPERATION_LATENCIES, + METRIC_NAME_ATTEMPT_LATENCIES, + METRIC_NAME_OPERATION_COUNT, + METRIC_NAME_ATTEMPT_COUNT, +] + +METRIC_EXPORT_INTERVAL_MS = 60000 # 1 Minute diff --git a/google/cloud/spanner_v1/metrics/metrics_capture.py b/google/cloud/spanner_v1/metrics/metrics_capture.py new file mode 100644 index 0000000000..6197ae5257 --- /dev/null +++ b/google/cloud/spanner_v1/metrics/metrics_capture.py @@ -0,0 +1,75 @@ +# Copyright 2025 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +""" +This module provides functionality for capturing metrics in Cloud Spanner operations. 
+
+It includes a context manager class, MetricsCapture, which automatically handles the
+start and completion of metrics tracing for a given operation. This ensures that metrics
+are consistently recorded for Cloud Spanner operations, facilitating observability and
+performance monitoring.
+"""
+
+from .spanner_metrics_tracer_factory import SpannerMetricsTracerFactory
+
+
+class MetricsCapture:
+    """Context manager for capturing metrics in Cloud Spanner operations.
+
+    This class provides a context manager interface to automatically handle
+    the start and completion of metrics tracing for a given operation.
+    """
+
+    def __enter__(self):
+        """Enter the runtime context related to this object.
+
+        This method initializes a new metrics tracer for the operation and
+        records the start of the operation.
+
+        Returns:
+            MetricsCapture: The instance of the context manager.
+        """
+        # Short circuit out if metrics are disabled
+        factory = SpannerMetricsTracerFactory()
+        if not factory.enabled:
+            return self
+
+        # Define a new metrics tracer for the new operation
+        SpannerMetricsTracerFactory.current_metrics_tracer = (
+            factory.create_metrics_tracer()
+        )
+        if SpannerMetricsTracerFactory.current_metrics_tracer:
+            SpannerMetricsTracerFactory.current_metrics_tracer.record_operation_start()
+        return self
+
+    def __exit__(self, exc_type, exc_value, traceback):
+        """Exit the runtime context related to this object.
+
+        This method records the completion of the operation. If an exception
+        occurred, it will be propagated after the metrics are recorded.
+
+        Args:
+            exc_type (Type[BaseException]): The exception type.
+            exc_value (BaseException): The exception value.
+            traceback (TracebackType): The traceback object.
+
+        Returns:
+            bool: False to propagate the exception if any occurred.
+        """
+        # Short circuit out if metrics are disabled
+        if not SpannerMetricsTracerFactory().enabled:
+            return False
+
+        if SpannerMetricsTracerFactory.current_metrics_tracer:
+            SpannerMetricsTracerFactory.current_metrics_tracer.record_operation_completion()
+        return False  # Propagate the exception if any
diff --git a/google/cloud/spanner_v1/metrics/metrics_exporter.py b/google/cloud/spanner_v1/metrics/metrics_exporter.py
new file mode 100644
index 0000000000..68da08b400
--- /dev/null
+++ b/google/cloud/spanner_v1/metrics/metrics_exporter.py
@@ -0,0 +1,384 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
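For orientation, a usage sketch for the `MetricsCapture` context manager defined above; `do_spanner_call` is a hypothetical stand-in for any Spanner operation, and running this requires the metrics module and its dependencies:

```python
from google.cloud.spanner_v1.metrics.metrics_capture import MetricsCapture


def do_spanner_call():
    return "rows"  # hypothetical placeholder for a real Spanner RPC


with MetricsCapture():
    # record_operation_start() fires in __enter__; record_operation_completion()
    # fires in __exit__, and exceptions still propagate (__exit__ returns False).
    result = do_spanner_call()
```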
+ + +from .constants import ( + BUILT_IN_METRICS_METER_NAME, + NATIVE_METRICS_PREFIX, + SPANNER_RESOURCE_TYPE, + MONITORED_RESOURCE_LABELS, + METRIC_LABELS, + METRIC_NAMES, +) + +import logging +from typing import Optional, List, Union, NoReturn, Tuple, Dict + +import google.auth +from google.auth import credentials as ga_credentials +from google.api.distribution_pb2 import ( # pylint: disable=no-name-in-module + Distribution, +) + +# pylint: disable=no-name-in-module +from google.api.metric_pb2 import ( # pylint: disable=no-name-in-module + Metric as GMetric, + MetricDescriptor, +) +from google.api.monitored_resource_pb2 import ( # pylint: disable=no-name-in-module + MonitoredResource, +) + +# pylint: disable=no-name-in-module +from google.protobuf.timestamp_pb2 import Timestamp +from google.cloud.spanner_v1.gapic_version import __version__ + +try: + from opentelemetry.sdk.metrics.export import ( + Gauge, + Histogram, + HistogramDataPoint, + Metric, + MetricExporter, + MetricExportResult, + MetricsData, + NumberDataPoint, + Sum, + ) + from opentelemetry.sdk.resources import Resource + from google.cloud.monitoring_v3.services.metric_service.transports.grpc import ( + MetricServiceGrpcTransport, + ) + from google.cloud.monitoring_v3 import ( + CreateTimeSeriesRequest, + MetricServiceClient, + Point, + TimeInterval, + TimeSeries, + TypedValue, + ) + + HAS_OPENTELEMETRY_INSTALLED = True +except ImportError: # pragma: NO COVER + HAS_OPENTELEMETRY_INSTALLED = False + MetricExporter = object + +logger = logging.getLogger(__name__) +MAX_BATCH_WRITE = 200 +MILLIS_PER_SECOND = 1000 + +_USER_AGENT = f"python-spanner; google-cloud-service-metric-exporter {__version__}" + +# Set user-agent metadata, see https://github.com/grpc/grpc/issues/23644 and default options +# from +# https://github.com/googleapis/python-monitoring/blob/v2.11.3/google/cloud/monitoring_v3/services/metric_service/transports/grpc.py#L175-L178 +_OPTIONS = [ + ("grpc.max_send_message_length", -1), + ("grpc.max_receive_message_length", -1), + ("grpc.primary_user_agent", _USER_AGENT), +] + + +# pylint is unable to resolve members of protobuf objects +# pylint: disable=no-member +# pylint: disable=too-many-branches +# pylint: disable=too-many-locals +class CloudMonitoringMetricsExporter(MetricExporter): + """Implementation of Metrics Exporter to Google Cloud Monitoring. + + You can manually pass in project_id and client, or else the + Exporter will take that information from Application Default + Credentials. + + Args: + project_id: project id of your Google Cloud project. + client: Client to upload metrics to Google Cloud Monitoring. 
+ """ + + # Based on the cloud_monitoring exporter found here: https://github.com/GoogleCloudPlatform/opentelemetry-operations-python/blob/main/opentelemetry-exporter-gcp-monitoring/src/opentelemetry/exporter/cloud_monitoring/__init__.py + + def __init__( + self, + project_id: Optional[str] = None, + client: Optional["MetricServiceClient"] = None, + credentials: Optional[ga_credentials.Credentials] = None, + ): + """Initialize a custom exporter to send metrics for the Spanner Service Metrics.""" + # Default preferred_temporality is all CUMULATIVE so need to customize + super().__init__() + + # Create a new GRPC Client for Google Cloud Monitoring if not provided + self.client = client or MetricServiceClient( + transport=MetricServiceGrpcTransport( + channel=MetricServiceGrpcTransport.create_channel( + options=_OPTIONS, + credentials=credentials, + ) + ) + ) + + # Set project information + self.project_id: str + if not project_id: + _, default_project_id = google.auth.default() + self.project_id = str(default_project_id) + else: + self.project_id = project_id + self.project_name = self.client.common_project_path(self.project_id) + + def _batch_write(self, series: List["TimeSeries"], timeout_millis: float) -> None: + """Cloud Monitoring allows writing up to 200 time series at once. + + :param series: ProtoBuf TimeSeries + :return: + """ + write_ind = 0 + timeout = timeout_millis / MILLIS_PER_SECOND + while write_ind < len(series): + request = CreateTimeSeriesRequest( + name=self.project_name, + time_series=series[write_ind : write_ind + MAX_BATCH_WRITE], + ) + + self.client.create_service_time_series( + request=request, + timeout=timeout, + ) + write_ind += MAX_BATCH_WRITE + + @staticmethod + def _resource_to_monitored_resource_pb( + resource: "Resource", labels: Dict[str, str] + ) -> "MonitoredResource": + """ + Convert the resource to a Google Cloud Monitoring monitored resource. + + :param resource: OpenTelemetry resource + :param labels: labels to add to the monitored resource + :return: Google Cloud Monitoring monitored resource + """ + monitored_resource = MonitoredResource( + type=SPANNER_RESOURCE_TYPE, + labels=labels, + ) + return monitored_resource + + @staticmethod + def _to_metric_kind(metric: "Metric") -> MetricDescriptor.MetricKind: + """ + Convert the metric to a Google Cloud Monitoring metric kind. + + :param metric: OpenTelemetry metric + :return: Google Cloud Monitoring metric kind + """ + data = metric.data + if isinstance(data, Sum): + if data.is_monotonic: + return MetricDescriptor.MetricKind.CUMULATIVE + else: + return MetricDescriptor.MetricKind.GAUGE + elif isinstance(data, Gauge): + return MetricDescriptor.MetricKind.GAUGE + elif isinstance(data, Histogram): + return MetricDescriptor.MetricKind.CUMULATIVE + else: + # Exhaustive check + _: NoReturn = data + logger.warning( + "Unsupported metric data type %s, ignoring it", + type(data).__name__, + ) + return None + + @staticmethod + def _extract_metric_labels( + data_point: Union["NumberDataPoint", "HistogramDataPoint"] + ) -> Tuple[dict, dict]: + """ + Extract the metric labels from the data point. 
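The `_batch_write` loop above slices the series list into fixed chunks because Cloud Monitoring accepts at most `MAX_BATCH_WRITE` (200) time series per request. A standalone sketch of the same chunking loop, with `send` as a hypothetical callback:

```python
MAX_BATCH_WRITE = 200  # Cloud Monitoring limit per CreateTimeSeriesRequest


def batch_write(series, send):
    write_ind = 0
    while write_ind < len(series):
        send(series[write_ind : write_ind + MAX_BATCH_WRITE])
        write_ind += MAX_BATCH_WRITE


batch_write(list(range(450)), lambda chunk: print(len(chunk)))  # 200, 200, 50
```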
+ + :param data_point: OpenTelemetry data point + :return: tuple of metric labels and monitored resource labels + """ + metric_labels = {} + monitored_resource_labels = {} + for key, value in (data_point.attributes or {}).items(): + normalized_key = _normalize_label_key(key) + val = str(value) + if key in METRIC_LABELS: + metric_labels[normalized_key] = val + if key in MONITORED_RESOURCE_LABELS: + monitored_resource_labels[normalized_key] = val + return metric_labels, monitored_resource_labels + + # Unchanged from https://github.com/GoogleCloudPlatform/opentelemetry-operations-python/blob/main/opentelemetry-exporter-gcp-monitoring/src/opentelemetry/exporter/cloud_monitoring/__init__.py + @staticmethod + def _to_point( + kind: "MetricDescriptor.MetricKind.V", + data_point: Union["NumberDataPoint", "HistogramDataPoint"], + ) -> "Point": + # Create a Google Cloud Monitoring data point value based on the OpenTelemetry metric data point type + ## For histograms, we need to calculate the mean and bucket counts + if isinstance(data_point, HistogramDataPoint): + mean = data_point.sum / data_point.count if data_point.count else 0.0 + point_value = TypedValue( + distribution_value=Distribution( + count=data_point.count, + mean=mean, + bucket_counts=data_point.bucket_counts, + bucket_options=Distribution.BucketOptions( + explicit_buckets=Distribution.BucketOptions.Explicit( + bounds=data_point.explicit_bounds, + ) + ), + ) + ) + else: + # For other metric types, we can use the data point value directly + if isinstance(data_point.value, int): + point_value = TypedValue(int64_value=data_point.value) + else: + point_value = TypedValue(double_value=data_point.value) + + # DELTA case should never happen but adding it to be future proof + if ( + kind is MetricDescriptor.MetricKind.CUMULATIVE + or kind is MetricDescriptor.MetricKind.DELTA + ): + # Create a Google Cloud Monitoring time interval from the OpenTelemetry data point timestamps + interval = TimeInterval( + start_time=_timestamp_from_nanos(data_point.start_time_unix_nano), + end_time=_timestamp_from_nanos(data_point.time_unix_nano), + ) + else: + # For non time ranged metrics, we only need the end time + interval = TimeInterval( + end_time=_timestamp_from_nanos(data_point.time_unix_nano), + ) + return Point(interval=interval, value=point_value) + + @staticmethod + def _data_point_to_timeseries_pb( + data_point, + metric, + monitored_resource, + labels, + ) -> "TimeSeries": + """ + Convert the data point to a Google Cloud Monitoring time series. + + :param data_point: OpenTelemetry data point + :param metric: OpenTelemetry metric + :param monitored_resource: Google Cloud Monitoring monitored resource + :param labels: metric labels + :return: Google Cloud Monitoring time series + """ + if metric.name not in METRIC_NAMES: + return None + + kind = CloudMonitoringMetricsExporter._to_metric_kind(metric) + point = CloudMonitoringMetricsExporter._to_point(kind, data_point) + type = f"{NATIVE_METRICS_PREFIX}/{metric.name}" + series = TimeSeries( + resource=monitored_resource, + metric_kind=kind, + points=[point], + metric=GMetric(type=type, labels=labels), + unit=metric.unit or "", + ) + return series + + @staticmethod + def _resource_metrics_to_timeseries_pb( + metrics_data: "MetricsData", + ) -> List["TimeSeries"]: + """ + Convert the metrics data to a list of Google Cloud Monitoring time series. 
+ + :param metrics_data: OpenTelemetry metrics data + :return: list of Google Cloud Monitoring time series + """ + timeseries_list = [] + for resource_metric in metrics_data.resource_metrics: + for scope_metric in resource_metric.scope_metrics: + # Filter for spanner builtin metrics + if scope_metric.scope.name != BUILT_IN_METRICS_METER_NAME: + continue + + for metric in scope_metric.metrics: + for data_point in metric.data.data_points: + ( + metric_labels, + monitored_resource_labels, + ) = CloudMonitoringMetricsExporter._extract_metric_labels( + data_point + ) + monitored_resource = CloudMonitoringMetricsExporter._resource_to_monitored_resource_pb( + resource_metric.resource, monitored_resource_labels + ) + timeseries = ( + CloudMonitoringMetricsExporter._data_point_to_timeseries_pb( + data_point, metric, monitored_resource, metric_labels + ) + ) + if timeseries is not None: + timeseries_list.append(timeseries) + + return timeseries_list + + def export( + self, + metrics_data: "MetricsData", + timeout_millis: float = 10_000, + **kwargs, + ) -> "MetricExportResult": + """ + Export the metrics data to Google Cloud Monitoring. + + :param metrics_data: OpenTelemetry metrics data + :param timeout_millis: timeout in milliseconds + :return: MetricExportResult + """ + if not HAS_OPENTELEMETRY_INSTALLED: + logger.warning("Metric exporter called without dependencies installed.") + return False + time_series_list = self._resource_metrics_to_timeseries_pb(metrics_data) + self._batch_write(time_series_list, timeout_millis) + return True + + def force_flush(self, timeout_millis: float = 10_000) -> bool: + """Not implemented.""" + return True + + def shutdown(self, timeout_millis: float = 30_000, **kwargs) -> None: + """Safely shuts down the exporter and closes all opened GRPC channels.""" + self.client.transport.close() + + +def _timestamp_from_nanos(nanos: int) -> Timestamp: + ts = Timestamp() + ts.FromNanoseconds(nanos) + return ts + + +def _normalize_label_key(key: str) -> str: + """Make the key into a valid Google Cloud Monitoring label key. + + See reference impl + https://github.com/GoogleCloudPlatform/opentelemetry-operations-go/blob/e955c204f4f2bfdc92ff0ad52786232b975efcc2/exporter/metric/metric.go#L595-L604 + """ + sanitized = "".join(c if c.isalpha() or c.isnumeric() else "_" for c in key) + if sanitized[0].isdigit(): + sanitized = "key_" + sanitized + return sanitized diff --git a/google/cloud/spanner_v1/metrics/metrics_interceptor.py b/google/cloud/spanner_v1/metrics/metrics_interceptor.py new file mode 100644 index 0000000000..4b55056dab --- /dev/null +++ b/google/cloud/spanner_v1/metrics/metrics_interceptor.py @@ -0,0 +1,156 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
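A hedged wiring sketch for the exporter defined above, assuming `opentelemetry-sdk` and `google-cloud-monitoring` are installed and Application Default Credentials are available; the project id below is illustrative:

```python
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader

from google.cloud.spanner_v1.metrics.constants import METRIC_EXPORT_INTERVAL_MS
from google.cloud.spanner_v1.metrics.metrics_exporter import (
    CloudMonitoringMetricsExporter,
)

# "my-project" is a placeholder; omit project_id to fall back to ADC.
exporter = CloudMonitoringMetricsExporter(project_id="my-project")
reader = PeriodicExportingMetricReader(
    exporter, export_interval_millis=METRIC_EXPORT_INTERVAL_MS
)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))
```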
+
+"""Interceptor for collecting Cloud Spanner metrics."""
+
+from grpc_interceptor import ClientInterceptor
+from .constants import (
+    GOOGLE_CLOUD_RESOURCE_KEY,
+    SPANNER_METHOD_PREFIX,
+)
+
+from typing import Dict
+from .spanner_metrics_tracer_factory import SpannerMetricsTracerFactory
+import re
+
+
+class MetricsInterceptor(ClientInterceptor):
+    """Interceptor that collects metrics for Cloud Spanner operations."""
+
+    @staticmethod
+    def _parse_resource_path(path: str) -> dict:
+        """Parse the resource path to extract project, instance and database.
+
+        Args:
+            path (str): The resource path from the request
+
+        Returns:
+            dict: Extracted resource components
+        """
+        # Match paths like:
+        # projects/{project}/instances/{instance}/databases/{database}/sessions/{session}
+        # projects/{project}/instances/{instance}/databases/{database}
+        # projects/{project}/instances/{instance}
+        pattern = r"^projects/(?P<project>[^/]+)(/instances/(?P<instance>[^/]+))?(/databases/(?P<database>[^/]+))?(/sessions/(?P<session>[^/]+))?.*$"
+        match = re.match(pattern, path)
+        if match:
+            return {k: v for k, v in match.groupdict().items() if v is not None}
+        return {}
+
+    @staticmethod
+    def _extract_resource_from_path(metadata: Dict[str, str]) -> Dict[str, str]:
+        """
+        Extracts resource information from the metadata based on the path.
+
+        This method iterates through the metadata dictionary to find the first tuple containing the key 'google-cloud-resource-prefix'. It then extracts the path from this tuple and parses it to extract project, instance, and database information using the _parse_resource_path method.
+
+        Args:
+            metadata (Dict[str, str]): A dictionary containing metadata information.
+
+        Returns:
+            Dict[str, str]: A dictionary containing extracted project, instance, and database information.
+        """
+        # Extract resource info from the first metadata tuple containing :path
+        path = next(
+            (value for key, value in metadata if key == GOOGLE_CLOUD_RESOURCE_KEY), ""
+        )
+
+        resources = MetricsInterceptor._parse_resource_path(path)
+        return resources
+
+    @staticmethod
+    def _remove_prefix(s: str, prefix: str) -> str:
+        """
+        This function removes the prefix from the given string.
+
+        Args:
+            s (str): The string from which the prefix is to be removed.
+            prefix (str): The prefix to be removed from the string.
+
+        Returns:
+            str: The string with the prefix removed.
+
+        Note:
+            This function is used because the `removeprefix` method does not exist in Python 3.8.
+        """
+        if s.startswith(prefix):
+            return s[len(prefix) :]
+        return s
+
+    def _set_metrics_tracer_attributes(self, resources: Dict[str, str]) -> None:
+        """
+        Sets the metric tracer attributes based on the provided resources.
+
+        This method updates the current metric tracer's attributes with the project, instance, and database information extracted from the resources dictionary. If the current metric tracer is not set, the method does nothing.
+
+        Args:
+            resources (Dict[str, str]): A dictionary containing project, instance, and database information.
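A small illustration of what `_parse_resource_path` (with the named groups restored above) extracts from a fully qualified session path; the values are made up:

```python
from google.cloud.spanner_v1.metrics.metrics_interceptor import MetricsInterceptor

path = "projects/p1/instances/i1/databases/d1/sessions/s1"
print(MetricsInterceptor._parse_resource_path(path))
# -> {'project': 'p1', 'instance': 'i1', 'database': 'd1', 'session': 's1'}
```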
+        """
+        if SpannerMetricsTracerFactory.current_metrics_tracer is None:
+            return
+
+        if resources:
+            if "project" in resources:
+                SpannerMetricsTracerFactory.current_metrics_tracer.set_project(
+                    resources["project"]
+                )
+            if "instance" in resources:
+                SpannerMetricsTracerFactory.current_metrics_tracer.set_instance(
+                    resources["instance"]
+                )
+            if "database" in resources:
+                SpannerMetricsTracerFactory.current_metrics_tracer.set_database(
+                    resources["database"]
+                )
+
+    def intercept(self, invoked_method, request_or_iterator, call_details):
+        """Intercept gRPC calls to collect metrics.
+
+        Args:
+            invoked_method: The RPC method
+            request_or_iterator: The RPC request
+            call_details: Details about the RPC call
+
+        Returns:
+            The RPC response
+        """
+        factory = SpannerMetricsTracerFactory()
+        if (
+            SpannerMetricsTracerFactory.current_metrics_tracer is None
+            or not factory.enabled
+        ):
+            return invoked_method(request_or_iterator, call_details)
+
+        # Setup Metric Tracer attributes from call details
+        ## Extract Project / Instance / Database from header information
+        resources = self._extract_resource_from_path(call_details.metadata)
+        self._set_metrics_tracer_attributes(resources)
+
+        ## Format the method as Service.Method (e.g. Spanner.ExecuteSql)
+        method_name = self._remove_prefix(
+            call_details.method, SPANNER_METHOD_PREFIX
+        ).replace("/", ".")
+
+        SpannerMetricsTracerFactory.current_metrics_tracer.set_method(method_name)
+        SpannerMetricsTracerFactory.current_metrics_tracer.record_attempt_start()
+        response = invoked_method(request_or_iterator, call_details)
+        SpannerMetricsTracerFactory.current_metrics_tracer.record_attempt_completion()
+
+        # Process and send GFE metrics if enabled
+        if SpannerMetricsTracerFactory.current_metrics_tracer.gfe_enabled:
+            metadata = response.initial_metadata()
+            SpannerMetricsTracerFactory.current_metrics_tracer.record_gfe_metrics(
+                metadata
+            )
+        return response
diff --git a/google/cloud/spanner_v1/metrics/metrics_tracer.py b/google/cloud/spanner_v1/metrics/metrics_tracer.py
new file mode 100644
index 0000000000..87035d9c22
--- /dev/null
+++ b/google/cloud/spanner_v1/metrics/metrics_tracer.py
@@ -0,0 +1,588 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+This module contains the MetricTracer class and its related helper classes.
+
+The MetricTracer class is responsible for collecting and tracing metrics,
+while the helper classes provide additional functionality and context for the metrics being traced.
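The `intercept` method above brackets each RPC with `record_attempt_start` / `record_attempt_completion`. A hedged sketch of attaching the interceptor to a raw gRPC channel with `grpc.intercept_channel`; the Spanner client normally wires this up internally, and the emulator endpoint is only illustrative:

```python
import grpc

from google.cloud.spanner_v1.metrics.metrics_interceptor import MetricsInterceptor

channel = grpc.insecure_channel("localhost:9010")  # e.g. the Spanner emulator
intercepted_channel = grpc.intercept_channel(channel, MetricsInterceptor())
# Stubs built on intercepted_channel now pass through MetricsInterceptor.intercept.
```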
+"""
+
+from datetime import datetime
+from typing import Dict
+from grpc import StatusCode
+from .constants import (
+    METRIC_LABEL_KEY_CLIENT_NAME,
+    METRIC_LABEL_KEY_CLIENT_UID,
+    METRIC_LABEL_KEY_DATABASE,
+    METRIC_LABEL_KEY_DIRECT_PATH_ENABLED,
+    METRIC_LABEL_KEY_METHOD,
+    METRIC_LABEL_KEY_STATUS,
+    MONITORED_RES_LABEL_KEY_CLIENT_HASH,
+    MONITORED_RES_LABEL_KEY_INSTANCE,
+    MONITORED_RES_LABEL_KEY_INSTANCE_CONFIG,
+    MONITORED_RES_LABEL_KEY_LOCATION,
+    MONITORED_RES_LABEL_KEY_PROJECT,
+)
+
+try:
+    from opentelemetry.metrics import Counter, Histogram
+
+    HAS_OPENTELEMETRY_INSTALLED = True
+except ImportError:  # pragma: NO COVER
+    HAS_OPENTELEMETRY_INSTALLED = False
+
+
+class MetricAttemptTracer:
+    """
+    This class is designed to hold information related to a metric attempt.
+
+    It captures the start time of the attempt, whether the direct path was used, and the status of the attempt.
+    """
+
+    _start_time: datetime
+    direct_path_used: bool
+    status: str
+
+    def __init__(self) -> None:
+        """
+        Initialize a MetricAttemptTracer instance with default values.
+
+        This constructor sets the start time of the metric attempt to the current datetime, initializes the status as an empty string, and sets the direct-path-used flag to False by default.
+        """
+        self._start_time = datetime.now()
+        self.status = ""
+        self.direct_path_used = False
+
+    @property
+    def start_time(self):
+        """Getter method for the start_time property.
+
+        This method returns the start time of the metric attempt.
+
+        Returns:
+            datetime: The start time of the metric attempt.
+        """
+        return self._start_time
+
+
+class MetricOpTracer:
+    """
+    This class is designed to store and manage information related to metric operations.
+    It captures the start time, attempt count, current attempt, and status of a metric operation.
+    """
+
+    _attempt_count: int
+    _start_time: datetime
+    _current_attempt: MetricAttemptTracer
+    status: str
+
+    def __init__(self, is_direct_path_enabled: bool = False):
+        """
+        Initialize a MetricOpTracer instance with default values.
+
+        This constructor resets the attempt count, records the operation start time, and clears the current attempt and status.
+
+        Args:
+            is_direct_path_enabled (bool): Whether DirectPath is enabled for this operation (currently unused). Defaults to False.
+        """
+        self._attempt_count = 0
+        self._start_time = datetime.now()
+        self._current_attempt = None
+        self.status = ""
+
+    @property
+    def attempt_count(self):
+        """
+        Getter method for the attempt_count property.
+
+        This method returns the current count of attempts made for the metric operation.
+
+        Returns:
+            int: The current count of attempts.
+        """
+        return self._attempt_count
+
+    @property
+    def current_attempt(self):
+        """
+        Getter method for the current_attempt property.
+
+        This method returns the current MetricAttemptTracer instance associated with the metric operation.
+
+        Returns:
+            MetricAttemptTracer: The current MetricAttemptTracer instance.
+        """
+        return self._current_attempt
+
+    @property
+    def start_time(self):
+        """
+        Getter method for the start_time property.
+
+        This method returns the start time of the metric operation.
+
+        Returns:
+            datetime: The start time of the metric operation.
+ """ + return self._start_time + + def increment_attempt_count(self): + """ + Increments the attempt count by 1. + + This method updates the attempt count by incrementing it by 1, indicating a new attempt has been made. + """ + self._attempt_count += 1 + + def start(self): + """ + Set the start time of the metric operation to the current time. + + This method updates the start time of the metric operation to the current time, indicating the operation has started. + """ + self._start_time = datetime.now() + + def new_attempt(self): + """ + Initialize a new MetricAttemptTracer instance for the current metric operation. + + This method sets up a new MetricAttemptTracer instance, indicating a new attempt is being made within the metric operation. + """ + self._current_attempt = MetricAttemptTracer() + + +class MetricsTracer: + """ + This class computes generic metrics that can be observed in the lifecycle of an RPC operation. + + The responsibility of recording metrics should delegate to MetricsRecorder, hence this + class should not have any knowledge about the observability framework used for metrics recording. + """ + + _client_attributes: Dict[str, str] + _instrument_attempt_counter: "Counter" + _instrument_attempt_latency: "Histogram" + _instrument_operation_counter: "Counter" + _instrument_operation_latency: "Histogram" + _instrument_gfe_latency: "Histogram" + _instrument_gfe_missing_header_count: "Counter" + current_op: MetricOpTracer + enabled: bool + gfe_enabled: bool + method: str + + def __init__( + self, + enabled: bool, + instrument_attempt_latency: "Histogram", + instrument_attempt_counter: "Counter", + instrument_operation_latency: "Histogram", + instrument_operation_counter: "Counter", + client_attributes: Dict[str, str], + gfe_enabled: bool = False, + ): + """ + Initialize a MetricsTracer instance with the given parameters. + + This constructor sets up a MetricsTracer instance with the specified parameters, including the enabled status, + instruments for measuring and counting attempt and operation metrics, and client attributes. It prepares the + infrastructure needed for recording metrics related to RPC operations. + + Args: + enabled (bool): Indicates if metrics tracing is enabled. + instrument_attempt_latency (Histogram): Instrument for measuring attempt latency. + instrument_attempt_counter (Counter): Instrument for counting attempts. + instrument_operation_latency (Histogram): Instrument for measuring operation latency. + instrument_operation_counter (Counter): Instrument for counting operations. + client_attributes (Dict[str, str]): Dictionary of client attributes used for metrics tracing. + gfe_enabled (bool, optional): Indicates if GFE metrics are enabled. Defaults to False. + """ + self.current_op = MetricOpTracer() + self._client_attributes = client_attributes + self._instrument_attempt_latency = instrument_attempt_latency + self._instrument_attempt_counter = instrument_attempt_counter + self._instrument_operation_latency = instrument_operation_latency + self._instrument_operation_counter = instrument_operation_counter + self.enabled = enabled + self.gfe_enabled = gfe_enabled + + @staticmethod + def _get_ms_time_diff(start: datetime, end: datetime) -> float: + """ + Calculate the time difference in milliseconds between two datetime objects. + + This method calculates the time difference between two datetime objects and returns the result in milliseconds. + This is useful for measuring the duration of operations or attempts for metrics tracing. 
+ Note: total_seconds() returns a float value of seconds. + + Args: + start (datetime): The start datetime. + end (datetime): The end datetime. + + Returns: + float: The time difference in milliseconds. + """ + time_delta = end - start + return time_delta.total_seconds() * 1000 + + @property + def client_attributes(self) -> Dict[str, str]: + """ + Return a dictionary of client attributes used for metrics tracing. + + This property returns a dictionary containing client attributes such as project, instance, + instance configuration, location, client hash, client UID, client name, and database. + These attributes are used to provide context to the metrics being traced. + + Returns: + dict[str, str]: A dictionary of client attributes. + """ + return self._client_attributes + + @property + def instrument_attempt_counter(self) -> "Counter": + """ + Return the instrument for counting attempts. + + This property returns the Counter instrument used to count the number of attempts made during RPC operations. + This metric is useful for tracking the frequency of attempts and can help identify patterns or issues in the operation flow. + + Returns: + Counter: The instrument for counting attempts. + """ + return self._instrument_attempt_counter + + @property + def instrument_attempt_latency(self) -> "Histogram": + """ + Return the instrument for measuring attempt latency. + + This property returns the Histogram instrument used to measure the latency of individual attempts. + This metric is useful for tracking the performance of attempts and can help identify bottlenecks or issues in the operation flow. + + Returns: + Histogram: The instrument for measuring attempt latency. + """ + return self._instrument_attempt_latency + + @property + def instrument_operation_counter(self) -> "Counter": + """ + Return the instrument for counting operations. + + This property returns the Counter instrument used to count the number of operations made during RPC operations. + This metric is useful for tracking the frequency of operations and can help identify patterns or issues in the operation flow. + + Returns: + Counter: The instrument for counting operations. + """ + return self._instrument_operation_counter + + @property + def instrument_operation_latency(self) -> "Histogram": + """ + Return the instrument for measuring operation latency. + + This property returns the Histogram instrument used to measure the latency of operations. + This metric is useful for tracking the performance of operations and can help identify bottlenecks or issues in the operation flow. + + Returns: + Histogram: The instrument for measuring operation latency. + """ + return self._instrument_operation_latency + + def record_attempt_start(self) -> None: + """ + Record the start of a new attempt within the current operation. + + This method increments the attempt count for the current operation and marks the start of a new attempt. + It is used to track the number of attempts made during an operation and to identify the start of each attempt for metrics and tracing purposes. + """ + self.current_op.increment_attempt_count() + self.current_op.new_attempt() + + def record_attempt_completion(self, status: str = StatusCode.OK.name) -> None: + """ + Record the completion of an attempt within the current operation. + + This method updates the status of the current attempt to indicate its completion and records the latency of the attempt. 
+ It calculates the elapsed time since the attempt started and uses this value to record the attempt latency metric. + This metric is useful for tracking the performance of individual attempts and can help identify bottlenecks or issues in the operation flow. + + If metrics tracing is not enabled, this method does not perform any operations. + """ + if not self.enabled or not HAS_OPENTELEMETRY_INSTALLED: + return + self.current_op.current_attempt.status = status + + # Build Attributes + attempt_attributes = self._create_attempt_otel_attributes() + + # Calculate elapsed time + attempt_latency_ms = self._get_ms_time_diff( + start=self.current_op.current_attempt.start_time, end=datetime.now() + ) + + # Record attempt latency + self.instrument_attempt_latency.record( + amount=attempt_latency_ms, attributes=attempt_attributes + ) + + def record_operation_start(self) -> None: + """ + Record the start of a new operation. + + This method marks the beginning of a new operation and initializes the operation's metrics tracking. + It is used to track the start time of an operation, which is essential for calculating operation latency and other metrics. + If metrics tracing is not enabled, this method does not perform any operations. + """ + if not self.enabled or not HAS_OPENTELEMETRY_INSTALLED: + return + self.current_op.start() + + def record_operation_completion(self) -> None: + """ + Record the completion of an operation. + + This method marks the end of an operation and updates the metrics accordingly. + It calculates the operation latency by measuring the time elapsed since the operation started and records this metric. + Additionally, it increments the operation count and records the attempt count for the operation. + If metrics tracing is not enabled, this method does not perform any operations. + """ + if not self.enabled or not HAS_OPENTELEMETRY_INSTALLED: + return + end_time = datetime.now() + # Build Attributes + operation_attributes = self._create_operation_otel_attributes() + attempt_attributes = self._create_attempt_otel_attributes() + + # Calculate elapsed time + operation_latency_ms = self._get_ms_time_diff( + start=self.current_op.start_time, end=end_time + ) + + # Increase operation count + self.instrument_operation_counter.add(amount=1, attributes=operation_attributes) + + # Record operation latency + self.instrument_operation_latency.record( + amount=operation_latency_ms, attributes=operation_attributes + ) + + # Record Attempt Count + self.instrument_attempt_counter.add( + self.current_op.attempt_count, attributes=attempt_attributes + ) + + def record_gfe_latency(self, latency: int) -> None: + """ + Records the GFE latency using the Histogram instrument. + + Args: + latency (int): The latency duration to be recorded. + """ + if not self.enabled or not HAS_OPENTELEMETRY_INSTALLED or not self.gfe_enabled: + return + self._instrument_gfe_latency.record( + amount=latency, attributes=self.client_attributes + ) + + def record_gfe_missing_header_count(self) -> None: + """ + Increments the counter for missing GFE headers. + """ + if not self.enabled or not HAS_OPENTELEMETRY_INSTALLED or not self.gfe_enabled: + return + self._instrument_gfe_missing_header_count.add( + amount=1, attributes=self.client_attributes + ) + + def _create_operation_otel_attributes(self) -> dict: + """ + Create additional attributes for operation metrics tracing. + + This method populates the client attributes dictionary with the operation status if metrics tracing is enabled. 
+ It returns the updated client attributes dictionary. + """ + if not self.enabled or not HAS_OPENTELEMETRY_INSTALLED: + return {} + attributes = self._client_attributes.copy() + attributes[METRIC_LABEL_KEY_STATUS] = self.current_op.status + return attributes + + def _create_attempt_otel_attributes(self) -> dict: + """ + Create additional attributes for attempt metrics tracing. + + This method populates the attributes dictionary with the attempt status if metrics tracing is enabled and an attempt exists. + It returns the updated attributes dictionary. + """ + if not self.enabled or not HAS_OPENTELEMETRY_INSTALLED: + return {} + + attributes = self._client_attributes.copy() + + # Short circuit out if we don't have an attempt + if self.current_op.current_attempt is None: + return attributes + + attributes[METRIC_LABEL_KEY_STATUS] = self.current_op.current_attempt.status + return attributes + + def set_project(self, project: str) -> "MetricsTracer": + """ + Set the project attribute for metrics tracing. + + This method updates the project attribute in the client attributes dictionary for metrics tracing purposes. + If the project attribute already has a value, this method does nothing and returns. + + :param project: The project name to set. + :return: This instance of MetricsTracer for method chaining. + """ + if MONITORED_RES_LABEL_KEY_PROJECT not in self._client_attributes: + self._client_attributes[MONITORED_RES_LABEL_KEY_PROJECT] = project + return self + + def set_instance(self, instance: str) -> "MetricsTracer": + """ + Set the instance attribute for metrics tracing. + + This method updates the instance attribute in the client attributes dictionary for metrics tracing purposes. + If the instance attribute already has a value, this method does nothing and returns. + + :param instance: The instance name to set. + :return: This instance of MetricsTracer for method chaining. + """ + if MONITORED_RES_LABEL_KEY_INSTANCE not in self._client_attributes: + self._client_attributes[MONITORED_RES_LABEL_KEY_INSTANCE] = instance + return self + + def set_instance_config(self, instance_config: str) -> "MetricsTracer": + """ + Set the instance configuration attribute for metrics tracing. + + This method updates the instance configuration attribute in the client attributes dictionary for metrics tracing purposes. + If the instance configuration attribute already has a value, this method does nothing and returns. + + :param instance_config: The instance configuration name to set. + :return: This instance of MetricsTracer for method chaining. + """ + if MONITORED_RES_LABEL_KEY_INSTANCE_CONFIG not in self._client_attributes: + self._client_attributes[ + MONITORED_RES_LABEL_KEY_INSTANCE_CONFIG + ] = instance_config + return self + + def set_location(self, location: str) -> "MetricsTracer": + """ + Set the location attribute for metrics tracing. + + This method updates the location attribute in the client attributes dictionary for metrics tracing purposes. + If the location attribute already has a value, this method does nothing and returns. + + :param location: The location name to set. + :return: This instance of MetricsTracer for method chaining. + """ + if MONITORED_RES_LABEL_KEY_LOCATION not in self._client_attributes: + self._client_attributes[MONITORED_RES_LABEL_KEY_LOCATION] = location + return self + + def set_client_hash(self, hash: str) -> "MetricsTracer": + """ + Set the client hash attribute for metrics tracing. 
+
+        This method updates the client hash attribute in the client attributes dictionary for metrics tracing purposes.
+        If the client hash attribute already has a value, this method does nothing and returns.
+
+        :param hash: The client hash to set.
+        :return: This instance of MetricsTracer for method chaining.
+        """
+        if MONITORED_RES_LABEL_KEY_CLIENT_HASH not in self._client_attributes:
+            self._client_attributes[MONITORED_RES_LABEL_KEY_CLIENT_HASH] = hash
+        return self
+
+    def set_client_uid(self, client_uid: str) -> "MetricsTracer":
+        """
+        Set the client UID attribute for metrics tracing.
+
+        This method updates the client UID attribute in the client attributes dictionary for metrics tracing purposes.
+        If the client UID attribute already has a value, this method does nothing and returns.
+
+        :param client_uid: The client UID to set.
+        :return: This instance of MetricsTracer for method chaining.
+        """
+        if METRIC_LABEL_KEY_CLIENT_UID not in self._client_attributes:
+            self._client_attributes[METRIC_LABEL_KEY_CLIENT_UID] = client_uid
+        return self
+
+    def set_client_name(self, client_name: str) -> "MetricsTracer":
+        """
+        Set the client name attribute for metrics tracing.
+
+        This method updates the client name attribute in the client attributes dictionary for metrics tracing purposes.
+        If the client name attribute already has a value, this method does nothing and returns.
+
+        :param client_name: The client name to set.
+        :return: This instance of MetricsTracer for method chaining.
+        """
+        if METRIC_LABEL_KEY_CLIENT_NAME not in self._client_attributes:
+            self._client_attributes[METRIC_LABEL_KEY_CLIENT_NAME] = client_name
+        return self
+
+    def set_database(self, database: str) -> "MetricsTracer":
+        """
+        Set the database attribute for metrics tracing.
+
+        This method updates the database attribute in the client attributes dictionary for metrics tracing purposes.
+        If the database attribute already has a value, this method does nothing and returns.
+
+        :param database: The database name to set.
+        :return: This instance of MetricsTracer for method chaining.
+        """
+        if METRIC_LABEL_KEY_DATABASE not in self._client_attributes:
+            self._client_attributes[METRIC_LABEL_KEY_DATABASE] = database
+        return self
+
+    def set_method(self, method: str) -> "MetricsTracer":
+        """
+        Set the method attribute for metrics tracing.
+
+        This method updates the method attribute in the client attributes dictionary for metrics tracing purposes.
+        If the method attribute already has a value, this method does nothing and returns.
+
+        :param method: The method name to set.
+        :return: This instance of MetricsTracer for method chaining.
+        """
+        if METRIC_LABEL_KEY_METHOD not in self._client_attributes:
+            self._client_attributes[METRIC_LABEL_KEY_METHOD] = method
+        return self
+
+    def enable_direct_path(self, enable: bool = False) -> "MetricsTracer":
+        """
+        Enable or disable the direct path for metrics tracing.
+
+        This method updates the direct path enabled attribute in the client attributes dictionary for metrics tracing purposes.
+        If the direct path enabled attribute already has a value, this method does nothing and returns.
+
+        :param enable: Boolean indicating whether to enable the direct path.
+        :return: This instance of MetricsTracer for method chaining.
+        """
+        if METRIC_LABEL_KEY_DIRECT_PATH_ENABLED not in self._client_attributes:
+            self._client_attributes[METRIC_LABEL_KEY_DIRECT_PATH_ENABLED] = str(enable)
+        return self
diff --git a/google/cloud/spanner_v1/metrics/metrics_tracer_factory.py b/google/cloud/spanner_v1/metrics/metrics_tracer_factory.py
new file mode 100644
index 0000000000..ed4b270f06
--- /dev/null
+++ b/google/cloud/spanner_v1/metrics/metrics_tracer_factory.py
@@ -0,0 +1,328 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Factory for creating MetricTracer instances, facilitating metrics collection and tracing."""
+
+from google.cloud.spanner_v1.metrics.metrics_tracer import MetricsTracer
+
+from google.cloud.spanner_v1.metrics.constants import (
+    METRIC_NAME_OPERATION_LATENCIES,
+    MONITORED_RES_LABEL_KEY_PROJECT,
+    METRIC_NAME_ATTEMPT_LATENCIES,
+    METRIC_NAME_OPERATION_COUNT,
+    METRIC_NAME_ATTEMPT_COUNT,
+    MONITORED_RES_LABEL_KEY_INSTANCE,
+    MONITORED_RES_LABEL_KEY_INSTANCE_CONFIG,
+    MONITORED_RES_LABEL_KEY_LOCATION,
+    MONITORED_RES_LABEL_KEY_CLIENT_HASH,
+    METRIC_LABEL_KEY_CLIENT_UID,
+    METRIC_LABEL_KEY_CLIENT_NAME,
+    METRIC_LABEL_KEY_DATABASE,
+    METRIC_LABEL_KEY_DIRECT_PATH_ENABLED,
+    BUILT_IN_METRICS_METER_NAME,
+    METRIC_NAME_GFE_LATENCY,
+    METRIC_NAME_GFE_MISSING_HEADER_COUNT,
+)
+
+from typing import Dict
+
+try:
+    from opentelemetry.metrics import Counter, Histogram, get_meter_provider
+
+    HAS_OPENTELEMETRY_INSTALLED = True
+except ImportError:  # pragma: NO COVER
+    HAS_OPENTELEMETRY_INSTALLED = False
+
+from google.cloud.spanner_v1 import __version__
+
+
+class MetricsTracerFactory:
+    """Factory class for creating MetricTracer instances. This class facilitates the creation of MetricTracer objects, which are responsible for collecting and tracing metrics."""
+
+    enabled: bool
+    gfe_enabled: bool
+    _instrument_attempt_latency: "Histogram"
+    _instrument_attempt_counter: "Counter"
+    _instrument_operation_latency: "Histogram"
+    _instrument_operation_counter: "Counter"
+    _instrument_gfe_latency: "Histogram"
+    _instrument_gfe_missing_header_count: "Counter"
+    _client_attributes: Dict[str, str]
+
+    @property
+    def instrument_attempt_latency(self) -> "Histogram":
+        return self._instrument_attempt_latency
+
+    @property
+    def instrument_attempt_counter(self) -> "Counter":
+        return self._instrument_attempt_counter
+
+    @property
+    def instrument_operation_latency(self) -> "Histogram":
+        return self._instrument_operation_latency
+
+    @property
+    def instrument_operation_counter(self) -> "Counter":
+        return self._instrument_operation_counter
+
+    def __init__(self, enabled: bool, service_name: str):
+        """Initialize a MetricsTracerFactory instance with the given parameters.
+
+        This constructor sets up a MetricsTracerFactory instance for the given service name, creating the metric instruments and an empty set of client attributes used for metrics tracing.
+
+        Args:
+            enabled (bool): Indicates whether metrics tracing is enabled.
+            service_name (str): The name of the service for which metrics are being traced.
+        """
+        self.enabled = enabled
+        self._create_metric_instruments(service_name)
+        self._client_attributes = {}
+
+    @property
+    def client_attributes(self) -> Dict[str, str]:
+        """Return a dictionary of client attributes used for metrics tracing.
+
+        This property returns a dictionary containing client attributes such as project, instance,
+        instance configuration, location, client hash, client UID, client name, and database.
+        These attributes are used to provide context to the metrics being traced.
+
+        Returns:
+            dict[str, str]: A dictionary of client attributes.
+        """
+        return self._client_attributes
+
+    def set_project(self, project: str) -> "MetricsTracerFactory":
+        """Set the project attribute for metrics tracing.
+
+        This method updates the client attributes dictionary with the provided project name.
+        The project name is used to identify the project for which metrics are being traced
+        and is passed to the created MetricsTracer.
+
+        Args:
+            project (str): The name of the project for metrics tracing.
+
+        Returns:
+            MetricsTracerFactory: The current instance of MetricsTracerFactory to enable method chaining.
+        """
+        self._client_attributes[MONITORED_RES_LABEL_KEY_PROJECT] = project
+        return self
+
+    def set_instance(self, instance: str) -> "MetricsTracerFactory":
+        """Set the instance attribute for metrics tracing.
+
+        This method updates the client attributes dictionary with the provided instance name.
+        The instance name is used to identify the instance for which metrics are being traced
+        and is passed to the created MetricsTracer.
+
+        Args:
+            instance (str): The name of the instance for metrics tracing.
+
+        Returns:
+            MetricsTracerFactory: The current instance of MetricsTracerFactory to enable method chaining.
+        """
+        self._client_attributes[MONITORED_RES_LABEL_KEY_INSTANCE] = instance
+        return self
+
+    def set_instance_config(self, instance_config: str) -> "MetricsTracerFactory":
+        """Sets the instance configuration attribute for metrics tracing.
+
+        This method updates the client attributes dictionary with the provided instance configuration.
+        The instance configuration is used to identify the configuration of the instance for which
+        metrics are being traced and is passed to the created MetricsTracer.
+
+        Args:
+            instance_config (str): The configuration of the instance for metrics tracing.
+
+        Returns:
+            MetricsTracerFactory: The current instance of MetricsTracerFactory to enable method chaining.
+        """
+        self._client_attributes[
+            MONITORED_RES_LABEL_KEY_INSTANCE_CONFIG
+        ] = instance_config
+        return self
+
+    def set_location(self, location: str) -> "MetricsTracerFactory":
+        """Set the location attribute for metrics tracing.
+
+        This method updates the client attributes dictionary with the provided location.
+        The location is used to identify the location for which metrics are being traced
+        and is passed to the created MetricsTracer.
+
+        Args:
+            location (str): The location for metrics tracing.
+
+        Returns:
+            MetricsTracerFactory: The current instance of MetricsTracerFactory to enable method chaining.
+        """
+        self._client_attributes[MONITORED_RES_LABEL_KEY_LOCATION] = location
+        return self
+
+    def set_client_hash(self, hash: str) -> "MetricsTracerFactory":
+        """Set the client hash attribute for metrics tracing.
+
+        This method updates the client attributes dictionary with the provided client hash.
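Both `MetricsTracer` and `MetricsTracerFactory` expose fluent `set_*` methods that return `self`; on the tracer they additionally follow a first-write-wins rule. A minimal standalone sketch of that pattern (the `AttrBag` class is hypothetical):

```python
from typing import Dict


class AttrBag:
    """Hypothetical stand-in showing the fluent, first-write-wins setters."""

    def __init__(self) -> None:
        self._attrs: Dict[str, str] = {}

    def set_project(self, project: str) -> "AttrBag":
        if "project_id" not in self._attrs:  # only the first value sticks
            self._attrs["project_id"] = project
        return self


bag = AttrBag().set_project("p1").set_project("p2")
print(bag._attrs)  # {'project_id': 'p1'}
```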
+ The client hash is used to identify the client for which metrics are being traced + and is passed to the created MetricsTracer. + + Args: + hash (str): The hash of the client for metrics tracing. + + Returns: + MetricsTracerFactory: The current instance of MetricsTracerFactory to enable method chaining. + """ + self._client_attributes[MONITORED_RES_LABEL_KEY_CLIENT_HASH] = hash + return self + + def set_client_uid(self, client_uid: str) -> "MetricsTracerFactory": + """Set the client UID attribute for metrics tracing. + + This method updates the client attributes dictionary with the provided client UID. + The client UID is used to identify the client for which metrics are being traced + and is passed to the created MetricsTracer. + + Args: + client_uid (str): The UID of the client for metrics tracing. + + Returns: + MetricsTracerFactory: The current instance of MetricsTracerFactory to enable method chaining. + """ + self._client_attributes[METRIC_LABEL_KEY_CLIENT_UID] = client_uid + return self + + def set_client_name(self, client_name: str) -> "MetricsTracerFactory": + """Set the client name attribute for metrics tracing. + + This method updates the client attributes dictionary with the provided client name. + The client name is used to identify the client for which metrics are being traced + and is passed to the created MetricsTracer. + + Args: + client_name (str): The name of the client for metrics tracing. + + Returns: + MetricsTracerFactory: The current instance of MetricsTracerFactory to enable method chaining. + """ + self._client_attributes[METRIC_LABEL_KEY_CLIENT_NAME] = client_name + return self + + def set_database(self, database: str) -> "MetricsTracerFactory": + """Set the database attribute for metrics tracing. + + This method updates the client attributes dictionary with the provided database name. + The database name is used to identify the database for which metrics are being traced + and is passed to the created MetricsTracer. + + Args: + database (str): The name of the database for metrics tracing. + + Returns: + MetricsTracerFactory: The current instance of MetricsTracerFactory to enable method chaining. + """ + self._client_attributes[METRIC_LABEL_KEY_DATABASE] = database + return self + + def enable_direct_path(self, enable: bool = False) -> "MetricsTracerFactory": + """Enable or disable the direct path for metrics tracing. + + This method updates the client attributes dictionary with the provided enable status. + The direct path enabled status is used to determine whether to use the direct path for metrics tracing + and is passed to the created MetricsTracer. + + Args: + enable (bool, optional): Whether to enable the direct path for metrics tracing. Defaults to False. + + Returns: + MetricsTracerFactory: The current instance of MetricsTracerFactory to enable method chaining. + """ + self._client_attributes[METRIC_LABEL_KEY_DIRECT_PATH_ENABLED] = enable + return self + + def create_metrics_tracer(self) -> MetricsTracer: + """ + Create and return a MetricsTracer instance with default settings and client attributes. + + This method initializes a MetricsTracer instance with default settings for metrics tracing, + including metrics tracing enabled if OpenTelemetry is installed and the direct path disabled by default. + It also sets the client attributes based on the factory's configuration. + + Returns: + MetricsTracer: A MetricsTracer instance with default settings and client attributes. 
+ """ + if not HAS_OPENTELEMETRY_INSTALLED: + return None + + metrics_tracer = MetricsTracer( + enabled=self.enabled and HAS_OPENTELEMETRY_INSTALLED, + instrument_attempt_latency=self._instrument_attempt_latency, + instrument_attempt_counter=self._instrument_attempt_counter, + instrument_operation_latency=self._instrument_operation_latency, + instrument_operation_counter=self._instrument_operation_counter, + client_attributes=self._client_attributes.copy(), + ) + return metrics_tracer + + def _create_metric_instruments(self, service_name: str) -> None: + """ + Creates and sets up metric instruments for the given service name. + + This method initializes and configures metric instruments for attempt latency, attempt counter, + operation latency, and operation counter. These instruments are used to measure and track + metrics related to attempts and operations within the service. + + Args: + service_name (str): The name of the service for which metric instruments are being created. + """ + if not HAS_OPENTELEMETRY_INSTALLED: # pragma: NO COVER + return + + meter_provider = get_meter_provider() + meter = meter_provider.get_meter( + name=BUILT_IN_METRICS_METER_NAME, version=__version__ + ) + + self._instrument_attempt_latency = meter.create_histogram( + name=METRIC_NAME_ATTEMPT_LATENCIES, + unit="ms", + description="Time an individual attempt took.", + ) + + self._instrument_attempt_counter = meter.create_counter( + name=METRIC_NAME_ATTEMPT_COUNT, + unit="1", + description="Number of attempts.", + ) + + self._instrument_operation_latency = meter.create_histogram( + name=METRIC_NAME_OPERATION_LATENCIES, + unit="ms", + description="Total time until final operation success or failure, including retries and backoff.", + ) + + self._instrument_operation_counter = meter.create_counter( + name=METRIC_NAME_OPERATION_COUNT, + unit="1", + description="Number of operations.", + ) + + self._instrument_gfe_latency = meter.create_histogram( + name=METRIC_NAME_GFE_LATENCY, + unit="ms", + description="GFE Latency.", + ) + + self._instrument_gfe_missing_header_count = meter.create_counter( + name=METRIC_NAME_GFE_MISSING_HEADER_COUNT, + unit="1", + description="GFE missing header count.", + ) diff --git a/google/cloud/spanner_v1/metrics/spanner_metrics_tracer_factory.py b/google/cloud/spanner_v1/metrics/spanner_metrics_tracer_factory.py new file mode 100644 index 0000000000..fd00c4de9c --- /dev/null +++ b/google/cloud/spanner_v1/metrics/spanner_metrics_tracer_factory.py @@ -0,0 +1,172 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + +"""This module provides a singleton factory for creating SpannerMetricsTracer instances.""" + +from .metrics_tracer_factory import MetricsTracerFactory +import os +from .constants import ( + SPANNER_SERVICE_NAME, + GOOGLE_CLOUD_REGION_KEY, + GOOGLE_CLOUD_REGION_GLOBAL, +) + +try: + from opentelemetry.resourcedetector import gcp_resource_detector + + # Overwrite the requests timeout for the detector. 
+    # This is necessary as the client will wait the full timeout if the
+    # code is not run in a GCP environment, with the location endpoints available.
+    gcp_resource_detector._TIMEOUT_SEC = 0.2
+
+    import mmh3
+
+    # Override Resource detector logging to not warn when GCP resources are not detected
+    import logging
+
+    logging.getLogger("opentelemetry.resourcedetector.gcp_resource_detector").setLevel(
+        logging.ERROR
+    )
+
+    HAS_OPENTELEMETRY_INSTALLED = True
+except ImportError:  # pragma: NO COVER
+    HAS_OPENTELEMETRY_INSTALLED = False
+
+from .metrics_tracer import MetricsTracer
+from google.cloud.spanner_v1 import __version__
+from uuid import uuid4
+
+
+class SpannerMetricsTracerFactory(MetricsTracerFactory):
+    """A factory for creating SpannerMetricsTracer instances."""
+
+    _metrics_tracer_factory: "SpannerMetricsTracerFactory" = None
+    current_metrics_tracer: MetricsTracer = None
+
+    def __new__(
+        cls, enabled: bool = True, gfe_enabled: bool = False
+    ) -> "SpannerMetricsTracerFactory":
+        """
+        Create a new instance of SpannerMetricsTracerFactory if it doesn't already exist.
+
+        This method implements the singleton pattern for the SpannerMetricsTracerFactory class.
+        It initializes the factory with the necessary client attributes and configuration settings
+        if it hasn't been created yet.
+
+        Args:
+            enabled (bool): A flag indicating whether metrics tracing is enabled. Defaults to True.
+            gfe_enabled (bool): A flag indicating whether GFE metrics are enabled. Defaults to False.
+
+        Returns:
+            SpannerMetricsTracerFactory: The singleton instance of SpannerMetricsTracerFactory.
+        """
+        if cls._metrics_tracer_factory is None:
+            cls._metrics_tracer_factory = MetricsTracerFactory(
+                enabled, SPANNER_SERVICE_NAME
+            )
+            if not HAS_OPENTELEMETRY_INSTALLED:
+                return cls._metrics_tracer_factory
+
+            client_uid = cls._generate_client_uid()
+            cls._metrics_tracer_factory.set_client_uid(client_uid)
+            cls._metrics_tracer_factory.set_instance_config(cls._get_instance_config())
+            cls._metrics_tracer_factory.set_client_name(cls._get_client_name())
+            cls._metrics_tracer_factory.set_client_hash(
+                cls._generate_client_hash(client_uid)
+            )
+            cls._metrics_tracer_factory.set_location(cls._get_location())
+            cls._metrics_tracer_factory.gfe_enabled = gfe_enabled
+
+        if cls._metrics_tracer_factory.enabled != enabled:
+            cls._metrics_tracer_factory.enabled = enabled
+
+        return cls._metrics_tracer_factory
+
+    @staticmethod
+    def _generate_client_uid() -> str:
+        """Generate a client UID in the form of uuidv4@pid@hostname.
+
+        This method generates a unique client identifier (UID) by combining a UUID version 4,
+        the process ID (PID), and the hostname. The PID is limited to the first 10 characters.
+
+        Returns:
+            str: A string representing the client UID in the format uuidv4@pid@hostname.
+        """
+        try:
+            hostname = os.uname()[1]
+            pid = str(os.getpid())[0:10]  # Limit PID to 10 characters
+            uuid = uuid4()
+            return f"{uuid}@{pid}@{hostname}"
+        except Exception:
+            return ""
+
+    @staticmethod
+    def _get_instance_config() -> str:
+        """Get the instance configuration."""
+        # TODO: unknown until there's a good way to get it.
+        return "unknown"
+
+    @staticmethod
+    def _get_client_name() -> str:
+        """Get the client name."""
+        return f"{SPANNER_SERVICE_NAME}/{__version__}"
+
+    @staticmethod
+    def _generate_client_hash(client_uid: str) -> str:
+        """
+        Generate a 6-digit zero-padded lowercase hexadecimal hash using the 10 most significant bits of a 64-bit hash value.
+
+        The primary purpose of this function is to generate a hash value for the `client_hash`
+        resource label using the `client_uid` metric field. The range of values is chosen to be small
+        enough to keep the cardinality of the Resource targets under control. Note: If at a later time
+        the range needs to be increased, it can be done by increasing the value of `kPrefixLength` to
+        up to 24 bits without changing the format of the returned value.
+
+        Args:
+            client_uid (str): The client UID used to generate the hash.
+
+        Returns:
+            str: A 6-digit zero-padded lowercase hexadecimal hash.
+        """
+        if not client_uid:
+            return "000000"
+        hashed_client = mmh3.hash64(client_uid)
+
+        # Combine the two halves returned by mmh3.hash64 into a single value
+        full_hash = (hashed_client[0] << 32) | (hashed_client[1] & 0xFFFFFFFF)
+        unsigned_hash = full_hash & 0xFFFFFFFFFFFFFFFF
+
+        k_prefix_length = 10
+        sig_figs = unsigned_hash >> (64 - k_prefix_length)
+
+        # Return as 6 digit zero padded hex string
+        return f"{sig_figs:06x}"
+
+    @staticmethod
+    def _get_location() -> str:
+        """Get the location of the resource.
+
+        Returns:
+            str: The location of the resource. If OpenTelemetry is not installed, returns a global region.
+        """
+        if not HAS_OPENTELEMETRY_INSTALLED:
+            return GOOGLE_CLOUD_REGION_GLOBAL
+        detector = gcp_resource_detector.GoogleCloudResourceDetector()
+        resources = detector.detect()
+
+        if GOOGLE_CLOUD_REGION_KEY not in resources.attributes:
+            return GOOGLE_CLOUD_REGION_GLOBAL
+        else:
+            return resources.attributes[GOOGLE_CLOUD_REGION_KEY]
diff --git a/google/cloud/spanner_v1/param_types.py b/google/cloud/spanner_v1/param_types.py
index 47442bfc4b..72127c0e0b 100644
--- a/google/cloud/spanner_v1/param_types.py
+++ b/google/cloud/spanner_v1/param_types.py
@@ -14,55 +14,97 @@
 """Types exported from this package."""
 
-from google.cloud.spanner_v1.proto import type_pb2
+from google.cloud.spanner_v1 import Type
+from google.cloud.spanner_v1 import TypeAnnotationCode
+from google.cloud.spanner_v1 import TypeCode
+from google.cloud.spanner_v1 import StructType
+from google.protobuf.message import Message
+from google.protobuf.internal.enum_type_wrapper import EnumTypeWrapper
 
 # Scalar parameter types
-STRING = type_pb2.Type(code=type_pb2.STRING)
-BYTES = type_pb2.Type(code=type_pb2.BYTES)
-BOOL = type_pb2.Type(code=type_pb2.BOOL)
-INT64 = type_pb2.Type(code=type_pb2.INT64)
-FLOAT64 = type_pb2.Type(code=type_pb2.FLOAT64)
-DATE = type_pb2.Type(code=type_pb2.DATE)
-TIMESTAMP = type_pb2.Type(code=type_pb2.TIMESTAMP)
-
-
-def Array(element_type):  # pylint: disable=invalid-name
+STRING = Type(code=TypeCode.STRING)
+BYTES = Type(code=TypeCode.BYTES)
+BOOL = Type(code=TypeCode.BOOL)
+INT64 = Type(code=TypeCode.INT64)
+FLOAT64 = Type(code=TypeCode.FLOAT64)
+FLOAT32 = Type(code=TypeCode.FLOAT32)
+DATE = Type(code=TypeCode.DATE)
+TIMESTAMP = Type(code=TypeCode.TIMESTAMP)
+NUMERIC = Type(code=TypeCode.NUMERIC)
+JSON = Type(code=TypeCode.JSON)
+PG_NUMERIC = Type(code=TypeCode.NUMERIC, type_annotation=TypeAnnotationCode.PG_NUMERIC)
+PG_JSONB = Type(code=TypeCode.JSON, type_annotation=TypeAnnotationCode.PG_JSONB)
+PG_OID = Type(code=TypeCode.INT64, type_annotation=TypeAnnotationCode.PG_OID)
+INTERVAL = Type(code=TypeCode.INTERVAL)
+
+
+def Array(element_type):
     """Construct an array parameter type description protobuf.
-    :type element_type: :class:`type_pb2.Type`
+    :type element_type: :class:`~google.cloud.spanner_v1.types.Type`
     :param element_type: the type of elements of the array
 
-    :rtype: :class:`type_pb2.Type`
+    :rtype: :class:`google.cloud.spanner_v1.types.Type`
     :returns: the appropriate array-type protobuf
     """
-    return type_pb2.Type(code=type_pb2.ARRAY, array_element_type=element_type)
+    return Type(code=TypeCode.ARRAY, array_element_type=element_type)
 
 
-def StructField(name, field_type):  # pylint: disable=invalid-name
+def StructField(name, field_type):
    """Construct a field description protobuf.
 
    :type name: str
    :param name: the name of the field
 
-    :type field_type: :class:`type_pb2.Type`
+    :type field_type: :class:`google.cloud.spanner_v1.types.Type`
    :param field_type: the type of the field
 
-    :rtype: :class:`type_pb2.StructType.Field`
+    :rtype: :class:`google.cloud.spanner_v1.types.StructType.Field`
    :returns: the appropriate struct-field-type protobuf
    """
-    return type_pb2.StructType.Field(name=name, type=field_type)
+    return StructType.Field(name=name, type_=field_type)
 
 
-def Struct(fields):  # pylint: disable=invalid-name
+def Struct(fields):
    """Construct a struct parameter type description protobuf.
 
-    :type fields: list of :class:`type_pb2.StructType.Field`
+    :type fields: list of :class:`google.cloud.spanner_v1.types.StructType.Field`
    :param fields: the fields of the struct
 
-    :rtype: :class:`type_pb2.Type`
+    :rtype: :class:`google.cloud.spanner_v1.types.Type`
    :returns: the appropriate struct-type protobuf
    """
-    return type_pb2.Type(
-        code=type_pb2.STRUCT, struct_type=type_pb2.StructType(fields=fields)
+    return Type(code=TypeCode.STRUCT, struct_type=StructType(fields=fields))
+
+
+def ProtoMessage(proto_message_object):
+    """Construct a proto message type description protobuf.
+
+    :type proto_message_object: :class:`google.protobuf.message.Message`
+    :param proto_message_object: the proto message instance
+
+    :rtype: :class:`google.cloud.spanner_v1.types.Type`
+    :returns: the appropriate proto-message-type protobuf
+    """
+    if not isinstance(proto_message_object, Message):
+        raise ValueError("Expected input object of type Proto Message.")
+    return Type(
+        code=TypeCode.PROTO, proto_type_fqn=proto_message_object.DESCRIPTOR.full_name
+    )
+
+
+def ProtoEnum(proto_enum_object):
+    """Construct a proto enum type description protobuf.
+
+    :type proto_enum_object: :class:`google.protobuf.internal.enum_type_wrapper.EnumTypeWrapper`
+    :param proto_enum_object: the proto enum instance
+
+    :rtype: :class:`google.cloud.spanner_v1.types.Type`
+    :returns: the appropriate proto-enum-type protobuf
+    """
+    if not isinstance(proto_enum_object, EnumTypeWrapper):
+        raise ValueError("Expected input object of type Proto Enum.")
+    return Type(
+        code=TypeCode.ENUM, proto_type_fqn=proto_enum_object.DESCRIPTOR.full_name
     )
diff --git a/google/cloud/spanner_v1/pool.py b/google/cloud/spanner_v1/pool.py
index cf3413ceb1..a75c13cb7a 100644
--- a/google/cloud/spanner_v1/pool.py
+++ b/google/cloud/spanner_v1/pool.py
@@ -15,12 +15,25 @@
 """Pools managing shared Session objects."""
 
 import datetime
-
-from six.moves import queue
+import queue
+import time
 
 from google.cloud.exceptions import NotFound
-from google.cloud.spanner_v1._helpers import _metadata_with_prefix
-
+from google.cloud.spanner_v1 import BatchCreateSessionsRequest
+from google.cloud.spanner_v1 import Session as SessionProto
+from google.cloud.spanner_v1.session import Session
+from google.cloud.spanner_v1._helpers import (
+    _metadata_with_prefix,
+    _metadata_with_leader_aware_routing,
+)
+from google.cloud.spanner_v1._opentelemetry_tracing import (
+    add_span_event,
+    get_current_span,
+    trace_call,
+)
+from warnings import warn
+
+from google.cloud.spanner_v1.metrics.metrics_capture import MetricsCapture
 
 _NOW = datetime.datetime.utcnow  # unit tests may replace
 
@@ -31,29 +44,42 @@ class AbstractSessionPool(object):
     :type labels: dict (str -> str) or None
     :param labels: (Optional) user-assigned labels for sessions created
                    by the pool.
+
+    :type database_role: str
+    :param database_role: (Optional) user-assigned database_role for the session.
     """
 
     _database = None
 
-    def __init__(self, labels=None):
+    def __init__(self, labels=None, database_role=None):
         if labels is None:
             labels = {}
         self._labels = labels
+        self._database_role = database_role
 
     @property
     def labels(self):
-        """User-assigned labels for sesions created by the pool.
+        """User-assigned labels for sessions created by the pool.
 
         :rtype: dict (str -> str)
         :returns: labels assigned by the user
         """
         return self._labels
 
+    @property
+    def database_role(self):
+        """User-assigned database_role for sessions created by the pool.
+
+        :rtype: str
+        :returns: database_role assigned by the user
+        """
+        return self._database_role
+
     def bind(self, database):
         """Associate the pool with a database.
 
         :type database: :class:`~google.cloud.spanner_v1.database.Database`
-        :param database: database used by the pool: used to create sessions
+        :param database: database used by the pool to create sessions
                          when needed.
 
         Concrete implementations of this method may pre-fill the pool
@@ -105,13 +131,17 @@ def _new_session(self):
         :rtype: :class:`~google.cloud.spanner_v1.session.Session`
         :returns: new session instance.
         """
-        if self.labels:
-            return self._database.session(labels=self.labels)
-        return self._database.session()
+
+        role = self.database_role or self._database.database_role
+        return Session(database=self._database, labels=self.labels, database_role=role)
 
     def session(self, **kwargs):
         """Check out a session from the pool.
 
+        Deprecated. Sessions should be checked out indirectly using context
+        managers or :meth:`~google.cloud.spanner_v1.database.Database.run_in_transaction`,
+        rather than checked out directly from the pool.
+
         :param kwargs: (optional) keyword arguments, passed through to
             the returned checkout.
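The ``param_types`` changes above turn the module's constants into first-class ``Type`` objects and add ``ProtoMessage`` / ``ProtoEnum`` helpers. A minimal sketch of how these helpers bind typed query parameters; the ``database`` object, table, and parameter names are illustrative, not part of the diff:

.. code-block:: python

    from google.cloud.spanner_v1 import param_types

    # Scalar parameter types are plain Type objects.
    params = {"first_name": "Alice", "age": 30}
    types = {
        "first_name": param_types.STRING,
        "age": param_types.INT64,
    }

    # The Array helper composes over any element type, e.g. ARRAY<STRING>.
    tag_list_type = param_types.Array(param_types.STRING)

    with database.snapshot() as snapshot:
        rows = list(
            snapshot.execute_sql(
                "SELECT SingerId FROM Singers "
                "WHERE FirstName = @first_name AND Age = @age",
                params=params,
                param_types=types,
            )
        )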
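Likewise, the ``database_role`` parameter threaded through the pools below lets every session a pool creates run under a Cloud Spanner database role. A short sketch, assuming an existing instance and database (the instance, database, and role names are illustrative):

.. code-block:: python

    from google.cloud import spanner

    client = spanner.Client()
    instance = client.instance("my-instance")

    # Sessions created by this pool carry the "analyst" database role.
    pool = spanner.FixedSizePool(size=10, database_role="analyst")
    database = instance.database("my-database", pool=pool)

    # Prefer context managers over checking sessions out of the pool directly.
    with database.snapshot() as snapshot:
        rows = list(snapshot.execute_sql("SELECT 1"))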
@@ -128,7 +158,8 @@ class FixedSizePool(AbstractSessionPool):
 
     - Pre-allocates / creates a fixed number of sessions.
 
     - "Pings" existing sessions via :meth:`session.exists` before returning
-      them, and replaces expired sessions.
+      sessions that have not been used for more than 55 minutes, and replaces
+      expired sessions.
 
     - Blocks, with a timeout, when :meth:`get` is called on an empty pool.
       Raises after timing out.
@@ -147,38 +178,116 @@ class FixedSizePool(AbstractSessionPool):
     :type labels: dict (str -> str) or None
     :param labels: (Optional) user-assigned labels for sessions created
                    by the pool.
+
+    :type database_role: str
+    :param database_role: (Optional) user-assigned database_role for the session.
     """
 
     DEFAULT_SIZE = 10
     DEFAULT_TIMEOUT = 10
-
-    def __init__(self, size=DEFAULT_SIZE, default_timeout=DEFAULT_TIMEOUT, labels=None):
-        super(FixedSizePool, self).__init__(labels=labels)
+    DEFAULT_MAX_AGE_MINUTES = 55
+
+    def __init__(
+        self,
+        size=DEFAULT_SIZE,
+        default_timeout=DEFAULT_TIMEOUT,
+        labels=None,
+        database_role=None,
+        max_age_minutes=DEFAULT_MAX_AGE_MINUTES,
+    ):
+        super(FixedSizePool, self).__init__(labels=labels, database_role=database_role)
         self.size = size
         self.default_timeout = default_timeout
         self._sessions = queue.LifoQueue(size)
+        self._max_age = datetime.timedelta(minutes=max_age_minutes)
 
     def bind(self, database):
         """Associate the pool with a database.
 
         :type database: :class:`~google.cloud.spanner_v1.database.Database`
-        :param database: database used by the pool: used to create sessions
+        :param database: database used by the pool to create sessions
                          when needed.
         """
         self._database = database
+        requested_session_count = self.size - self._sessions.qsize()
+        span = get_current_span()
+        span_event_attributes = {"kind": type(self).__name__}
+
+        if requested_session_count <= 0:
+            add_span_event(
+                span,
+                f"Invalid session pool size({requested_session_count}) <= 0",
+                span_event_attributes,
+            )
+            return
+
         api = database.spanner_api
         metadata = _metadata_with_prefix(database.name)
+        if database._route_to_leader_enabled:
+            metadata.append(
+                _metadata_with_leader_aware_routing(database._route_to_leader_enabled)
+            )
+        self._database_role = self._database_role or self._database.database_role
+        add_span_event(
+            span,
+            f"Requesting {requested_session_count} sessions",
+            span_event_attributes,
+        )
+
+        if self._sessions.full():
+            add_span_event(span, "Session pool is already full", span_event_attributes)
+            return
 
-        while not self._sessions.full():
-            resp = api.batch_create_sessions(
-                database.name, self.size - self._sessions.qsize(), metadata=metadata
+        request = BatchCreateSessionsRequest(
+            database=database.name,
+            session_count=requested_session_count,
+            session_template=SessionProto(creator_role=self.database_role),
+        )
+
+        observability_options = getattr(self._database, "observability_options", None)
+        with trace_call(
+            "CloudSpanner.FixedPool.BatchCreateSessions",
+            observability_options=observability_options,
+            metadata=metadata,
+        ) as span, MetricsCapture():
+            returned_session_count = 0
+            while not self._sessions.full():
+                request.session_count = requested_session_count - self._sessions.qsize()
+                add_span_event(
+                    span,
+                    f"Creating {request.session_count} sessions",
+                    span_event_attributes,
+                )
+                resp = api.batch_create_sessions(
+                    request=request,
+                    metadata=database.metadata_with_request_id(
+                        database._next_nth_request,
+                        1,
+                        metadata,
+                        span,
+                    ),
+                )
+
+                add_span_event(
+                    span,
+                    "Created sessions",
                    dict(count=len(resp.session)),
+                )
+
+                for session_pb in resp.session:
+                    session = self._new_session()
+                    session._session_id = session_pb.name.split("/")[-1]
+                    self._sessions.put(session)
+                    returned_session_count += 1
+
+            add_span_event(
+                span,
+                f"Requested for {requested_session_count} sessions, returned {returned_session_count}",
+                span_event_attributes,
             )
 
-        for session_pb in resp.session:
-            session = self._new_session()
-            session._session_id = session_pb.name.split("/")[-1]
-            self._sessions.put(session)
-
-    def get(self, timeout=None):  # pylint: disable=arguments-differ
+    def get(self, timeout=None):
         """Check a session out from the pool.
 
         :type timeout: int
@@ -187,16 +296,48 @@ def get(self, timeout=None):  # pylint: disable=arguments-differ
 
         :rtype: :class:`~google.cloud.spanner_v1.session.Session`
         :returns: an existing session from the pool, or a newly-created
                   session.
-        :raises: :exc:`six.moves.queue.Empty` if the queue is empty.
+        :raises: :exc:`queue.Empty` if the queue is empty.
         """
         if timeout is None:
             timeout = self.default_timeout
 
-        session = self._sessions.get(block=True, timeout=timeout)
+        start_time = time.time()
+        current_span = get_current_span()
+        span_event_attributes = {"kind": type(self).__name__}
+        add_span_event(current_span, "Acquiring session", span_event_attributes)
 
-        if not session.exists():
-            session = self._database.session()
-            session.create()
+        session = None
+        try:
+            add_span_event(
+                current_span,
+                "Waiting for a session to become available",
+                span_event_attributes,
+            )
+
+            session = self._sessions.get(block=True, timeout=timeout)
+            age = _NOW() - session.last_use_time
+
+            if age >= self._max_age and not session.exists():
+                add_span_event(
+                    current_span,
+                    "Session is not valid, recreating it",
+                    span_event_attributes,
+                )
+                session = self._new_session()
+                session.create()
+
+            span_event_attributes["session.id"] = session._session_id
+            span_event_attributes["time.elapsed"] = time.time() - start_time
+            add_span_event(current_span, "Acquired session", span_event_attributes)
+
+        except queue.Empty as e:
+            add_span_event(
+                current_span, "No sessions available in the pool", span_event_attributes
+            )
+            raise e
 
         return session
 
@@ -208,7 +349,7 @@ def put(self, session):
         :type session: :class:`~google.cloud.spanner_v1.session.Session`
         :param session: the session being returned.
 
-        :raises: :exc:`six.moves.queue.Full` if the queue is full.
+        :raises: :exc:`queue.Full` if the queue is full.
         """
         self._sessions.put_nowait(session)
 
@@ -242,10 +383,13 @@ class BurstyPool(AbstractSessionPool):
     :type labels: dict (str -> str) or None
     :param labels: (Optional) user-assigned labels for sessions created
                    by the pool.
+
+    :type database_role: str
+    :param database_role: (Optional) user-assigned database_role for the session.
     """
 
-    def __init__(self, target_size=10, labels=None):
-        super(BurstyPool, self).__init__(labels=labels)
+    def __init__(self, target_size=10, labels=None, database_role=None):
+        super(BurstyPool, self).__init__(labels=labels, database_role=database_role)
         self.target_size = target_size
         self._database = None
         self._sessions = queue.LifoQueue(target_size)
@@ -254,10 +398,11 @@ def bind(self, database):
         """Associate the pool with a database.
:type database: :class:`~google.cloud.spanner_v1.database.Database` - :param database: database used by the pool: used to create sessions + :param database: database used by the pool to create sessions when needed. """ self._database = database + self._database_role = self._database_role or self._database.database_role def get(self): """Check a session out from the pool. @@ -266,13 +411,32 @@ def get(self): :returns: an existing session from the pool, or a newly-created session. """ + current_span = get_current_span() + span_event_attributes = {"kind": type(self).__name__} + add_span_event(current_span, "Acquiring session", span_event_attributes) + try: + add_span_event( + current_span, + "Waiting for a session to become available", + span_event_attributes, + ) session = self._sessions.get_nowait() except queue.Empty: + add_span_event( + current_span, + "No sessions available in pool. Creating session", + span_event_attributes, + ) session = self._new_session() session.create() else: if not session.exists(): + add_span_event( + current_span, + "Session is not valid, recreating it", + span_event_attributes, + ) session = self._new_session() session.create() return session @@ -290,6 +454,7 @@ def put(self, session): self._sessions.put_nowait(session) except queue.Full: try: + # Sessions from pools are never multiplexed, so we can always delete them session.delete() except NotFound: pass @@ -314,7 +479,7 @@ class PingingPool(AbstractSessionPool): - Sessions are used in "round-robin" order (LRU first). - "Pings" existing sessions in the background after a specified interval - via an API call (``session.exists()``). + via an API call (``session.ping()``). - Blocks, with a timeout, when :meth:`get` is called on an empty pool. Raises after timing out. @@ -339,10 +504,20 @@ class PingingPool(AbstractSessionPool): :type labels: dict (str -> str) or None :param labels: (Optional) user-assigned labels for sessions created by the pool. + + :type database_role: str + :param database_role: (Optional) user-assigned database_role for the session. """ - def __init__(self, size=10, default_timeout=10, ping_interval=3000, labels=None): - super(PingingPool, self).__init__(labels=labels) + def __init__( + self, + size=10, + default_timeout=10, + ping_interval=3000, + labels=None, + database_role=None, + ): + super(PingingPool, self).__init__(labels=labels, database_role=database_role) self.size = size self.default_timeout = default_timeout self._delta = datetime.timedelta(seconds=ping_interval) @@ -352,25 +527,77 @@ def bind(self, database): """Associate the pool with a database. :type database: :class:`~google.cloud.spanner_v1.database.Database` - :param database: database used by the pool: used to create sessions + :param database: database used by the pool to create sessions when needed. 
""" self._database = database api = database.spanner_api metadata = _metadata_with_prefix(database.name) - created_session_count = 0 + if database._route_to_leader_enabled: + metadata.append( + _metadata_with_leader_aware_routing(database._route_to_leader_enabled) + ) + self._database_role = self._database_role or self._database.database_role + + request = BatchCreateSessionsRequest( + database=database.name, + session_count=self.size, + session_template=SessionProto(creator_role=self.database_role), + ) - while created_session_count < self.size: - resp = api.batch_create_sessions( - database.name, self.size - created_session_count, metadata=metadata + span_event_attributes = {"kind": type(self).__name__} + current_span = get_current_span() + requested_session_count = request.session_count + if requested_session_count <= 0: + add_span_event( + current_span, + f"Invalid session pool size({requested_session_count}) <= 0", + span_event_attributes, ) - for session_pb in resp.session: - session = self._new_session() - session._session_id = session_pb.name.split("/")[-1] - self.put(session) - created_session_count += len(resp.session) + return - def get(self, timeout=None): # pylint: disable=arguments-differ + add_span_event( + current_span, + f"Requesting {requested_session_count} sessions", + span_event_attributes, + ) + + observability_options = getattr(self._database, "observability_options", None) + with trace_call( + "CloudSpanner.PingingPool.BatchCreateSessions", + observability_options=observability_options, + metadata=metadata, + ) as span, MetricsCapture(): + returned_session_count = 0 + while returned_session_count < self.size: + resp = api.batch_create_sessions( + request=request, + metadata=database.metadata_with_request_id( + database._next_nth_request, + 1, + metadata, + span, + ), + ) + + add_span_event( + span, + f"Created {len(resp.session)} sessions", + ) + + for session_pb in resp.session: + session = self._new_session() + returned_session_count += 1 + session._session_id = session_pb.name.split("/")[-1] + self.put(session) + + add_span_event( + span, + f"Requested for {requested_session_count} sessions, returned {returned_session_count}", + span_event_attributes, + ) + + def get(self, timeout=None): """Check a session out from the pool. :type timeout: int @@ -379,18 +606,48 @@ def get(self, timeout=None): # pylint: disable=arguments-differ :rtype: :class:`~google.cloud.spanner_v1.session.Session` :returns: an existing session from the pool, or a newly-created session. - :raises: :exc:`six.moves.queue.Empty` if the queue is empty. + :raises: :exc:`queue.Empty` if the queue is empty. """ if timeout is None: timeout = self.default_timeout - ping_after, session = self._sessions.get(block=True, timeout=timeout) + start_time = time.time() + span_event_attributes = {"kind": type(self).__name__} + current_span = get_current_span() + add_span_event( + current_span, + "Waiting for a session to become available", + span_event_attributes, + ) + + ping_after = None + session = None + try: + ping_after, session = self._sessions.get(block=True, timeout=timeout) + except queue.Empty as e: + add_span_event( + current_span, + "No sessions available in the pool within the specified timeout", + span_event_attributes, + ) + raise e if _NOW() > ping_after: + # Using session.exists() guarantees the returned session exists. + # session.ping() uses a cached result in the backend which could + # result in a recently deleted session being returned. 
             if not session.exists():
                 session = self._new_session()
                 session.create()
 
+        span_event_attributes.update(
+            {
+                "time.elapsed": time.time() - start_time,
+                "session.id": session._session_id,
+                "kind": "pinging_pool",
+            }
+        )
+        add_span_event(current_span, "Acquired session", span_event_attributes)
         return session
 
     def put(self, session):
@@ -401,7 +658,7 @@ def put(self, session):
         :type session: :class:`~google.cloud.spanner_v1.session.Session`
         :param session: the session being returned.
 
-        :raises: :exc:`six.moves.queue.Full` if the queue is full.
+        :raises: :exc:`queue.Full` if the queue is full.
         """
         self._sessions.put_nowait((_NOW() + self._delta, session))
 
@@ -430,7 +687,9 @@ def ping(self):
                 # Re-add to queue with existing expiration
                 self._sessions.put((ping_after, session))
                 break
-            if not session.exists():  # stale
+            try:
+                session.ping()
+            except NotFound:
                 session = self._new_session()
                 session.create()
             # Re-add to queue with new expiration
@@ -440,6 +699,10 @@ def ping(self):
 class TransactionPingingPool(PingingPool):
     """Concrete session pool implementation:
 
+    Deprecated: TransactionPingingPool no longer begins a transaction for each of its sessions at startup.
+    Hence the TransactionPingingPool is the same as :class:`PingingPool` and may be removed in the future.
+
+
     In addition to the features of :class:`PingingPool`, this class
     creates and begins a transaction for each of its sessions at startup.
 
@@ -462,13 +725,33 @@ class TransactionPingingPool(PingingPool):
     :type labels: dict (str -> str) or None
     :param labels: (Optional) user-assigned labels for sessions created
                    by the pool.
+
+    :type database_role: str
+    :param database_role: (Optional) user-assigned database_role for the session.
     """
 
-    def __init__(self, size=10, default_timeout=10, ping_interval=3000, labels=None):
+    def __init__(
+        self,
+        size=10,
+        default_timeout=10,
+        ping_interval=3000,
+        labels=None,
+        database_role=None,
+    ):
+        """This issues a deprecation warning on initialization."""
+        warn(
+            f"{self.__class__.__name__} is deprecated.",
+            DeprecationWarning,
+            stacklevel=2,
+        )
         self._pending_sessions = queue.Queue()
 
         super(TransactionPingingPool, self).__init__(
-            size, default_timeout, ping_interval, labels=labels
+            size,
+            default_timeout,
+            ping_interval,
+            labels=labels,
+            database_role=database_role,
         )
 
         self.begin_pending_transactions()
@@ -477,10 +760,11 @@ def bind(self, database):
         """Associate the pool with a database.
 
         :type database: :class:`~google.cloud.spanner_v1.database.Database`
-        :param database: database used by the pool: used to create sessions
+        :param database: database used by the pool to create sessions
                          when needed.
         """
         super(TransactionPingingPool, self).bind(database)
+        self._database_role = self._database_role or self._database.database_role
         self.begin_pending_transactions()
 
     def put(self, session):
@@ -491,7 +775,7 @@ def put(self, session):
         :type session: :class:`~google.cloud.spanner_v1.session.Session`
         :param session: the session being returned.
 
-        :raises: :exc:`six.moves.queue.Full` if the queue is full.
+        :raises: :exc:`queue.Full` if the queue is full.
         """
         if self._sessions.full():
             raise queue.Full
@@ -507,21 +791,24 @@ def begin_pending_transactions(self):
         """Begin all transactions for sessions added to the pool."""
         while not self._pending_sessions.empty():
             session = self._pending_sessions.get()
-            session._transaction.begin()
             super(TransactionPingingPool, self).put(session)
 
 
 class SessionCheckout(object):
     """Context manager: hold session checked out from a pool.
 
+    Deprecated.
Sessions should be checked out indirectly using context + managers or :meth:`~google.cloud.spanner_v1.database.Database.run_in_transaction`, + rather than checked out directly from the pool. + :type pool: concrete subclass of - :class:`~google.cloud.spanner_v1.session.AbstractSessionPool` + :class:`~google.cloud.spanner_v1.pool.AbstractSessionPool` :param pool: Pool from which to check out a session. :param kwargs: extra keyword arguments to be passed to :meth:`pool.get`. """ - _session = None # Not checked out until '__enter__'. + _session = None def __init__(self, pool, **kwargs): self._pool = pool diff --git a/google/cloud/spanner_v1/proto/__init__.py b/google/cloud/spanner_v1/proto/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/google/cloud/spanner_v1/proto/keys.proto b/google/cloud/spanner_v1/proto/keys.proto deleted file mode 100644 index d129255c45..0000000000 --- a/google/cloud/spanner_v1/proto/keys.proto +++ /dev/null @@ -1,163 +0,0 @@ -// Copyright 2020 Google LLC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -syntax = "proto3"; - -package google.spanner.v1; - -import "google/protobuf/struct.proto"; -import "google/api/annotations.proto"; - -option csharp_namespace = "Google.Cloud.Spanner.V1"; -option go_package = "google.golang.org/genproto/googleapis/spanner/v1;spanner"; -option java_multiple_files = true; -option java_outer_classname = "KeysProto"; -option java_package = "com.google.spanner.v1"; -option php_namespace = "Google\\Cloud\\Spanner\\V1"; - -// KeyRange represents a range of rows in a table or index. -// -// A range has a start key and an end key. These keys can be open or -// closed, indicating if the range includes rows with that key. -// -// Keys are represented by lists, where the ith value in the list -// corresponds to the ith component of the table or index primary key. -// Individual values are encoded as described -// [here][google.spanner.v1.TypeCode]. -// -// For example, consider the following table definition: -// -// CREATE TABLE UserEvents ( -// UserName STRING(MAX), -// EventDate STRING(10) -// ) PRIMARY KEY(UserName, EventDate); -// -// The following keys name rows in this table: -// -// ["Bob", "2014-09-23"] -// ["Alfred", "2015-06-12"] -// -// Since the `UserEvents` table's `PRIMARY KEY` clause names two -// columns, each `UserEvents` key has two elements; the first is the -// `UserName`, and the second is the `EventDate`. -// -// Key ranges with multiple components are interpreted -// lexicographically by component using the table or index key's declared -// sort order. For example, the following range returns all events for -// user `"Bob"` that occurred in the year 2015: -// -// "start_closed": ["Bob", "2015-01-01"] -// "end_closed": ["Bob", "2015-12-31"] -// -// Start and end keys can omit trailing key components. 
This affects the -// inclusion and exclusion of rows that exactly match the provided key -// components: if the key is closed, then rows that exactly match the -// provided components are included; if the key is open, then rows -// that exactly match are not included. -// -// For example, the following range includes all events for `"Bob"` that -// occurred during and after the year 2000: -// -// "start_closed": ["Bob", "2000-01-01"] -// "end_closed": ["Bob"] -// -// The next example retrieves all events for `"Bob"`: -// -// "start_closed": ["Bob"] -// "end_closed": ["Bob"] -// -// To retrieve events before the year 2000: -// -// "start_closed": ["Bob"] -// "end_open": ["Bob", "2000-01-01"] -// -// The following range includes all rows in the table: -// -// "start_closed": [] -// "end_closed": [] -// -// This range returns all users whose `UserName` begins with any -// character from A to C: -// -// "start_closed": ["A"] -// "end_open": ["D"] -// -// This range returns all users whose `UserName` begins with B: -// -// "start_closed": ["B"] -// "end_open": ["C"] -// -// Key ranges honor column sort order. For example, suppose a table is -// defined as follows: -// -// CREATE TABLE DescendingSortedTable { -// Key INT64, -// ... -// ) PRIMARY KEY(Key DESC); -// -// The following range retrieves all rows with key values between 1 -// and 100 inclusive: -// -// "start_closed": ["100"] -// "end_closed": ["1"] -// -// Note that 100 is passed as the start, and 1 is passed as the end, -// because `Key` is a descending column in the schema. -message KeyRange { - // The start key must be provided. It can be either closed or open. - oneof start_key_type { - // If the start is closed, then the range includes all rows whose - // first `len(start_closed)` key columns exactly match `start_closed`. - google.protobuf.ListValue start_closed = 1; - - // If the start is open, then the range excludes rows whose first - // `len(start_open)` key columns exactly match `start_open`. - google.protobuf.ListValue start_open = 2; - } - - // The end key must be provided. It can be either closed or open. - oneof end_key_type { - // If the end is closed, then the range includes all rows whose - // first `len(end_closed)` key columns exactly match `end_closed`. - google.protobuf.ListValue end_closed = 3; - - // If the end is open, then the range excludes rows whose first - // `len(end_open)` key columns exactly match `end_open`. - google.protobuf.ListValue end_open = 4; - } -} - -// `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All -// the keys are expected to be in the same table or index. The keys need -// not be sorted in any particular way. -// -// If the same key is specified multiple times in the set (for example -// if two ranges, two keys, or a key and a range overlap), Cloud Spanner -// behaves as if the key were only specified once. -message KeySet { - // A list of specific keys. Entries in `keys` should have exactly as - // many elements as there are columns in the primary or index key - // with which this `KeySet` is used. Individual key values are - // encoded as described [here][google.spanner.v1.TypeCode]. - repeated google.protobuf.ListValue keys = 1; - - // A list of key ranges. See [KeyRange][google.spanner.v1.KeyRange] for more information about - // key range specifications. - repeated KeyRange ranges = 2; - - // For convenience `all` can be set to `true` to indicate that this - // `KeySet` matches all keys in the table or index. 
Note that any keys - // specified in `keys` or `ranges` are only yielded once. - bool all = 3; -} diff --git a/google/cloud/spanner_v1/proto/keys_pb2.py b/google/cloud/spanner_v1/proto/keys_pb2.py deleted file mode 100644 index 9d38124847..0000000000 --- a/google/cloud/spanner_v1/proto/keys_pb2.py +++ /dev/null @@ -1,448 +0,0 @@ -# -*- coding: utf-8 -*- -# Generated by the protocol buffer compiler. DO NOT EDIT! -# source: google/cloud/spanner_v1/proto/keys.proto - -import sys - -_b = sys.version_info[0] < 3 and (lambda x: x) or (lambda x: x.encode("latin1")) -from google.protobuf import descriptor as _descriptor -from google.protobuf import message as _message -from google.protobuf import reflection as _reflection -from google.protobuf import symbol_database as _symbol_database - -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - -from google.protobuf import struct_pb2 as google_dot_protobuf_dot_struct__pb2 -from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2 - - -DESCRIPTOR = _descriptor.FileDescriptor( - name="google/cloud/spanner_v1/proto/keys.proto", - package="google.spanner.v1", - syntax="proto3", - serialized_options=_b( - "\n\025com.google.spanner.v1B\tKeysProtoP\001Z8google.golang.org/genproto/googleapis/spanner/v1;spanner\252\002\027Google.Cloud.Spanner.V1\312\002\027Google\\Cloud\\Spanner\\V1" - ), - serialized_pb=_b( - '\n(google/cloud/spanner_v1/proto/keys.proto\x12\x11google.spanner.v1\x1a\x1cgoogle/protobuf/struct.proto\x1a\x1cgoogle/api/annotations.proto"\xf4\x01\n\x08KeyRange\x12\x32\n\x0cstart_closed\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.ListValueH\x00\x12\x30\n\nstart_open\x18\x02 \x01(\x0b\x32\x1a.google.protobuf.ListValueH\x00\x12\x30\n\nend_closed\x18\x03 \x01(\x0b\x32\x1a.google.protobuf.ListValueH\x01\x12.\n\x08\x65nd_open\x18\x04 \x01(\x0b\x32\x1a.google.protobuf.ListValueH\x01\x42\x10\n\x0estart_key_typeB\x0e\n\x0c\x65nd_key_type"l\n\x06KeySet\x12(\n\x04keys\x18\x01 \x03(\x0b\x32\x1a.google.protobuf.ListValue\x12+\n\x06ranges\x18\x02 \x03(\x0b\x32\x1b.google.spanner.v1.KeyRange\x12\x0b\n\x03\x61ll\x18\x03 \x01(\x08\x42\x92\x01\n\x15\x63om.google.spanner.v1B\tKeysProtoP\x01Z8google.golang.org/genproto/googleapis/spanner/v1;spanner\xaa\x02\x17Google.Cloud.Spanner.V1\xca\x02\x17Google\\Cloud\\Spanner\\V1b\x06proto3' - ), - dependencies=[ - google_dot_protobuf_dot_struct__pb2.DESCRIPTOR, - google_dot_api_dot_annotations__pb2.DESCRIPTOR, - ], -) - - -_KEYRANGE = _descriptor.Descriptor( - name="KeyRange", - full_name="google.spanner.v1.KeyRange", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="start_closed", - full_name="google.spanner.v1.KeyRange.start_closed", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="start_open", - full_name="google.spanner.v1.KeyRange.start_open", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="end_closed", - full_name="google.spanner.v1.KeyRange.end_closed", - index=2, - number=3, - type=11, - cpp_type=10, - 
label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="end_open", - full_name="google.spanner.v1.KeyRange.end_open", - index=3, - number=4, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[ - _descriptor.OneofDescriptor( - name="start_key_type", - full_name="google.spanner.v1.KeyRange.start_key_type", - index=0, - containing_type=None, - fields=[], - ), - _descriptor.OneofDescriptor( - name="end_key_type", - full_name="google.spanner.v1.KeyRange.end_key_type", - index=1, - containing_type=None, - fields=[], - ), - ], - serialized_start=124, - serialized_end=368, -) - - -_KEYSET = _descriptor.Descriptor( - name="KeySet", - full_name="google.spanner.v1.KeySet", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="keys", - full_name="google.spanner.v1.KeySet.keys", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="ranges", - full_name="google.spanner.v1.KeySet.ranges", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="all", - full_name="google.spanner.v1.KeySet.all", - index=2, - number=3, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=370, - serialized_end=478, -) - -_KEYRANGE.fields_by_name[ - "start_closed" -].message_type = google_dot_protobuf_dot_struct__pb2._LISTVALUE -_KEYRANGE.fields_by_name[ - "start_open" -].message_type = google_dot_protobuf_dot_struct__pb2._LISTVALUE -_KEYRANGE.fields_by_name[ - "end_closed" -].message_type = google_dot_protobuf_dot_struct__pb2._LISTVALUE -_KEYRANGE.fields_by_name[ - "end_open" -].message_type = google_dot_protobuf_dot_struct__pb2._LISTVALUE -_KEYRANGE.oneofs_by_name["start_key_type"].fields.append( - _KEYRANGE.fields_by_name["start_closed"] -) -_KEYRANGE.fields_by_name["start_closed"].containing_oneof = _KEYRANGE.oneofs_by_name[ - "start_key_type" -] -_KEYRANGE.oneofs_by_name["start_key_type"].fields.append( - _KEYRANGE.fields_by_name["start_open"] -) -_KEYRANGE.fields_by_name["start_open"].containing_oneof = _KEYRANGE.oneofs_by_name[ - "start_key_type" -] -_KEYRANGE.oneofs_by_name["end_key_type"].fields.append( - _KEYRANGE.fields_by_name["end_closed"] -) 
-_KEYRANGE.fields_by_name["end_closed"].containing_oneof = _KEYRANGE.oneofs_by_name[ - "end_key_type" -] -_KEYRANGE.oneofs_by_name["end_key_type"].fields.append( - _KEYRANGE.fields_by_name["end_open"] -) -_KEYRANGE.fields_by_name["end_open"].containing_oneof = _KEYRANGE.oneofs_by_name[ - "end_key_type" -] -_KEYSET.fields_by_name[ - "keys" -].message_type = google_dot_protobuf_dot_struct__pb2._LISTVALUE -_KEYSET.fields_by_name["ranges"].message_type = _KEYRANGE -DESCRIPTOR.message_types_by_name["KeyRange"] = _KEYRANGE -DESCRIPTOR.message_types_by_name["KeySet"] = _KEYSET -_sym_db.RegisterFileDescriptor(DESCRIPTOR) - -KeyRange = _reflection.GeneratedProtocolMessageType( - "KeyRange", - (_message.Message,), - dict( - DESCRIPTOR=_KEYRANGE, - __module__="google.cloud.spanner_v1.proto.keys_pb2", - __doc__="""KeyRange represents a range of rows in a table or index. - - A range has a start key and an end key. These keys can be open or - closed, indicating if the range includes rows with that key. - - Keys are represented by lists, where the ith value in the list - corresponds to the ith component of the table or index primary key. - Individual values are encoded as described - [here][google.spanner.v1.TypeCode]. - - For example, consider the following table definition: - - :: - - CREATE TABLE UserEvents ( - UserName STRING(MAX), - EventDate STRING(10) - ) PRIMARY KEY(UserName, EventDate); - - The following keys name rows in this table: - - :: - - ["Bob", "2014-09-23"] - ["Alfred", "2015-06-12"] - - Since the ``UserEvents`` table's ``PRIMARY KEY`` clause names two - columns, each ``UserEvents`` key has two elements; the first is the - ``UserName``, and the second is the ``EventDate``. - - Key ranges with multiple components are interpreted lexicographically by - component using the table or index key's declared sort order. For - example, the following range returns all events for user ``"Bob"`` that - occurred in the year 2015: - - :: - - "start_closed": ["Bob", "2015-01-01"] - "end_closed": ["Bob", "2015-12-31"] - - Start and end keys can omit trailing key components. This affects the - inclusion and exclusion of rows that exactly match the provided key - components: if the key is closed, then rows that exactly match the - provided components are included; if the key is open, then rows that - exactly match are not included. - - For example, the following range includes all events for ``"Bob"`` that - occurred during and after the year 2000: - - :: - - "start_closed": ["Bob", "2000-01-01"] - "end_closed": ["Bob"] - - The next example retrieves all events for ``"Bob"``: - - :: - - "start_closed": ["Bob"] - "end_closed": ["Bob"] - - To retrieve events before the year 2000: - - :: - - "start_closed": ["Bob"] - "end_open": ["Bob", "2000-01-01"] - - The following range includes all rows in the table: - - :: - - "start_closed": [] - "end_closed": [] - - This range returns all users whose ``UserName`` begins with any - character from A to C: - - :: - - "start_closed": ["A"] - "end_open": ["D"] - - This range returns all users whose ``UserName`` begins with B: - - :: - - "start_closed": ["B"] - "end_open": ["C"] - - Key ranges honor column sort order. For example, suppose a table is - defined as follows: - - :: - - CREATE TABLE DescendingSortedTable { - Key INT64, - ... 
- ) PRIMARY KEY(Key DESC); - - The following range retrieves all rows with key values between 1 and 100 - inclusive: - - :: - - "start_closed": ["100"] - "end_closed": ["1"] - - Note that 100 is passed as the start, and 1 is passed as the end, - because ``Key`` is a descending column in the schema. - - - Attributes: - start_key_type: - The start key must be provided. It can be either closed or - open. - start_closed: - If the start is closed, then the range includes all rows whose - first ``len(start_closed)`` key columns exactly match - ``start_closed``. - start_open: - If the start is open, then the range excludes rows whose first - ``len(start_open)`` key columns exactly match ``start_open``. - end_key_type: - The end key must be provided. It can be either closed or open. - end_closed: - If the end is closed, then the range includes all rows whose - first ``len(end_closed)`` key columns exactly match - ``end_closed``. - end_open: - If the end is open, then the range excludes rows whose first - ``len(end_open)`` key columns exactly match ``end_open``. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.KeyRange) - ), -) -_sym_db.RegisterMessage(KeyRange) - -KeySet = _reflection.GeneratedProtocolMessageType( - "KeySet", - (_message.Message,), - dict( - DESCRIPTOR=_KEYSET, - __module__="google.cloud.spanner_v1.proto.keys_pb2", - __doc__="""\ ``KeySet`` defines a collection of Cloud Spanner keys - and/or key ranges. All the keys are expected to be in the same table or - index. The keys need not be sorted in any particular way. - - If the same key is specified multiple times in the set (for example if - two ranges, two keys, or a key and a range overlap), Cloud Spanner - behaves as if the key were only specified once. - - - Attributes: - keys: - A list of specific keys. Entries in ``keys`` should have - exactly as many elements as there are columns in the primary - or index key with which this ``KeySet`` is used. Individual - key values are encoded as described - [here][google.spanner.v1.TypeCode]. - ranges: - A list of key ranges. See - [KeyRange][google.spanner.v1.KeyRange] for more information - about key range specifications. - all: - For convenience ``all`` can be set to ``true`` to indicate - that this ``KeySet`` matches all keys in the table or index. - Note that any keys specified in ``keys`` or ``ranges`` are - only yielded once. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.KeySet) - ), -) -_sym_db.RegisterMessage(KeySet) - - -DESCRIPTOR._options = None -# @@protoc_insertion_point(module_scope) diff --git a/google/cloud/spanner_v1/proto/keys_pb2_grpc.py b/google/cloud/spanner_v1/proto/keys_pb2_grpc.py deleted file mode 100644 index 07cb78fe03..0000000000 --- a/google/cloud/spanner_v1/proto/keys_pb2_grpc.py +++ /dev/null @@ -1,2 +0,0 @@ -# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! -import grpc diff --git a/google/cloud/spanner_v1/proto/mutation.proto b/google/cloud/spanner_v1/proto/mutation.proto deleted file mode 100644 index 2c675830f0..0000000000 --- a/google/cloud/spanner_v1/proto/mutation.proto +++ /dev/null @@ -1,106 +0,0 @@ -// Copyright 2020 Google LLC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -syntax = "proto3"; - -package google.spanner.v1; - -import "google/protobuf/struct.proto"; -import "google/spanner/v1/keys.proto"; -import "google/api/annotations.proto"; - -option csharp_namespace = "Google.Cloud.Spanner.V1"; -option go_package = "google.golang.org/genproto/googleapis/spanner/v1;spanner"; -option java_multiple_files = true; -option java_outer_classname = "MutationProto"; -option java_package = "com.google.spanner.v1"; -option php_namespace = "Google\\Cloud\\Spanner\\V1"; - -// A modification to one or more Cloud Spanner rows. Mutations can be -// applied to a Cloud Spanner database by sending them in a -// [Commit][google.spanner.v1.Spanner.Commit] call. -message Mutation { - // Arguments to [insert][google.spanner.v1.Mutation.insert], [update][google.spanner.v1.Mutation.update], [insert_or_update][google.spanner.v1.Mutation.insert_or_update], and - // [replace][google.spanner.v1.Mutation.replace] operations. - message Write { - // Required. The table whose rows will be written. - string table = 1; - - // The names of the columns in [table][google.spanner.v1.Mutation.Write.table] to be written. - // - // The list of columns must contain enough columns to allow - // Cloud Spanner to derive values for all primary key columns in the - // row(s) to be modified. - repeated string columns = 2; - - // The values to be written. `values` can contain more than one - // list of values. If it does, then multiple rows are written, one - // for each entry in `values`. Each list in `values` must have - // exactly as many entries as there are entries in [columns][google.spanner.v1.Mutation.Write.columns] - // above. Sending multiple lists is equivalent to sending multiple - // `Mutation`s, each containing one `values` entry and repeating - // [table][google.spanner.v1.Mutation.Write.table] and [columns][google.spanner.v1.Mutation.Write.columns]. Individual values in each list are - // encoded as described [here][google.spanner.v1.TypeCode]. - repeated google.protobuf.ListValue values = 3; - } - - // Arguments to [delete][google.spanner.v1.Mutation.delete] operations. - message Delete { - // Required. The table whose rows will be deleted. - string table = 1; - - // Required. The primary keys of the rows within [table][google.spanner.v1.Mutation.Delete.table] to delete. The - // primary keys must be specified in the order in which they appear in the - // `PRIMARY KEY()` clause of the table's equivalent DDL statement (the DDL - // statement used to create the table). - // Delete is idempotent. The transaction will succeed even if some or all - // rows do not exist. - KeySet key_set = 2; - } - - // Required. The operation to perform. - oneof operation { - // Insert new rows in a table. If any of the rows already exist, - // the write or transaction fails with error `ALREADY_EXISTS`. - Write insert = 1; - - // Update existing rows in a table. If any of the rows does not - // already exist, the transaction fails with error `NOT_FOUND`. 
- Write update = 2; - - // Like [insert][google.spanner.v1.Mutation.insert], except that if the row already exists, then - // its column values are overwritten with the ones provided. Any - // column values not explicitly written are preserved. - // - // When using [insert_or_update][google.spanner.v1.Mutation.insert_or_update], just as when using [insert][google.spanner.v1.Mutation.insert], all `NOT - // NULL` columns in the table must be given a value. This holds true - // even when the row already exists and will therefore actually be updated. - Write insert_or_update = 3; - - // Like [insert][google.spanner.v1.Mutation.insert], except that if the row already exists, it is - // deleted, and the column values provided are inserted - // instead. Unlike [insert_or_update][google.spanner.v1.Mutation.insert_or_update], this means any values not - // explicitly written become `NULL`. - // - // In an interleaved table, if you create the child table with the - // `ON DELETE CASCADE` annotation, then replacing a parent row - // also deletes the child rows. Otherwise, you must delete the - // child rows before you replace the parent row. - Write replace = 4; - - // Delete rows from a table. Succeeds whether or not the named - // rows were present. - Delete delete = 5; - } -} diff --git a/google/cloud/spanner_v1/proto/mutation_pb2.py b/google/cloud/spanner_v1/proto/mutation_pb2.py deleted file mode 100644 index b6ad0429b8..0000000000 --- a/google/cloud/spanner_v1/proto/mutation_pb2.py +++ /dev/null @@ -1,444 +0,0 @@ -# -*- coding: utf-8 -*- -# Generated by the protocol buffer compiler. DO NOT EDIT! -# source: google/cloud/spanner_v1/proto/mutation.proto - -import sys - -_b = sys.version_info[0] < 3 and (lambda x: x) or (lambda x: x.encode("latin1")) -from google.protobuf import descriptor as _descriptor -from google.protobuf import message as _message -from google.protobuf import reflection as _reflection -from google.protobuf import symbol_database as _symbol_database - -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - -from google.protobuf import struct_pb2 as google_dot_protobuf_dot_struct__pb2 -from google.cloud.spanner_v1.proto import ( - keys_pb2 as google_dot_cloud_dot_spanner__v1_dot_proto_dot_keys__pb2, -) -from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2 - - -DESCRIPTOR = _descriptor.FileDescriptor( - name="google/cloud/spanner_v1/proto/mutation.proto", - package="google.spanner.v1", - syntax="proto3", - serialized_options=_b( - "\n\025com.google.spanner.v1B\rMutationProtoP\001Z8google.golang.org/genproto/googleapis/spanner/v1;spanner\252\002\027Google.Cloud.Spanner.V1\312\002\027Google\\Cloud\\Spanner\\V1" - ), - serialized_pb=_b( - '\n,google/cloud/spanner_v1/proto/mutation.proto\x12\x11google.spanner.v1\x1a\x1cgoogle/protobuf/struct.proto\x1a(google/cloud/spanner_v1/proto/keys.proto\x1a\x1cgoogle/api/annotations.proto"\xc6\x03\n\x08Mutation\x12\x33\n\x06insert\x18\x01 \x01(\x0b\x32!.google.spanner.v1.Mutation.WriteH\x00\x12\x33\n\x06update\x18\x02 \x01(\x0b\x32!.google.spanner.v1.Mutation.WriteH\x00\x12=\n\x10insert_or_update\x18\x03 \x01(\x0b\x32!.google.spanner.v1.Mutation.WriteH\x00\x12\x34\n\x07replace\x18\x04 \x01(\x0b\x32!.google.spanner.v1.Mutation.WriteH\x00\x12\x34\n\x06\x64\x65lete\x18\x05 \x01(\x0b\x32".google.spanner.v1.Mutation.DeleteH\x00\x1aS\n\x05Write\x12\r\n\x05table\x18\x01 \x01(\t\x12\x0f\n\x07\x63olumns\x18\x02 \x03(\t\x12*\n\x06values\x18\x03 
\x03(\x0b\x32\x1a.google.protobuf.ListValue\x1a\x43\n\x06\x44\x65lete\x12\r\n\x05table\x18\x01 \x01(\t\x12*\n\x07key_set\x18\x02 \x01(\x0b\x32\x19.google.spanner.v1.KeySetB\x0b\n\toperationB\x96\x01\n\x15\x63om.google.spanner.v1B\rMutationProtoP\x01Z8google.golang.org/genproto/googleapis/spanner/v1;spanner\xaa\x02\x17Google.Cloud.Spanner.V1\xca\x02\x17Google\\Cloud\\Spanner\\V1b\x06proto3' - ), - dependencies=[ - google_dot_protobuf_dot_struct__pb2.DESCRIPTOR, - google_dot_cloud_dot_spanner__v1_dot_proto_dot_keys__pb2.DESCRIPTOR, - google_dot_api_dot_annotations__pb2.DESCRIPTOR, - ], -) - - -_MUTATION_WRITE = _descriptor.Descriptor( - name="Write", - full_name="google.spanner.v1.Mutation.Write", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="table", - full_name="google.spanner.v1.Mutation.Write.table", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="columns", - full_name="google.spanner.v1.Mutation.Write.columns", - index=1, - number=2, - type=9, - cpp_type=9, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="values", - full_name="google.spanner.v1.Mutation.Write.values", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=459, - serialized_end=542, -) - -_MUTATION_DELETE = _descriptor.Descriptor( - name="Delete", - full_name="google.spanner.v1.Mutation.Delete", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="table", - full_name="google.spanner.v1.Mutation.Delete.table", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="key_set", - full_name="google.spanner.v1.Mutation.Delete.key_set", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=544, - serialized_end=611, -) - -_MUTATION = _descriptor.Descriptor( - name="Mutation", - full_name="google.spanner.v1.Mutation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="insert", - full_name="google.spanner.v1.Mutation.insert", - index=0, - 
number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="update", - full_name="google.spanner.v1.Mutation.update", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="insert_or_update", - full_name="google.spanner.v1.Mutation.insert_or_update", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="replace", - full_name="google.spanner.v1.Mutation.replace", - index=3, - number=4, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="delete", - full_name="google.spanner.v1.Mutation.delete", - index=4, - number=5, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[_MUTATION_WRITE, _MUTATION_DELETE], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[ - _descriptor.OneofDescriptor( - name="operation", - full_name="google.spanner.v1.Mutation.operation", - index=0, - containing_type=None, - fields=[], - ) - ], - serialized_start=170, - serialized_end=624, -) - -_MUTATION_WRITE.fields_by_name[ - "values" -].message_type = google_dot_protobuf_dot_struct__pb2._LISTVALUE -_MUTATION_WRITE.containing_type = _MUTATION -_MUTATION_DELETE.fields_by_name[ - "key_set" -].message_type = google_dot_cloud_dot_spanner__v1_dot_proto_dot_keys__pb2._KEYSET -_MUTATION_DELETE.containing_type = _MUTATION -_MUTATION.fields_by_name["insert"].message_type = _MUTATION_WRITE -_MUTATION.fields_by_name["update"].message_type = _MUTATION_WRITE -_MUTATION.fields_by_name["insert_or_update"].message_type = _MUTATION_WRITE -_MUTATION.fields_by_name["replace"].message_type = _MUTATION_WRITE -_MUTATION.fields_by_name["delete"].message_type = _MUTATION_DELETE -_MUTATION.oneofs_by_name["operation"].fields.append(_MUTATION.fields_by_name["insert"]) -_MUTATION.fields_by_name["insert"].containing_oneof = _MUTATION.oneofs_by_name[ - "operation" -] -_MUTATION.oneofs_by_name["operation"].fields.append(_MUTATION.fields_by_name["update"]) -_MUTATION.fields_by_name["update"].containing_oneof = _MUTATION.oneofs_by_name[ - "operation" -] -_MUTATION.oneofs_by_name["operation"].fields.append( - _MUTATION.fields_by_name["insert_or_update"] -) -_MUTATION.fields_by_name[ - "insert_or_update" -].containing_oneof = _MUTATION.oneofs_by_name["operation"] -_MUTATION.oneofs_by_name["operation"].fields.append(_MUTATION.fields_by_name["replace"]) -_MUTATION.fields_by_name["replace"].containing_oneof = 
_MUTATION.oneofs_by_name[ - "operation" -] -_MUTATION.oneofs_by_name["operation"].fields.append(_MUTATION.fields_by_name["delete"]) -_MUTATION.fields_by_name["delete"].containing_oneof = _MUTATION.oneofs_by_name[ - "operation" -] -DESCRIPTOR.message_types_by_name["Mutation"] = _MUTATION -_sym_db.RegisterFileDescriptor(DESCRIPTOR) - -Mutation = _reflection.GeneratedProtocolMessageType( - "Mutation", - (_message.Message,), - dict( - Write=_reflection.GeneratedProtocolMessageType( - "Write", - (_message.Message,), - dict( - DESCRIPTOR=_MUTATION_WRITE, - __module__="google.cloud.spanner_v1.proto.mutation_pb2", - __doc__="""Arguments to [insert][google.spanner.v1.Mutation.insert], - [update][google.spanner.v1.Mutation.update], - [insert\_or\_update][google.spanner.v1.Mutation.insert\_or\_update], and - [replace][google.spanner.v1.Mutation.replace] operations. - - - Attributes: - table: - Required. The table whose rows will be written. - columns: - The names of the columns in - [table][google.spanner.v1.Mutation.Write.table] to be written. - The list of columns must contain enough columns to allow Cloud - Spanner to derive values for all primary key columns in the - row(s) to be modified. - values: - The values to be written. ``values`` can contain more than one - list of values. If it does, then multiple rows are written, - one for each entry in ``values``. Each list in ``values`` must - have exactly as many entries as there are entries in - [columns][google.spanner.v1.Mutation.Write.columns] above. - Sending multiple lists is equivalent to sending multiple - ``Mutation``\ s, each containing one ``values`` entry and - repeating [table][google.spanner.v1.Mutation.Write.table] and - [columns][google.spanner.v1.Mutation.Write.columns]. - Individual values in each list are encoded as described - [here][google.spanner.v1.TypeCode]. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.Mutation.Write) - ), - ), - Delete=_reflection.GeneratedProtocolMessageType( - "Delete", - (_message.Message,), - dict( - DESCRIPTOR=_MUTATION_DELETE, - __module__="google.cloud.spanner_v1.proto.mutation_pb2", - __doc__="""Arguments to [delete][google.spanner.v1.Mutation.delete] - operations. - - - Attributes: - table: - Required. The table whose rows will be deleted. - key_set: - Required. The primary keys of the rows within - [table][google.spanner.v1.Mutation.Delete.table] to delete. - The primary keys must be specified in the order in which they - appear in the ``PRIMARY KEY()`` clause of the table's - equivalent DDL statement (the DDL statement used to create the - table). Delete is idempotent. The transaction will succeed - even if some or all rows do not exist. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.Mutation.Delete) - ), - ), - DESCRIPTOR=_MUTATION, - __module__="google.cloud.spanner_v1.proto.mutation_pb2", - __doc__="""A modification to one or more Cloud Spanner rows. - Mutations can be applied to a Cloud Spanner database by sending them in - a [Commit][google.spanner.v1.Spanner.Commit] call. - - - Attributes: - operation: - Required. The operation to perform. - insert: - Insert new rows in a table. If any of the rows already exist, - the write or transaction fails with error ``ALREADY_EXISTS``. - update: - Update existing rows in a table. If any of the rows does not - already exist, the transaction fails with error ``NOT_FOUND``. 
- insert_or_update: - Like [insert][google.spanner.v1.Mutation.insert], except that - if the row already exists, then its column values are - overwritten with the ones provided. Any column values not - explicitly written are preserved. When using [insert\_or\_upd - ate][google.spanner.v1.Mutation.insert\_or\_update], just as - when using [insert][google.spanner.v1.Mutation.insert], all - ``NOT NULL`` columns in the table must be given a value. This - holds true even when the row already exists and will therefore - actually be updated. - replace: - Like [insert][google.spanner.v1.Mutation.insert], except that - if the row already exists, it is deleted, and the column - values provided are inserted instead. Unlike [insert\_or\_upda - te][google.spanner.v1.Mutation.insert\_or\_update], this means - any values not explicitly written become ``NULL``. In an - interleaved table, if you create the child table with the ``ON - DELETE CASCADE`` annotation, then replacing a parent row also - deletes the child rows. Otherwise, you must delete the child - rows before you replace the parent row. - delete: - Delete rows from a table. Succeeds whether or not the named - rows were present. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.Mutation) - ), -) -_sym_db.RegisterMessage(Mutation) -_sym_db.RegisterMessage(Mutation.Write) -_sym_db.RegisterMessage(Mutation.Delete) - - -DESCRIPTOR._options = None -# @@protoc_insertion_point(module_scope) diff --git a/google/cloud/spanner_v1/proto/mutation_pb2_grpc.py b/google/cloud/spanner_v1/proto/mutation_pb2_grpc.py deleted file mode 100644 index 07cb78fe03..0000000000 --- a/google/cloud/spanner_v1/proto/mutation_pb2_grpc.py +++ /dev/null @@ -1,2 +0,0 @@ -# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! -import grpc diff --git a/google/cloud/spanner_v1/proto/query_plan.proto b/google/cloud/spanner_v1/proto/query_plan.proto deleted file mode 100644 index 6ad13a77b0..0000000000 --- a/google/cloud/spanner_v1/proto/query_plan.proto +++ /dev/null @@ -1,128 +0,0 @@ -// Copyright 2020 Google LLC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -syntax = "proto3"; - -package google.spanner.v1; - -import "google/protobuf/struct.proto"; -import "google/api/annotations.proto"; - -option csharp_namespace = "Google.Cloud.Spanner.V1"; -option go_package = "google.golang.org/genproto/googleapis/spanner/v1;spanner"; -option java_multiple_files = true; -option java_outer_classname = "QueryPlanProto"; -option java_package = "com.google.spanner.v1"; -option php_namespace = "Google\\Cloud\\Spanner\\V1"; - -// Node information for nodes appearing in a [QueryPlan.plan_nodes][google.spanner.v1.QueryPlan.plan_nodes]. -message PlanNode { - // Metadata associated with a parent-child relationship appearing in a - // [PlanNode][google.spanner.v1.PlanNode]. - message ChildLink { - // The node to which the link points. - int32 child_index = 1; - - // The type of the link. 
For example, in Hash Joins this could be used to - // distinguish between the build child and the probe child, or in the case - // of the child being an output variable, to represent the tag associated - // with the output variable. - string type = 2; - - // Only present if the child node is [SCALAR][google.spanner.v1.PlanNode.Kind.SCALAR] and corresponds - // to an output variable of the parent node. The field carries the name of - // the output variable. - // For example, a `TableScan` operator that reads rows from a table will - // have child links to the `SCALAR` nodes representing the output variables - // created for each column that is read by the operator. The corresponding - // `variable` fields will be set to the variable names assigned to the - // columns. - string variable = 3; - } - - // Condensed representation of a node and its subtree. Only present for - // `SCALAR` [PlanNode(s)][google.spanner.v1.PlanNode]. - message ShortRepresentation { - // A string representation of the expression subtree rooted at this node. - string description = 1; - - // A mapping of (subquery variable name) -> (subquery node id) for cases - // where the `description` string of this node references a `SCALAR` - // subquery contained in the expression subtree rooted at this node. The - // referenced `SCALAR` subquery may not necessarily be a direct child of - // this node. - map<string, int32> subqueries = 2; - } - - // The kind of [PlanNode][google.spanner.v1.PlanNode]. Distinguishes between the two different kinds of - // nodes that can appear in a query plan. - enum Kind { - // Not specified. - KIND_UNSPECIFIED = 0; - - // Denotes a Relational operator node in the expression tree. Relational - // operators represent iterative processing of rows during query execution. - // For example, a `TableScan` operation that reads rows from a table. - RELATIONAL = 1; - - // Denotes a Scalar node in the expression tree. Scalar nodes represent - // non-iterable entities in the query plan. For example, constants or - // arithmetic operators appearing inside predicate expressions or references - // to column names. - SCALAR = 2; - } - - // The `PlanNode`'s index in [node list][google.spanner.v1.QueryPlan.plan_nodes]. - int32 index = 1; - - // Used to determine the type of node. May be needed for visualizing - // different kinds of nodes differently. For example, If the node is a - // [SCALAR][google.spanner.v1.PlanNode.Kind.SCALAR] node, it will have a condensed representation - // which can be used to directly embed a description of the node in its - // parent. - Kind kind = 2; - - // The display name for the node. - string display_name = 3; - - // List of child node `index`es and their relationship to this parent. - repeated ChildLink child_links = 4; - - // Condensed representation for [SCALAR][google.spanner.v1.PlanNode.Kind.SCALAR] nodes. - ShortRepresentation short_representation = 5; - - // Attributes relevant to the node contained in a group of key-value pairs. - // For example, a Parameter Reference node could have the following - // information in its metadata: - // - // { - // "parameter_reference": "param1", - // "parameter_type": "array" - // } - google.protobuf.Struct metadata = 6; - - // The execution statistics associated with the node, contained in a group of - // key-value pairs. Only present if the plan was returned as a result of a - // profile query. For example, number of executions, number of rows/time per - // execution etc.
- google.protobuf.Struct execution_stats = 7; -} - -// Contains an ordered list of nodes appearing in the query plan. -message QueryPlan { - // The nodes in the query plan. Plan nodes are returned in pre-order starting - // with the plan root. Each [PlanNode][google.spanner.v1.PlanNode]'s `id` corresponds to its index in - // `plan_nodes`. - repeated PlanNode plan_nodes = 1; -} diff --git a/google/cloud/spanner_v1/proto/query_plan_pb2.py b/google/cloud/spanner_v1/proto/query_plan_pb2.py deleted file mode 100644 index bc715b4549..0000000000 --- a/google/cloud/spanner_v1/proto/query_plan_pb2.py +++ /dev/null @@ -1,602 +0,0 @@ -# -*- coding: utf-8 -*- -# Generated by the protocol buffer compiler. DO NOT EDIT! -# source: google/cloud/spanner_v1/proto/query_plan.proto - -import sys - -_b = sys.version_info[0] < 3 and (lambda x: x) or (lambda x: x.encode("latin1")) -from google.protobuf import descriptor as _descriptor -from google.protobuf import message as _message -from google.protobuf import reflection as _reflection -from google.protobuf import symbol_database as _symbol_database - -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - -from google.protobuf import struct_pb2 as google_dot_protobuf_dot_struct__pb2 -from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2 - - -DESCRIPTOR = _descriptor.FileDescriptor( - name="google/cloud/spanner_v1/proto/query_plan.proto", - package="google.spanner.v1", - syntax="proto3", - serialized_options=_b( - "\n\025com.google.spanner.v1B\016QueryPlanProtoP\001Z8google.golang.org/genproto/googleapis/spanner/v1;spanner\252\002\027Google.Cloud.Spanner.V1\312\002\027Google\\Cloud\\Spanner\\V1" - ), - serialized_pb=_b( - '\n.google/cloud/spanner_v1/proto/query_plan.proto\x12\x11google.spanner.v1\x1a\x1cgoogle/protobuf/struct.proto\x1a\x1cgoogle/api/annotations.proto"\xf8\x04\n\x08PlanNode\x12\r\n\x05index\x18\x01 \x01(\x05\x12.\n\x04kind\x18\x02 \x01(\x0e\x32 .google.spanner.v1.PlanNode.Kind\x12\x14\n\x0c\x64isplay_name\x18\x03 \x01(\t\x12:\n\x0b\x63hild_links\x18\x04 \x03(\x0b\x32%.google.spanner.v1.PlanNode.ChildLink\x12M\n\x14short_representation\x18\x05 \x01(\x0b\x32/.google.spanner.v1.PlanNode.ShortRepresentation\x12)\n\x08metadata\x18\x06 \x01(\x0b\x32\x17.google.protobuf.Struct\x12\x30\n\x0f\x65xecution_stats\x18\x07 \x01(\x0b\x32\x17.google.protobuf.Struct\x1a@\n\tChildLink\x12\x13\n\x0b\x63hild_index\x18\x01 \x01(\x05\x12\x0c\n\x04type\x18\x02 \x01(\t\x12\x10\n\x08variable\x18\x03 \x01(\t\x1a\xb2\x01\n\x13ShortRepresentation\x12\x13\n\x0b\x64\x65scription\x18\x01 \x01(\t\x12S\n\nsubqueries\x18\x02 \x03(\x0b\x32?.google.spanner.v1.PlanNode.ShortRepresentation.SubqueriesEntry\x1a\x31\n\x0fSubqueriesEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\x05:\x02\x38\x01"8\n\x04Kind\x12\x14\n\x10KIND_UNSPECIFIED\x10\x00\x12\x0e\n\nRELATIONAL\x10\x01\x12\n\n\x06SCALAR\x10\x02"<\n\tQueryPlan\x12/\n\nplan_nodes\x18\x01 \x03(\x0b\x32\x1b.google.spanner.v1.PlanNodeB\x97\x01\n\x15\x63om.google.spanner.v1B\x0eQueryPlanProtoP\x01Z8google.golang.org/genproto/googleapis/spanner/v1;spanner\xaa\x02\x17Google.Cloud.Spanner.V1\xca\x02\x17Google\\Cloud\\Spanner\\V1b\x06proto3' - ), - dependencies=[ - google_dot_protobuf_dot_struct__pb2.DESCRIPTOR, - google_dot_api_dot_annotations__pb2.DESCRIPTOR, - ], -) - - -_PLANNODE_KIND = _descriptor.EnumDescriptor( - name="Kind", - full_name="google.spanner.v1.PlanNode.Kind", - filename=None, - file=DESCRIPTOR, - values=[ - _descriptor.EnumValueDescriptor( - 
name="KIND_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - ), - _descriptor.EnumValueDescriptor( - name="RELATIONAL", index=1, number=1, serialized_options=None, type=None - ), - _descriptor.EnumValueDescriptor( - name="SCALAR", index=2, number=2, serialized_options=None, type=None - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=706, - serialized_end=762, -) -_sym_db.RegisterEnumDescriptor(_PLANNODE_KIND) - - -_PLANNODE_CHILDLINK = _descriptor.Descriptor( - name="ChildLink", - full_name="google.spanner.v1.PlanNode.ChildLink", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="child_index", - full_name="google.spanner.v1.PlanNode.ChildLink.child_index", - index=0, - number=1, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="type", - full_name="google.spanner.v1.PlanNode.ChildLink.type", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="variable", - full_name="google.spanner.v1.PlanNode.ChildLink.variable", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=459, - serialized_end=523, -) - -_PLANNODE_SHORTREPRESENTATION_SUBQUERIESENTRY = _descriptor.Descriptor( - name="SubqueriesEntry", - full_name="google.spanner.v1.PlanNode.ShortRepresentation.SubqueriesEntry", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="key", - full_name="google.spanner.v1.PlanNode.ShortRepresentation.SubqueriesEntry.key", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="value", - full_name="google.spanner.v1.PlanNode.ShortRepresentation.SubqueriesEntry.value", - index=1, - number=2, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=_b("8\001"), - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=655, - serialized_end=704, -) - -_PLANNODE_SHORTREPRESENTATION = _descriptor.Descriptor( - name="ShortRepresentation", - full_name="google.spanner.v1.PlanNode.ShortRepresentation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - 
_descriptor.FieldDescriptor( - name="description", - full_name="google.spanner.v1.PlanNode.ShortRepresentation.description", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="subqueries", - full_name="google.spanner.v1.PlanNode.ShortRepresentation.subqueries", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[_PLANNODE_SHORTREPRESENTATION_SUBQUERIESENTRY], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=526, - serialized_end=704, -) - -_PLANNODE = _descriptor.Descriptor( - name="PlanNode", - full_name="google.spanner.v1.PlanNode", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="index", - full_name="google.spanner.v1.PlanNode.index", - index=0, - number=1, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="kind", - full_name="google.spanner.v1.PlanNode.kind", - index=1, - number=2, - type=14, - cpp_type=8, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="display_name", - full_name="google.spanner.v1.PlanNode.display_name", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="child_links", - full_name="google.spanner.v1.PlanNode.child_links", - index=3, - number=4, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="short_representation", - full_name="google.spanner.v1.PlanNode.short_representation", - index=4, - number=5, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="metadata", - full_name="google.spanner.v1.PlanNode.metadata", - index=5, - number=6, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="execution_stats", - 
full_name="google.spanner.v1.PlanNode.execution_stats", - index=6, - number=7, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[_PLANNODE_CHILDLINK, _PLANNODE_SHORTREPRESENTATION], - enum_types=[_PLANNODE_KIND], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=130, - serialized_end=762, -) - - -_QUERYPLAN = _descriptor.Descriptor( - name="QueryPlan", - full_name="google.spanner.v1.QueryPlan", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="plan_nodes", - full_name="google.spanner.v1.QueryPlan.plan_nodes", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ) - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=764, - serialized_end=824, -) - -_PLANNODE_CHILDLINK.containing_type = _PLANNODE -_PLANNODE_SHORTREPRESENTATION_SUBQUERIESENTRY.containing_type = ( - _PLANNODE_SHORTREPRESENTATION -) -_PLANNODE_SHORTREPRESENTATION.fields_by_name[ - "subqueries" -].message_type = _PLANNODE_SHORTREPRESENTATION_SUBQUERIESENTRY -_PLANNODE_SHORTREPRESENTATION.containing_type = _PLANNODE -_PLANNODE.fields_by_name["kind"].enum_type = _PLANNODE_KIND -_PLANNODE.fields_by_name["child_links"].message_type = _PLANNODE_CHILDLINK -_PLANNODE.fields_by_name[ - "short_representation" -].message_type = _PLANNODE_SHORTREPRESENTATION -_PLANNODE.fields_by_name[ - "metadata" -].message_type = google_dot_protobuf_dot_struct__pb2._STRUCT -_PLANNODE.fields_by_name[ - "execution_stats" -].message_type = google_dot_protobuf_dot_struct__pb2._STRUCT -_PLANNODE_KIND.containing_type = _PLANNODE -_QUERYPLAN.fields_by_name["plan_nodes"].message_type = _PLANNODE -DESCRIPTOR.message_types_by_name["PlanNode"] = _PLANNODE -DESCRIPTOR.message_types_by_name["QueryPlan"] = _QUERYPLAN -_sym_db.RegisterFileDescriptor(DESCRIPTOR) - -PlanNode = _reflection.GeneratedProtocolMessageType( - "PlanNode", - (_message.Message,), - dict( - ChildLink=_reflection.GeneratedProtocolMessageType( - "ChildLink", - (_message.Message,), - dict( - DESCRIPTOR=_PLANNODE_CHILDLINK, - __module__="google.cloud.spanner_v1.proto.query_plan_pb2", - __doc__="""Metadata associated with a parent-child relationship - appearing in a [PlanNode][google.spanner.v1.PlanNode]. - - - Attributes: - child_index: - The node to which the link points. - type: - The type of the link. For example, in Hash Joins this could be - used to distinguish between the build child and the probe - child, or in the case of the child being an output variable, - to represent the tag associated with the output variable. - variable: - Only present if the child node is - [SCALAR][google.spanner.v1.PlanNode.Kind.SCALAR] and - corresponds to an output variable of the parent node. The - field carries the name of the output variable. 
For example, a - ``TableScan`` operator that reads rows from a table will have - child links to the ``SCALAR`` nodes representing the output - variables created for each column that is read by the - operator. The corresponding ``variable`` fields will be set to - the variable names assigned to the columns. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.PlanNode.ChildLink) - ), - ), - ShortRepresentation=_reflection.GeneratedProtocolMessageType( - "ShortRepresentation", - (_message.Message,), - dict( - SubqueriesEntry=_reflection.GeneratedProtocolMessageType( - "SubqueriesEntry", - (_message.Message,), - dict( - DESCRIPTOR=_PLANNODE_SHORTREPRESENTATION_SUBQUERIESENTRY, - __module__="google.cloud.spanner_v1.proto.query_plan_pb2" - # @@protoc_insertion_point(class_scope:google.spanner.v1.PlanNode.ShortRepresentation.SubqueriesEntry) - ), - ), - DESCRIPTOR=_PLANNODE_SHORTREPRESENTATION, - __module__="google.cloud.spanner_v1.proto.query_plan_pb2", - __doc__="""Condensed representation of a node and its subtree. Only present for - ``SCALAR`` [PlanNode(s)][google.spanner.v1.PlanNode]. - - - Attributes: - description: - A string representation of the expression subtree rooted at - this node. - subqueries: - A mapping of (subquery variable name) -> (subquery node id) - for cases where the ``description`` string of this node - references a ``SCALAR`` subquery contained in the expression - subtree rooted at this node. The referenced ``SCALAR`` - subquery may not necessarily be a direct child of this node. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.PlanNode.ShortRepresentation) - ), - ), - DESCRIPTOR=_PLANNODE, - __module__="google.cloud.spanner_v1.proto.query_plan_pb2", - __doc__="""Node information for nodes appearing in a - [QueryPlan.plan\_nodes][google.spanner.v1.QueryPlan.plan\_nodes]. - - - Attributes: - index: - The ``PlanNode``'s index in [node - list][google.spanner.v1.QueryPlan.plan\_nodes]. - kind: - Used to determine the type of node. May be needed for - visualizing different kinds of nodes differently. For example, - If the node is a - [SCALAR][google.spanner.v1.PlanNode.Kind.SCALAR] node, it will - have a condensed representation which can be used to directly - embed a description of the node in its parent. - display_name: - The display name for the node. - child_links: - List of child node ``index``\ es and their relationship to - this parent. - short_representation: - Condensed representation for - [SCALAR][google.spanner.v1.PlanNode.Kind.SCALAR] nodes. - metadata: - Attributes relevant to the node contained in a group of key- - value pairs. For example, a Parameter Reference node could - have the following information in its metadata: :: { - "parameter_reference": "param1", "parameter_type": - "array" } - execution_stats: - The execution statistics associated with the node, contained - in a group of key-value pairs. Only present if the plan was - returned as a result of a profile query. For example, number - of executions, number of rows/time per execution etc. 
- """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.PlanNode) - ), -) -_sym_db.RegisterMessage(PlanNode) -_sym_db.RegisterMessage(PlanNode.ChildLink) -_sym_db.RegisterMessage(PlanNode.ShortRepresentation) -_sym_db.RegisterMessage(PlanNode.ShortRepresentation.SubqueriesEntry) - -QueryPlan = _reflection.GeneratedProtocolMessageType( - "QueryPlan", - (_message.Message,), - dict( - DESCRIPTOR=_QUERYPLAN, - __module__="google.cloud.spanner_v1.proto.query_plan_pb2", - __doc__="""Contains an ordered list of nodes appearing in the query - plan. - - - Attributes: - plan_nodes: - The nodes in the query plan. Plan nodes are returned in pre- - order starting with the plan root. Each - [PlanNode][google.spanner.v1.PlanNode]'s ``id`` corresponds to - its index in ``plan_nodes``. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.QueryPlan) - ), -) -_sym_db.RegisterMessage(QueryPlan) - - -DESCRIPTOR._options = None -_PLANNODE_SHORTREPRESENTATION_SUBQUERIESENTRY._options = None -# @@protoc_insertion_point(module_scope) diff --git a/google/cloud/spanner_v1/proto/query_plan_pb2_grpc.py b/google/cloud/spanner_v1/proto/query_plan_pb2_grpc.py deleted file mode 100644 index 07cb78fe03..0000000000 --- a/google/cloud/spanner_v1/proto/query_plan_pb2_grpc.py +++ /dev/null @@ -1,2 +0,0 @@ -# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! -import grpc diff --git a/google/cloud/spanner_v1/proto/result_set.proto b/google/cloud/spanner_v1/proto/result_set.proto deleted file mode 100644 index e24a35aaf8..0000000000 --- a/google/cloud/spanner_v1/proto/result_set.proto +++ /dev/null @@ -1,204 +0,0 @@ -// Copyright 2020 Google LLC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -syntax = "proto3"; - -package google.spanner.v1; - -import "google/protobuf/struct.proto"; -import "google/spanner/v1/query_plan.proto"; -import "google/spanner/v1/transaction.proto"; -import "google/spanner/v1/type.proto"; -import "google/api/annotations.proto"; - -option cc_enable_arenas = true; -option csharp_namespace = "Google.Cloud.Spanner.V1"; -option go_package = "google.golang.org/genproto/googleapis/spanner/v1;spanner"; -option java_multiple_files = true; -option java_outer_classname = "ResultSetProto"; -option java_package = "com.google.spanner.v1"; -option php_namespace = "Google\\Cloud\\Spanner\\V1"; - -// Results from [Read][google.spanner.v1.Spanner.Read] or -// [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. -message ResultSet { - // Metadata about the result set, such as row type information. - ResultSetMetadata metadata = 1; - - // Each element in `rows` is a row whose format is defined by - // [metadata.row_type][google.spanner.v1.ResultSetMetadata.row_type]. The ith element - // in each row matches the ith field in - // [metadata.row_type][google.spanner.v1.ResultSetMetadata.row_type]. Elements are - // encoded based on type as described - // [here][google.spanner.v1.TypeCode]. 
- repeated google.protobuf.ListValue rows = 2; - - // Query plan and execution statistics for the SQL statement that - // produced this result set. These can be requested by setting - // [ExecuteSqlRequest.query_mode][google.spanner.v1.ExecuteSqlRequest.query_mode]. - // DML statements always produce stats containing the number of rows - // modified, unless executed using the - // [ExecuteSqlRequest.QueryMode.PLAN][google.spanner.v1.ExecuteSqlRequest.QueryMode.PLAN] [ExecuteSqlRequest.query_mode][google.spanner.v1.ExecuteSqlRequest.query_mode]. - // Other fields may or may not be populated, based on the - // [ExecuteSqlRequest.query_mode][google.spanner.v1.ExecuteSqlRequest.query_mode]. - ResultSetStats stats = 3; -} - -// Partial results from a streaming read or SQL query. Streaming reads and -// SQL queries better tolerate large result sets, large rows, and large -// values, but are a little trickier to consume. -message PartialResultSet { - // Metadata about the result set, such as row type information. - // Only present in the first response. - ResultSetMetadata metadata = 1; - - // A streamed result set consists of a stream of values, which might - // be split into many `PartialResultSet` messages to accommodate - // large rows and/or large values. Every N complete values defines a - // row, where N is equal to the number of entries in - // [metadata.row_type.fields][google.spanner.v1.StructType.fields]. - // - // Most values are encoded based on type as described - // [here][google.spanner.v1.TypeCode]. - // - // It is possible that the last value in values is "chunked", - // meaning that the rest of the value is sent in subsequent - // `PartialResultSet`(s). This is denoted by the [chunked_value][google.spanner.v1.PartialResultSet.chunked_value] - // field. Two or more chunked values can be merged to form a - // complete value as follows: - // - // * `bool/number/null`: cannot be chunked - // * `string`: concatenate the strings - // * `list`: concatenate the lists. If the last element in a list is a - // `string`, `list`, or `object`, merge it with the first element in - // the next list by applying these rules recursively. - // * `object`: concatenate the (field name, field value) pairs. If a - // field name is duplicated, then apply these rules recursively - // to merge the field values. - // - // Some examples of merging: - // - // # Strings are concatenated. - // "foo", "bar" => "foobar" - // - // # Lists of non-strings are concatenated. - // [2, 3], [4] => [2, 3, 4] - // - // # Lists are concatenated, but the last and first elements are merged - // # because they are strings. - // ["a", "b"], ["c", "d"] => ["a", "bc", "d"] - // - // # Lists are concatenated, but the last and first elements are merged - // # because they are lists. Recursively, the last and first elements - // # of the inner lists are merged because they are strings. - // ["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"] - // - // # Non-overlapping object fields are combined. - // {"a": "1"}, {"b": "2"} => {"a": "1", "b": 2"} - // - // # Overlapping object fields are merged. - // {"a": "1"}, {"a": "2"} => {"a": "12"} - // - // # Examples of merging objects containing lists of strings. - // {"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]} - // - // For a more complete example, suppose a streaming SQL query is - // yielding a result set whose rows contain a single string - // field. The following `PartialResultSet`s might be yielded: - // - // { - // "metadata": { ... 
} - // "values": ["Hello", "W"] - // "chunked_value": true - // "resume_token": "Af65..." - // } - // { - // "values": ["orl"] - // "chunked_value": true - // "resume_token": "Bqp2..." - // } - // { - // "values": ["d"] - // "resume_token": "Zx1B..." - // } - // - // This sequence of `PartialResultSet`s encodes two rows, one - // containing the field value `"Hello"`, and a second containing the - // field value `"World" = "W" + "orl" + "d"`. - repeated google.protobuf.Value values = 2; - - // If true, then the final value in [values][google.spanner.v1.PartialResultSet.values] is chunked, and must - // be combined with more values from subsequent `PartialResultSet`s - // to obtain a complete field value. - bool chunked_value = 3; - - // Streaming calls might be interrupted for a variety of reasons, such - // as TCP connection loss. If this occurs, the stream of results can - // be resumed by re-sending the original request and including - // `resume_token`. Note that executing any other transaction in the - // same session invalidates the token. - bytes resume_token = 4; - - // Query plan and execution statistics for the statement that produced this - // streaming result set. These can be requested by setting - // [ExecuteSqlRequest.query_mode][google.spanner.v1.ExecuteSqlRequest.query_mode] and are sent - // only once with the last response in the stream. - // This field will also be present in the last response for DML - // statements. - ResultSetStats stats = 5; -} - -// Metadata about a [ResultSet][google.spanner.v1.ResultSet] or [PartialResultSet][google.spanner.v1.PartialResultSet]. -message ResultSetMetadata { - // Indicates the field names and types for the rows in the result - // set. For example, a SQL query like `"SELECT UserId, UserName FROM - // Users"` could return a `row_type` value like: - // - // "fields": [ - // { "name": "UserId", "type": { "code": "INT64" } }, - // { "name": "UserName", "type": { "code": "STRING" } }, - // ] - StructType row_type = 1; - - // If the read or SQL query began a transaction as a side-effect, the - // information about the new transaction is yielded here. - Transaction transaction = 2; -} - -// Additional statistics about a [ResultSet][google.spanner.v1.ResultSet] or [PartialResultSet][google.spanner.v1.PartialResultSet]. -message ResultSetStats { - // [QueryPlan][google.spanner.v1.QueryPlan] for the query associated with this result. - QueryPlan query_plan = 1; - - // Aggregated statistics from the execution of the query. Only present when - // the query is profiled. For example, a query could return the statistics as - // follows: - // - // { - // "rows_returned": "3", - // "elapsed_time": "1.22 secs", - // "cpu_time": "1.19 secs" - // } - google.protobuf.Struct query_stats = 2; - - // The number of rows modified by the DML statement. - oneof row_count { - // Standard DML returns an exact count of rows that were modified. - int64 row_count_exact = 3; - - // Partitioned DML does not offer exactly-once semantics, so it - // returns a lower bound of the rows modified. - int64 row_count_lower_bound = 4; - } -} diff --git a/google/cloud/spanner_v1/proto/result_set_pb2.py b/google/cloud/spanner_v1/proto/result_set_pb2.py deleted file mode 100644 index 3740450e6b..0000000000 --- a/google/cloud/spanner_v1/proto/result_set_pb2.py +++ /dev/null @@ -1,625 +0,0 @@ -# -*- coding: utf-8 -*- -# Generated by the protocol buffer compiler. DO NOT EDIT! 
-# source: google/cloud/spanner_v1/proto/result_set.proto - -import sys - -_b = sys.version_info[0] < 3 and (lambda x: x) or (lambda x: x.encode("latin1")) -from google.protobuf import descriptor as _descriptor -from google.protobuf import message as _message -from google.protobuf import reflection as _reflection -from google.protobuf import symbol_database as _symbol_database - -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - -from google.protobuf import struct_pb2 as google_dot_protobuf_dot_struct__pb2 -from google.cloud.spanner_v1.proto import ( - query_plan_pb2 as google_dot_cloud_dot_spanner__v1_dot_proto_dot_query__plan__pb2, -) -from google.cloud.spanner_v1.proto import ( - transaction_pb2 as google_dot_cloud_dot_spanner__v1_dot_proto_dot_transaction__pb2, -) -from google.cloud.spanner_v1.proto import ( - type_pb2 as google_dot_cloud_dot_spanner__v1_dot_proto_dot_type__pb2, -) -from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2 - - -DESCRIPTOR = _descriptor.FileDescriptor( - name="google/cloud/spanner_v1/proto/result_set.proto", - package="google.spanner.v1", - syntax="proto3", - serialized_options=_b( - "\n\025com.google.spanner.v1B\016ResultSetProtoP\001Z8google.golang.org/genproto/googleapis/spanner/v1;spanner\370\001\001\252\002\027Google.Cloud.Spanner.V1\312\002\027Google\\Cloud\\Spanner\\V1" - ), - serialized_pb=_b( - '\n.google/cloud/spanner_v1/proto/result_set.proto\x12\x11google.spanner.v1\x1a\x1cgoogle/protobuf/struct.proto\x1a.google/cloud/spanner_v1/proto/query_plan.proto\x1a/google/cloud/spanner_v1/proto/transaction.proto\x1a(google/cloud/spanner_v1/proto/type.proto\x1a\x1cgoogle/api/annotations.proto"\x9f\x01\n\tResultSet\x12\x36\n\x08metadata\x18\x01 \x01(\x0b\x32$.google.spanner.v1.ResultSetMetadata\x12(\n\x04rows\x18\x02 \x03(\x0b\x32\x1a.google.protobuf.ListValue\x12\x30\n\x05stats\x18\x03 \x01(\x0b\x32!.google.spanner.v1.ResultSetStats"\xd1\x01\n\x10PartialResultSet\x12\x36\n\x08metadata\x18\x01 \x01(\x0b\x32$.google.spanner.v1.ResultSetMetadata\x12&\n\x06values\x18\x02 \x03(\x0b\x32\x16.google.protobuf.Value\x12\x15\n\rchunked_value\x18\x03 \x01(\x08\x12\x14\n\x0cresume_token\x18\x04 \x01(\x0c\x12\x30\n\x05stats\x18\x05 \x01(\x0b\x32!.google.spanner.v1.ResultSetStats"y\n\x11ResultSetMetadata\x12/\n\x08row_type\x18\x01 \x01(\x0b\x32\x1d.google.spanner.v1.StructType\x12\x33\n\x0btransaction\x18\x02 \x01(\x0b\x32\x1e.google.spanner.v1.Transaction"\xb9\x01\n\x0eResultSetStats\x12\x30\n\nquery_plan\x18\x01 \x01(\x0b\x32\x1c.google.spanner.v1.QueryPlan\x12,\n\x0bquery_stats\x18\x02 \x01(\x0b\x32\x17.google.protobuf.Struct\x12\x19\n\x0frow_count_exact\x18\x03 \x01(\x03H\x00\x12\x1f\n\x15row_count_lower_bound\x18\x04 \x01(\x03H\x00\x42\x0b\n\trow_countB\x9a\x01\n\x15\x63om.google.spanner.v1B\x0eResultSetProtoP\x01Z8google.golang.org/genproto/googleapis/spanner/v1;spanner\xf8\x01\x01\xaa\x02\x17Google.Cloud.Spanner.V1\xca\x02\x17Google\\Cloud\\Spanner\\V1b\x06proto3' - ), - dependencies=[ - google_dot_protobuf_dot_struct__pb2.DESCRIPTOR, - google_dot_cloud_dot_spanner__v1_dot_proto_dot_query__plan__pb2.DESCRIPTOR, - google_dot_cloud_dot_spanner__v1_dot_proto_dot_transaction__pb2.DESCRIPTOR, - google_dot_cloud_dot_spanner__v1_dot_proto_dot_type__pb2.DESCRIPTOR, - google_dot_api_dot_annotations__pb2.DESCRIPTOR, - ], -) - - -_RESULTSET = _descriptor.Descriptor( - name="ResultSet", - full_name="google.spanner.v1.ResultSet", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - 
_descriptor.FieldDescriptor( - name="metadata", - full_name="google.spanner.v1.ResultSet.metadata", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="rows", - full_name="google.spanner.v1.ResultSet.rows", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="stats", - full_name="google.spanner.v1.ResultSet.stats", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=269, - serialized_end=428, -) - - -_PARTIALRESULTSET = _descriptor.Descriptor( - name="PartialResultSet", - full_name="google.spanner.v1.PartialResultSet", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="metadata", - full_name="google.spanner.v1.PartialResultSet.metadata", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="values", - full_name="google.spanner.v1.PartialResultSet.values", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="chunked_value", - full_name="google.spanner.v1.PartialResultSet.chunked_value", - index=2, - number=3, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="resume_token", - full_name="google.spanner.v1.PartialResultSet.resume_token", - index=3, - number=4, - type=12, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b(""), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="stats", - full_name="google.spanner.v1.PartialResultSet.stats", - index=4, - number=5, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - 
oneofs=[], - serialized_start=431, - serialized_end=640, -) - - -_RESULTSETMETADATA = _descriptor.Descriptor( - name="ResultSetMetadata", - full_name="google.spanner.v1.ResultSetMetadata", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="row_type", - full_name="google.spanner.v1.ResultSetMetadata.row_type", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="transaction", - full_name="google.spanner.v1.ResultSetMetadata.transaction", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=642, - serialized_end=763, -) - - -_RESULTSETSTATS = _descriptor.Descriptor( - name="ResultSetStats", - full_name="google.spanner.v1.ResultSetStats", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="query_plan", - full_name="google.spanner.v1.ResultSetStats.query_plan", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="query_stats", - full_name="google.spanner.v1.ResultSetStats.query_stats", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="row_count_exact", - full_name="google.spanner.v1.ResultSetStats.row_count_exact", - index=2, - number=3, - type=3, - cpp_type=2, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="row_count_lower_bound", - full_name="google.spanner.v1.ResultSetStats.row_count_lower_bound", - index=3, - number=4, - type=3, - cpp_type=2, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[ - _descriptor.OneofDescriptor( - name="row_count", - full_name="google.spanner.v1.ResultSetStats.row_count", - index=0, - containing_type=None, - fields=[], - ) - ], - serialized_start=766, - serialized_end=951, -) - -_RESULTSET.fields_by_name["metadata"].message_type = _RESULTSETMETADATA -_RESULTSET.fields_by_name[ - "rows" -].message_type = google_dot_protobuf_dot_struct__pb2._LISTVALUE 
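Editor's note: the chunked-value merge rules quoted in the deleted ``result_set.proto`` above (strings concatenate; boundary elements of lists and objects merge recursively; ``bool``/number/``null`` cannot be chunked) are easiest to see in code. The sketch below is illustrative only — ``merge_chunks`` is a hypothetical helper, not part of this library, whose streamed-result machinery performs the equivalent merge internally:

.. code-block:: python

    def merge_chunks(a, b):
        """Merge two halves of a chunked ``PartialResultSet`` value.

        Hypothetical helper mirroring the proto comment's rules:
        strings concatenate; lists and objects concatenate with their
        boundary elements merged recursively; other scalars cannot be
        chunked.
        """
        if isinstance(a, str) and isinstance(b, str):
            return a + b
        if isinstance(a, list) and isinstance(b, list):
            # Merge the boundary pair when the last element is itself
            # mergeable (string, list, or object); else just concatenate.
            if a and b and isinstance(a[-1], (str, list, dict)):
                return a[:-1] + [merge_chunks(a[-1], b[0])] + b[1:]
            return a + b
        if isinstance(a, dict) and isinstance(b, dict):
            merged = dict(a)
            for key, value in b.items():
                merged[key] = merge_chunks(merged[key], value) if key in merged else value
            return merged
        raise TypeError("bool/number/null values cannot be chunked")

    # The examples from the deleted proto comments hold under these rules:
    assert merge_chunks("foo", "bar") == "foobar"
    assert merge_chunks([2, 3], [4]) == [2, 3, 4]
    assert merge_chunks(["a", "b"], ["c", "d"]) == ["a", "bc", "d"]
    assert merge_chunks(["a", ["b", "c"]], [["d"], "e"]) == ["a", ["b", "cd"], "e"]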
-_RESULTSET.fields_by_name["stats"].message_type = _RESULTSETSTATS -_PARTIALRESULTSET.fields_by_name["metadata"].message_type = _RESULTSETMETADATA -_PARTIALRESULTSET.fields_by_name[ - "values" -].message_type = google_dot_protobuf_dot_struct__pb2._VALUE -_PARTIALRESULTSET.fields_by_name["stats"].message_type = _RESULTSETSTATS -_RESULTSETMETADATA.fields_by_name[ - "row_type" -].message_type = google_dot_cloud_dot_spanner__v1_dot_proto_dot_type__pb2._STRUCTTYPE -_RESULTSETMETADATA.fields_by_name[ - "transaction" -].message_type = ( - google_dot_cloud_dot_spanner__v1_dot_proto_dot_transaction__pb2._TRANSACTION -) -_RESULTSETSTATS.fields_by_name[ - "query_plan" -].message_type = ( - google_dot_cloud_dot_spanner__v1_dot_proto_dot_query__plan__pb2._QUERYPLAN -) -_RESULTSETSTATS.fields_by_name[ - "query_stats" -].message_type = google_dot_protobuf_dot_struct__pb2._STRUCT -_RESULTSETSTATS.oneofs_by_name["row_count"].fields.append( - _RESULTSETSTATS.fields_by_name["row_count_exact"] -) -_RESULTSETSTATS.fields_by_name[ - "row_count_exact" -].containing_oneof = _RESULTSETSTATS.oneofs_by_name["row_count"] -_RESULTSETSTATS.oneofs_by_name["row_count"].fields.append( - _RESULTSETSTATS.fields_by_name["row_count_lower_bound"] -) -_RESULTSETSTATS.fields_by_name[ - "row_count_lower_bound" -].containing_oneof = _RESULTSETSTATS.oneofs_by_name["row_count"] -DESCRIPTOR.message_types_by_name["ResultSet"] = _RESULTSET -DESCRIPTOR.message_types_by_name["PartialResultSet"] = _PARTIALRESULTSET -DESCRIPTOR.message_types_by_name["ResultSetMetadata"] = _RESULTSETMETADATA -DESCRIPTOR.message_types_by_name["ResultSetStats"] = _RESULTSETSTATS -_sym_db.RegisterFileDescriptor(DESCRIPTOR) - -ResultSet = _reflection.GeneratedProtocolMessageType( - "ResultSet", - (_message.Message,), - dict( - DESCRIPTOR=_RESULTSET, - __module__="google.cloud.spanner_v1.proto.result_set_pb2", - __doc__="""Results from [Read][google.spanner.v1.Spanner.Read] or - [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. - - - Attributes: - metadata: - Metadata about the result set, such as row type information. - rows: - Each element in ``rows`` is a row whose format is defined by [ - metadata.row\_type][google.spanner.v1.ResultSetMetadata.row\_t - ype]. The ith element in each row matches the ith field in [me - tadata.row\_type][google.spanner.v1.ResultSetMetadata.row\_typ - e]. Elements are encoded based on type as described - [here][google.spanner.v1.TypeCode]. - stats: - Query plan and execution statistics for the SQL statement that - produced this result set. These can be requested by setting [E - xecuteSqlRequest.query\_mode][google.spanner.v1.ExecuteSqlRequ - est.query\_mode]. DML statements always produce stats - containing the number of rows modified, unless executed using - the [ExecuteSqlRequest.QueryMode.PLAN][google.spanner.v1.Execu - teSqlRequest.QueryMode.PLAN] [ExecuteSqlRequest.query\_mode][g - oogle.spanner.v1.ExecuteSqlRequest.query\_mode]. Other fields - may or may not be populated, based on the [ExecuteSqlRequest.q - uery\_mode][google.spanner.v1.ExecuteSqlRequest.query\_mode]. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.ResultSet) - ), -) -_sym_db.RegisterMessage(ResultSet) - -PartialResultSet = _reflection.GeneratedProtocolMessageType( - "PartialResultSet", - (_message.Message,), - dict( - DESCRIPTOR=_PARTIALRESULTSET, - __module__="google.cloud.spanner_v1.proto.result_set_pb2", - __doc__="""Partial results from a streaming read or SQL query. 
- Streaming reads and SQL queries better tolerate large result sets, large - rows, and large values, but are a little trickier to consume. - - - Attributes: - metadata: - Metadata about the result set, such as row type information. - Only present in the first response. - values: - A streamed result set consists of a stream of values, which - might be split into many ``PartialResultSet`` messages to - accommodate large rows and/or large values. Every N complete - values defines a row, where N is equal to the number of - entries in [metadata.row\_type.fields][google.spanner.v1.Struc - tType.fields]. Most values are encoded based on type as - described [here][google.spanner.v1.TypeCode]. It is possible - that the last value in values is "chunked", meaning that the - rest of the value is sent in subsequent ``PartialResultSet``\ - (s). This is denoted by the [chunked\_value][google.spanner.v1 - .PartialResultSet.chunked\_value] field. Two or more chunked - values can be merged to form a complete value as follows: - - ``bool/number/null``: cannot be chunked - ``string``: - concatenate the strings - ``list``: concatenate the lists. If - the last element in a list is a ``string``, ``list``, or - ``object``, merge it with the first element in the next - list by applying these rules recursively. - ``object``: - concatenate the (field name, field value) pairs. If a field - name is duplicated, then apply these rules recursively to - merge the field values. Some examples of merging: :: - # Strings are concatenated. "foo", "bar" => "foobar" - # Lists of non-strings are concatenated. [2, 3], [4] => - [2, 3, 4] # Lists are concatenated, but the last and - first elements are merged # because they are strings. - ["a", "b"], ["c", "d"] => ["a", "bc", "d"] # Lists are - concatenated, but the last and first elements are merged # - because they are lists. Recursively, the last and first - elements # of the inner lists are merged because they are - strings. ["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", - "cd"], "e"] # Non-overlapping object fields are combined. - {"a": "1"}, {"b": "2"} => {"a": "1", "b": 2"} # - Overlapping object fields are merged. {"a": "1"}, {"a": - "2"} => {"a": "12"} # Examples of merging objects - containing lists of strings. {"a": ["1"]}, {"a": ["2"]} => - {"a": ["12"]} For a more complete example, suppose a - streaming SQL query is yielding a result set whose rows - contain a single string field. The following - ``PartialResultSet``\ s might be yielded: :: { - "metadata": { ... } "values": ["Hello", "W"] - "chunked_value": true "resume_token": "Af65..." } - { "values": ["orl"] "chunked_value": true - "resume_token": "Bqp2..." } { "values": ["d"] - "resume_token": "Zx1B..." } This sequence of - ``PartialResultSet``\ s encodes two rows, one containing the - field value ``"Hello"``, and a second containing the field - value ``"World" = "W" + "orl" + "d"``. - chunked_value: - If true, then the final value in - [values][google.spanner.v1.PartialResultSet.values] is - chunked, and must be combined with more values from subsequent - ``PartialResultSet``\ s to obtain a complete field value. - resume_token: - Streaming calls might be interrupted for a variety of reasons, - such as TCP connection loss. If this occurs, the stream of - results can be resumed by re-sending the original request and - including ``resume_token``. Note that executing any other - transaction in the same session invalidates the token. 
- stats: - Query plan and execution statistics for the statement that - produced this streaming result set. These can be requested by - setting [ExecuteSqlRequest.query\_mode][google.spanner.v1.Exec - uteSqlRequest.query\_mode] and are sent only once with the - last response in the stream. This field will also be present - in the last response for DML statements. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.PartialResultSet) - ), -) -_sym_db.RegisterMessage(PartialResultSet) - -ResultSetMetadata = _reflection.GeneratedProtocolMessageType( - "ResultSetMetadata", - (_message.Message,), - dict( - DESCRIPTOR=_RESULTSETMETADATA, - __module__="google.cloud.spanner_v1.proto.result_set_pb2", - __doc__="""Metadata about a [ResultSet][google.spanner.v1.ResultSet] - or [PartialResultSet][google.spanner.v1.PartialResultSet]. - - - Attributes: - row_type: - Indicates the field names and types for the rows in the result - set. For example, a SQL query like ``"SELECT UserId, UserName - FROM Users"`` could return a ``row_type`` value like: :: - "fields": [ { "name": "UserId", "type": { "code": - "INT64" } }, { "name": "UserName", "type": { "code": - "STRING" } }, ] - transaction: - If the read or SQL query began a transaction as a side-effect, - the information about the new transaction is yielded here. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.ResultSetMetadata) - ), -) -_sym_db.RegisterMessage(ResultSetMetadata) - -ResultSetStats = _reflection.GeneratedProtocolMessageType( - "ResultSetStats", - (_message.Message,), - dict( - DESCRIPTOR=_RESULTSETSTATS, - __module__="google.cloud.spanner_v1.proto.result_set_pb2", - __doc__="""Additional statistics about a - [ResultSet][google.spanner.v1.ResultSet] or - [PartialResultSet][google.spanner.v1.PartialResultSet]. - - - Attributes: - query_plan: - [QueryPlan][google.spanner.v1.QueryPlan] for the query - associated with this result. - query_stats: - Aggregated statistics from the execution of the query. Only - present when the query is profiled. For example, a query could - return the statistics as follows: :: { - "rows_returned": "3", "elapsed_time": "1.22 secs", - "cpu_time": "1.19 secs" } - row_count: - The number of rows modified by the DML statement. - row_count_exact: - Standard DML returns an exact count of rows that were - modified. - row_count_lower_bound: - Partitioned DML does not offer exactly-once semantics, so it - returns a lower bound of the rows modified. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.ResultSetStats) - ), -) -_sym_db.RegisterMessage(ResultSetStats) - - -DESCRIPTOR._options = None -# @@protoc_insertion_point(module_scope) diff --git a/google/cloud/spanner_v1/proto/result_set_pb2_grpc.py b/google/cloud/spanner_v1/proto/result_set_pb2_grpc.py deleted file mode 100644 index 07cb78fe03..0000000000 --- a/google/cloud/spanner_v1/proto/result_set_pb2_grpc.py +++ /dev/null @@ -1,2 +0,0 @@ -# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! -import grpc diff --git a/google/cloud/spanner_v1/proto/spanner.proto b/google/cloud/spanner_v1/proto/spanner.proto deleted file mode 100644 index 0c7da37c72..0000000000 --- a/google/cloud/spanner_v1/proto/spanner.proto +++ /dev/null @@ -1,906 +0,0 @@ -// Copyright 2020 Google LLC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -syntax = "proto3"; - -package google.spanner.v1; - -import "google/api/annotations.proto"; -import "google/api/client.proto"; -import "google/api/field_behavior.proto"; -import "google/api/resource.proto"; -import "google/protobuf/empty.proto"; -import "google/protobuf/struct.proto"; -import "google/protobuf/timestamp.proto"; -import "google/rpc/status.proto"; -import "google/spanner/v1/keys.proto"; -import "google/spanner/v1/mutation.proto"; -import "google/spanner/v1/result_set.proto"; -import "google/spanner/v1/transaction.proto"; -import "google/spanner/v1/type.proto"; - -option csharp_namespace = "Google.Cloud.Spanner.V1"; -option go_package = "google.golang.org/genproto/googleapis/spanner/v1;spanner"; -option java_multiple_files = true; -option java_outer_classname = "SpannerProto"; -option java_package = "com.google.spanner.v1"; -option php_namespace = "Google\\Cloud\\Spanner\\V1"; -option (google.api.resource_definition) = { - type: "spanner.googleapis.com/Database" - pattern: "projects/{project}/instances/{instance}/databases/{database}" -}; - -// Cloud Spanner API -// -// The Cloud Spanner API can be used to manage sessions and execute -// transactions on data stored in Cloud Spanner databases. -service Spanner { - option (google.api.default_host) = "spanner.googleapis.com"; - option (google.api.oauth_scopes) = - "https://www.googleapis.com/auth/cloud-platform," - "https://www.googleapis.com/auth/spanner.data"; - - // Creates a new session. A session can be used to perform - // transactions that read and/or modify data in a Cloud Spanner database. - // Sessions are meant to be reused for many consecutive - // transactions. - // - // Sessions can only execute one transaction at a time. To execute - // multiple concurrent read-write/write-only transactions, create - // multiple sessions. Note that standalone reads and queries use a - // transaction internally, and count toward the one transaction - // limit. - // - // Active sessions use additional server resources, so it is a good idea to - // delete idle and unneeded sessions. - // Aside from explicit deletes, Cloud Spanner may delete sessions for which no - // operations are sent for more than an hour. If a session is deleted, - // requests to it return `NOT_FOUND`. - // - // Idle sessions can be kept alive by sending a trivial SQL query - // periodically, e.g., `"SELECT 1"`. - rpc CreateSession(CreateSessionRequest) returns (Session) { - option (google.api.http) = { - post: "/v1/{database=projects/*/instances/*/databases/*}/sessions" - body: "*" - }; - option (google.api.method_signature) = "database"; - } - - // Creates multiple new sessions. - // - // This API can be used to initialize a session cache on the clients. - // See https://goo.gl/TgSFN2 for best practices on session cache management. - rpc BatchCreateSessions(BatchCreateSessionsRequest) returns (BatchCreateSessionsResponse) { - option (google.api.http) = { - post: "/v1/{database=projects/*/instances/*/databases/*}/sessions:batchCreate" - body: "*" - }; - option (google.api.method_signature) = "database,session_count"; - } - - // Gets a session. 
Returns `NOT_FOUND` if the session does not exist. - // This is mainly useful for determining whether a session is still - // alive. - rpc GetSession(GetSessionRequest) returns (Session) { - option (google.api.http) = { - get: "/v1/{name=projects/*/instances/*/databases/*/sessions/*}" - }; - option (google.api.method_signature) = "name"; - } - - // Lists all sessions in a given database. - rpc ListSessions(ListSessionsRequest) returns (ListSessionsResponse) { - option (google.api.http) = { - get: "/v1/{database=projects/*/instances/*/databases/*}/sessions" - }; - option (google.api.method_signature) = "database"; - } - - // Ends a session, releasing server resources associated with it. This will - // asynchronously trigger cancellation of any operations that are running with - // this session. - rpc DeleteSession(DeleteSessionRequest) returns (google.protobuf.Empty) { - option (google.api.http) = { - delete: "/v1/{name=projects/*/instances/*/databases/*/sessions/*}" - }; - option (google.api.method_signature) = "name"; - } - - // Executes an SQL statement, returning all results in a single reply. This - // method cannot be used to return a result set larger than 10 MiB; - // if the query yields more data than that, the query fails with - // a `FAILED_PRECONDITION` error. - // - // Operations inside read-write transactions might return `ABORTED`. If - // this occurs, the application should restart the transaction from - // the beginning. See [Transaction][google.spanner.v1.Transaction] for more details. - // - // Larger result sets can be fetched in streaming fashion by calling - // [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql] instead. - rpc ExecuteSql(ExecuteSqlRequest) returns (ResultSet) { - option (google.api.http) = { - post: "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:executeSql" - body: "*" - }; - } - - // Like [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql], except returns the result - // set as a stream. Unlike [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql], there - // is no limit on the size of the returned result set. However, no - // individual row in the result set can exceed 100 MiB, and no - // column value can exceed 10 MiB. - rpc ExecuteStreamingSql(ExecuteSqlRequest) returns (stream PartialResultSet) { - option (google.api.http) = { - post: "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:executeStreamingSql" - body: "*" - }; - } - - // Executes a batch of SQL DML statements. This method allows many statements - // to be run with lower latency than submitting them sequentially with - // [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. - // - // Statements are executed in sequential order. A request can succeed even if - // a statement fails. The [ExecuteBatchDmlResponse.status][google.spanner.v1.ExecuteBatchDmlResponse.status] field in the - // response provides information about the statement that failed. Clients must - // inspect this field to determine whether an error occurred. - // - // Execution stops after the first failed statement; the remaining statements - // are not executed. - rpc ExecuteBatchDml(ExecuteBatchDmlRequest) returns (ExecuteBatchDmlResponse) { - option (google.api.http) = { - post: "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:executeBatchDml" - body: "*" - }; - } - - // Reads rows from the database using key lookups and scans, as a - // simple key/value style alternative to - // [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. 
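For context, the Python client consumes queries through the streaming variant of this RPC, so result sets larger than the ``ExecuteSql`` limit can still be read row by row. A minimal sketch; ``my-instance``, ``my-database``, and the ``Users`` table are placeholders:

.. code-block:: python

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    # execute_sql streams PartialResultSets and assembles rows for you.
    with database.snapshot() as snapshot:
        for row in snapshot.execute_sql("SELECT UserId, UserName FROM Users"):
            print(row)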
This method cannot be used to - // return a result set larger than 10 MiB; if the read matches more - // data than that, the read fails with a `FAILED_PRECONDITION` - // error. - // - // Reads inside read-write transactions might return `ABORTED`. If - // this occurs, the application should restart the transaction from - // the beginning. See [Transaction][google.spanner.v1.Transaction] for more details. - // - // Larger result sets can be yielded in streaming fashion by calling - // [StreamingRead][google.spanner.v1.Spanner.StreamingRead] instead. - rpc Read(ReadRequest) returns (ResultSet) { - option (google.api.http) = { - post: "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:read" - body: "*" - }; - } - - // Like [Read][google.spanner.v1.Spanner.Read], except returns the result set as a - // stream. Unlike [Read][google.spanner.v1.Spanner.Read], there is no limit on the - // size of the returned result set. However, no individual row in - // the result set can exceed 100 MiB, and no column value can exceed - // 10 MiB. - rpc StreamingRead(ReadRequest) returns (stream PartialResultSet) { - option (google.api.http) = { - post: "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:streamingRead" - body: "*" - }; - } - - // Begins a new transaction. This step can often be skipped: - // [Read][google.spanner.v1.Spanner.Read], [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] and - // [Commit][google.spanner.v1.Spanner.Commit] can begin a new transaction as a - // side-effect. - rpc BeginTransaction(BeginTransactionRequest) returns (Transaction) { - option (google.api.http) = { - post: "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:beginTransaction" - body: "*" - }; - option (google.api.method_signature) = "session,options"; - } - - // Commits a transaction. The request includes the mutations to be - // applied to rows in the database. - // - // `Commit` might return an `ABORTED` error. This can occur at any time; - // commonly, the cause is conflicts with concurrent - // transactions. However, it can also happen for a variety of other - // reasons. If `Commit` returns `ABORTED`, the caller should re-attempt - // the transaction from the beginning, re-using the same session. - rpc Commit(CommitRequest) returns (CommitResponse) { - option (google.api.http) = { - post: "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:commit" - body: "*" - }; - option (google.api.method_signature) = "session,transaction_id,mutations"; - option (google.api.method_signature) = "session,single_use_transaction,mutations"; - } - - // Rolls back a transaction, releasing any locks it holds. It is a good - // idea to call this for any transaction that includes one or more - // [Read][google.spanner.v1.Spanner.Read] or [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] requests and - // ultimately decides not to commit. - // - // `Rollback` returns `OK` if it successfully aborts the transaction, the - // transaction was already aborted, or the transaction is not - // found. `Rollback` never returns `ABORTED`. - rpc Rollback(RollbackRequest) returns (google.protobuf.Empty) { - option (google.api.http) = { - post: "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:rollback" - body: "*" - }; - option (google.api.method_signature) = "session,transaction_id"; - } - - // Creates a set of partition tokens that can be used to execute a query - // operation in parallel. 
Each of the returned partition tokens can be used - // by [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql] to specify a subset - // of the query result to read. The same session and read-only transaction - // must be used by the PartitionQueryRequest used to create the - // partition tokens and the ExecuteSqlRequests that use the partition tokens. - // - // Partition tokens become invalid when the session used to create them - // is deleted, is idle for too long, begins a new transaction, or becomes too - // old. When any of these happen, it is not possible to resume the query, and - // the whole operation must be restarted from the beginning. - rpc PartitionQuery(PartitionQueryRequest) returns (PartitionResponse) { - option (google.api.http) = { - post: "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:partitionQuery" - body: "*" - }; - } - - // Creates a set of partition tokens that can be used to execute a read - // operation in parallel. Each of the returned partition tokens can be used - // by [StreamingRead][google.spanner.v1.Spanner.StreamingRead] to specify a subset of the read - // result to read. The same session and read-only transaction must be used by - // the PartitionReadRequest used to create the partition tokens and the - // ReadRequests that use the partition tokens. There are no ordering - // guarantees on rows returned among the returned partition tokens, or even - // within each individual StreamingRead call issued with a partition_token. - // - // Partition tokens become invalid when the session used to create them - // is deleted, is idle for too long, begins a new transaction, or becomes too - // old. When any of these happen, it is not possible to resume the read, and - // the whole operation must be restarted from the beginning. - rpc PartitionRead(PartitionReadRequest) returns (PartitionResponse) { - option (google.api.http) = { - post: "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:partitionRead" - body: "*" - }; - } -} - -// The request for [CreateSession][google.spanner.v1.Spanner.CreateSession]. -message CreateSessionRequest { - // Required. The database in which the new session is created. - string database = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Database" - } - ]; - - // The session to create. - Session session = 2; -} - -// The request for [BatchCreateSessions][google.spanner.v1.Spanner.BatchCreateSessions]. -message BatchCreateSessionsRequest { - // Required. The database in which the new sessions are created. - string database = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Database" - } - ]; - - // Parameters to be applied to each created session. - Session session_template = 2; - - // Required. The number of sessions to be created in this batch call. - // The API may return fewer than the requested number of sessions. If a - // specific number of sessions are desired, the client can make additional - // calls to BatchCreateSessions (adjusting - // [session_count][google.spanner.v1.BatchCreateSessionsRequest.session_count] as necessary). - int32 session_count = 3 [(google.api.field_behavior) = REQUIRED]; -} - -// The response for [BatchCreateSessions][google.spanner.v1.Spanner.BatchCreateSessions]. -message BatchCreateSessionsResponse { - // The freshly created sessions. - repeated Session session = 1; -} - -// A session in the Cloud Spanner API. 
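A sketch of the partitioned-query workflow described above, using the client's ``BatchSnapshot`` wrapper; resource names are placeholders, and in practice each batch would typically be handed to a separate worker process:

.. code-block:: python

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    # Generate partition tokens over a read-only snapshot, then process
    # each partition; all batches share the same snapshot timestamp.
    snapshot = database.batch_snapshot()
    for batch in snapshot.generate_query_batches("SELECT UserId FROM Users"):
        for row in snapshot.process_query_batch(batch):
            print(row)
    snapshot.close()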
-message Session { - option (google.api.resource) = { - type: "spanner.googleapis.com/Session" - pattern: "projects/{project}/instances/{instance}/databases/{database}/sessions/{session}" - }; - - // The name of the session. This is always system-assigned; values provided - // when creating a session are ignored. - string name = 1; - - // The labels for the session. - // - // * Label keys must be between 1 and 63 characters long and must conform to - // the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`. - // * Label values must be between 0 and 63 characters long and must conform - // to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`. - // * No more than 64 labels can be associated with a given session. - // - // See https://goo.gl/xmQnxf for more information on and examples of labels. - map labels = 2; - - // Output only. The timestamp when the session is created. - google.protobuf.Timestamp create_time = 3; - - // Output only. The approximate timestamp when the session is last used. It is - // typically earlier than the actual last use time. - google.protobuf.Timestamp approximate_last_use_time = 4; -} - -// The request for [GetSession][google.spanner.v1.Spanner.GetSession]. -message GetSessionRequest { - // Required. The name of the session to retrieve. - string name = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Session" - } - ]; -} - -// The request for [ListSessions][google.spanner.v1.Spanner.ListSessions]. -message ListSessionsRequest { - // Required. The database in which to list sessions. - string database = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Database" - } - ]; - - // Number of sessions to be returned in the response. If 0 or less, defaults - // to the server's maximum allowed page size. - int32 page_size = 2; - - // If non-empty, `page_token` should contain a - // [next_page_token][google.spanner.v1.ListSessionsResponse.next_page_token] from a previous - // [ListSessionsResponse][google.spanner.v1.ListSessionsResponse]. - string page_token = 3; - - // An expression for filtering the results of the request. Filter rules are - // case insensitive. The fields eligible for filtering are: - // - // * `labels.key` where key is the name of a label - // - // Some examples of using filters are: - // - // * `labels.env:*` --> The session has the label "env". - // * `labels.env:dev` --> The session has the label "env" and the value of - // the label contains the string "dev". - string filter = 4; -} - -// The response for [ListSessions][google.spanner.v1.Spanner.ListSessions]. -message ListSessionsResponse { - // The list of requested sessions. - repeated Session sessions = 1; - - // `next_page_token` can be sent in a subsequent - // [ListSessions][google.spanner.v1.Spanner.ListSessions] call to fetch more of the matching - // sessions. - string next_page_token = 2; -} - -// The request for [DeleteSession][google.spanner.v1.Spanner.DeleteSession]. -message DeleteSessionRequest { - // Required. The name of the session to delete. - string name = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Session" - } - ]; -} - -// The request for [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] and -// [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql]. -message ExecuteSqlRequest { - // Query optimizer configuration. 
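The session-label rules quoted above are easy to check locally before issuing a request; a small sketch of the key rule (the value rule differs only in also allowing the empty string):

.. code-block:: python

    import re

    # The label-key pattern quoted above: [a-z]([-a-z0-9]*[a-z0-9])?
    LABEL_KEY = re.compile(r"[a-z]([-a-z0-9]*[a-z0-9])?")

    def valid_label_key(key):
        return 1 <= len(key) <= 63 and LABEL_KEY.fullmatch(key) is not None

    assert valid_label_key("env")
    assert valid_label_key("env-2")
    assert not valid_label_key("Env")    # must start with a lowercase letter
    assert not valid_label_key("env-")   # must not end with a hyphen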
- message QueryOptions { - // An option to control the selection of optimizer version. - // - // This parameter allows individual queries to pick different query - // optimizer versions. - // - // Specifying "latest" as a value instructs Cloud Spanner to use the - // latest supported query optimizer version. If not specified, Cloud Spanner - // uses optimizer version set at the database level options. Any other - // positive integer (from the list of supported optimizer versions) - // overrides the default optimizer version for query execution. - // The list of supported optimizer versions can be queried from - // SPANNER_SYS.SUPPORTED_OPTIMIZER_VERSIONS. Executing a SQL statement - // with an invalid optimizer version will fail with a syntax error - // (`INVALID_ARGUMENT`) status. - // - // The `optimizer_version` statement hint has precedence over this setting. - string optimizer_version = 1; - } - - // Mode in which the statement must be processed. - enum QueryMode { - // The default mode. Only the statement results are returned. - NORMAL = 0; - - // This mode returns only the query plan, without any results or - // execution statistics information. - PLAN = 1; - - // This mode returns both the query plan and the execution statistics along - // with the results. - PROFILE = 2; - } - - // Required. The session in which the SQL query should be performed. - string session = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Session" - } - ]; - - // The transaction to use. - // - // For queries, if none is provided, the default is a temporary read-only - // transaction with strong concurrency. - // - // Standard DML statements require a read-write transaction. To protect - // against replays, single-use transactions are not supported. The caller - // must either supply an existing transaction ID or begin a new transaction. - // - // Partitioned DML requires an existing Partitioned DML transaction ID. - TransactionSelector transaction = 2; - - // Required. The SQL string. - string sql = 3 [(google.api.field_behavior) = REQUIRED]; - - // Parameter names and values that bind to placeholders in the SQL string. - // - // A parameter placeholder consists of the `@` character followed by the - // parameter name (for example, `@firstName`). Parameter names can contain - // letters, numbers, and underscores. - // - // Parameters can appear anywhere that a literal value is expected. The same - // parameter name can be used more than once, for example: - // - // `"WHERE id > @msg_id AND id < @msg_id + 100"` - // - // It is an error to execute a SQL statement with unbound parameters. - google.protobuf.Struct params = 4; - - // It is not always possible for Cloud Spanner to infer the right SQL type - // from a JSON value. For example, values of type `BYTES` and values - // of type `STRING` both appear in [params][google.spanner.v1.ExecuteSqlRequest.params] as JSON strings. - // - // In these cases, `param_types` can be used to specify the exact - // SQL type for some or all of the SQL statement parameters. See the - // definition of [Type][google.spanner.v1.Type] for more information - // about SQL types. - map param_types = 5; - - // If this request is resuming a previously interrupted SQL statement - // execution, `resume_token` should be copied from the last - // [PartialResultSet][google.spanner.v1.PartialResultSet] yielded before the interruption. 
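The ``params`` and ``param_types`` fields described above surface directly in the Python client; a sketch with placeholder resource names, pinning the SQL type explicitly because ``BYTES`` and ``STRING`` values are indistinguishable as JSON strings:

.. code-block:: python

    from google.cloud import spanner
    from google.cloud.spanner_v1 import param_types

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    with database.snapshot() as snapshot:
        results = snapshot.execute_sql(
            "SELECT UserId FROM Users WHERE UserName = @name",
            params={"name": "alice"},
            # Pin the type so the backend does not have to infer it.
            param_types={"name": param_types.STRING},
        )
        for row in results:
            print(row)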
Doing this - // enables the new SQL statement execution to resume where the last one left - // off. The rest of the request parameters must exactly match the - // request that yielded this token. - bytes resume_token = 6; - - // Used to control the amount of debugging information returned in - // [ResultSetStats][google.spanner.v1.ResultSetStats]. If [partition_token][google.spanner.v1.ExecuteSqlRequest.partition_token] is set, [query_mode][google.spanner.v1.ExecuteSqlRequest.query_mode] can only - // be set to [QueryMode.NORMAL][google.spanner.v1.ExecuteSqlRequest.QueryMode.NORMAL]. - QueryMode query_mode = 7; - - // If present, results will be restricted to the specified partition - // previously created using PartitionQuery(). There must be an exact - // match for the values of fields common to this message and the - // PartitionQueryRequest message used to create this partition_token. - bytes partition_token = 8; - - // A per-transaction sequence number used to identify this request. This field - // makes each request idempotent such that if the request is received multiple - // times, at most one will succeed. - // - // The sequence number must be monotonically increasing within the - // transaction. If a request arrives for the first time with an out-of-order - // sequence number, the transaction may be aborted. Replays of previously - // handled requests will yield the same response as the first execution. - // - // Required for DML statements. Ignored for queries. - int64 seqno = 9; - - // Query optimizer configuration to use for the given query. - QueryOptions query_options = 10; -} - -// The request for [ExecuteBatchDml][google.spanner.v1.Spanner.ExecuteBatchDml]. -message ExecuteBatchDmlRequest { - // A single DML statement. - message Statement { - // Required. The DML string. - string sql = 1; - - // Parameter names and values that bind to placeholders in the DML string. - // - // A parameter placeholder consists of the `@` character followed by the - // parameter name (for example, `@firstName`). Parameter names can contain - // letters, numbers, and underscores. - // - // Parameters can appear anywhere that a literal value is expected. The - // same parameter name can be used more than once, for example: - // - // `"WHERE id > @msg_id AND id < @msg_id + 100"` - // - // It is an error to execute a SQL statement with unbound parameters. - google.protobuf.Struct params = 2; - - // It is not always possible for Cloud Spanner to infer the right SQL type - // from a JSON value. For example, values of type `BYTES` and values - // of type `STRING` both appear in [params][google.spanner.v1.ExecuteBatchDmlRequest.Statement.params] as JSON strings. - // - // In these cases, `param_types` can be used to specify the exact - // SQL type for some or all of the SQL statement parameters. See the - // definition of [Type][google.spanner.v1.Type] for more information - // about SQL types. - map param_types = 3; - } - - // Required. The session in which the DML statements should be performed. - string session = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Session" - } - ]; - - // Required. The transaction to use. Must be a read-write transaction. - // - // To protect against replays, single-use transactions are not supported. The - // caller must either supply an existing transaction ID or begin a new - // transaction. - TransactionSelector transaction = 2 [(google.api.field_behavior) = REQUIRED]; - - // Required. 
The list of statements to execute in this batch. Statements are executed - // serially, such that the effects of statement `i` are visible to statement - // `i+1`. Each statement must be a DML statement. Execution stops at the - // first failed statement; the remaining statements are not executed. - // - // Callers must provide at least one statement. - repeated Statement statements = 3 [(google.api.field_behavior) = REQUIRED]; - - // Required. A per-transaction sequence number used to identify this request. This field - // makes each request idempotent such that if the request is received multiple - // times, at most one will succeed. - // - // The sequence number must be monotonically increasing within the - // transaction. If a request arrives for the first time with an out-of-order - // sequence number, the transaction may be aborted. Replays of previously - // handled requests will yield the same response as the first execution. - int64 seqno = 4 [(google.api.field_behavior) = REQUIRED]; -} - -// The response for [ExecuteBatchDml][google.spanner.v1.Spanner.ExecuteBatchDml]. Contains a list -// of [ResultSet][google.spanner.v1.ResultSet] messages, one for each DML statement that has successfully -// executed, in the same order as the statements in the request. If a statement -// fails, the status in the response body identifies the cause of the failure. -// -// To check for DML statements that failed, use the following approach: -// -// 1. Check the status in the response message. The [google.rpc.Code][google.rpc.Code] enum -// value `OK` indicates that all statements were executed successfully. -// 2. If the status was not `OK`, check the number of result sets in the -// response. If the response contains `N` [ResultSet][google.spanner.v1.ResultSet] messages, then -// statement `N+1` in the request failed. -// -// Example 1: -// -// * Request: 5 DML statements, all executed successfully. -// * Response: 5 [ResultSet][google.spanner.v1.ResultSet] messages, with the status `OK`. -// -// Example 2: -// -// * Request: 5 DML statements. The third statement has a syntax error. -// * Response: 2 [ResultSet][google.spanner.v1.ResultSet] messages, and a syntax error (`INVALID_ARGUMENT`) -// status. The number of [ResultSet][google.spanner.v1.ResultSet] messages indicates that the third -// statement failed, and the fourth and fifth statements were not executed. -message ExecuteBatchDmlResponse { - // One [ResultSet][google.spanner.v1.ResultSet] for each statement in the request that ran successfully, - // in the same order as the statements in the request. Each [ResultSet][google.spanner.v1.ResultSet] does - // not contain any rows. The [ResultSetStats][google.spanner.v1.ResultSetStats] in each [ResultSet][google.spanner.v1.ResultSet] contain - // the number of rows modified by the statement. - // - // Only the first [ResultSet][google.spanner.v1.ResultSet] in the response contains valid - // [ResultSetMetadata][google.spanner.v1.ResultSetMetadata]. - repeated ResultSet result_sets = 1; - - // If all DML statements are executed successfully, the status is `OK`. - // Otherwise, the error status of the first failed statement. - google.rpc.Status status = 2; -} - -// Options for a PartitionQueryRequest and -// PartitionReadRequest. -message PartitionOptions { - // **Note:** This hint is currently ignored by PartitionQuery and - // PartitionRead requests. - // - // The desired data size for each partition generated. The default for this - // option is currently 1 GiB. This is only a hint. 
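The failure-checking recipe above maps onto the Python client's ``Transaction.batch_update``, which returns the status together with the per-statement row counts; a sketch with placeholder resources and statements:

.. code-block:: python

    from google.cloud import spanner
    from google.rpc import code_pb2

    database = spanner.Client().instance("my-instance").database("my-database")

    def insert_users(transaction):
        statements = [
            "INSERT INTO Users (UserId, UserName) VALUES (1, 'alice')",
            "INSERT INTO Users (UserId, UserName) VALUES (2, 'bob')",
        ]
        status, row_counts = transaction.batch_update(statements)
        if status.code != code_pb2.OK:
            # Per the recipe above: N row counts mean statement N + 1 failed.
            raise RuntimeError(
                "statement %d failed: %s" % (len(row_counts) + 1, status.message)
            )

    database.run_in_transaction(insert_users)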
The actual size of each - // partition may be smaller or larger than this size request. - int64 partition_size_bytes = 1; - - // **Note:** This hint is currently ignored by PartitionQuery and - // PartitionRead requests. - // - // The desired maximum number of partitions to return. For example, this may - // be set to the number of workers available. The default for this option - // is currently 10,000. The maximum value is currently 200,000. This is only - // a hint. The actual number of partitions returned may be smaller or larger - // than this maximum count request. - int64 max_partitions = 2; -} - -// The request for [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] -message PartitionQueryRequest { - // Required. The session used to create the partitions. - string session = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Session" - } - ]; - - // Read only snapshot transactions are supported, read/write and single use - // transactions are not. - TransactionSelector transaction = 2; - - // Required. The query request to generate partitions for. The request will fail if - // the query is not root partitionable. The query plan of a root - // partitionable query has a single distributed union operator. A distributed - // union operator conceptually divides one or more tables into multiple - // splits, remotely evaluates a subquery independently on each split, and - // then unions all results. - // - // This must not contain DML commands, such as INSERT, UPDATE, or - // DELETE. Use [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql] with a - // PartitionedDml transaction for large, partition-friendly DML operations. - string sql = 3 [(google.api.field_behavior) = REQUIRED]; - - // Parameter names and values that bind to placeholders in the SQL string. - // - // A parameter placeholder consists of the `@` character followed by the - // parameter name (for example, `@firstName`). Parameter names can contain - // letters, numbers, and underscores. - // - // Parameters can appear anywhere that a literal value is expected. The same - // parameter name can be used more than once, for example: - // - // `"WHERE id > @msg_id AND id < @msg_id + 100"` - // - // It is an error to execute a SQL statement with unbound parameters. - google.protobuf.Struct params = 4; - - // It is not always possible for Cloud Spanner to infer the right SQL type - // from a JSON value. For example, values of type `BYTES` and values - // of type `STRING` both appear in [params][google.spanner.v1.PartitionQueryRequest.params] as JSON strings. - // - // In these cases, `param_types` can be used to specify the exact - // SQL type for some or all of the SQL query parameters. See the - // definition of [Type][google.spanner.v1.Type] for more information - // about SQL types. - map param_types = 5; - - // Additional options that affect how many partitions are created. - PartitionOptions partition_options = 6; -} - -// The request for [PartitionRead][google.spanner.v1.Spanner.PartitionRead] -message PartitionReadRequest { - // Required. The session used to create the partitions. - string session = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Session" - } - ]; - - // Read only snapshot transactions are supported, read/write and single use - // transactions are not. - TransactionSelector transaction = 2; - - // Required. 
The name of the table in the database to be read. - string table = 3 [(google.api.field_behavior) = REQUIRED]; - - // If non-empty, the name of an index on [table][google.spanner.v1.PartitionReadRequest.table]. This index is - // used instead of the table primary key when interpreting [key_set][google.spanner.v1.PartitionReadRequest.key_set] - // and sorting result rows. See [key_set][google.spanner.v1.PartitionReadRequest.key_set] for further information. - string index = 4; - - // The columns of [table][google.spanner.v1.PartitionReadRequest.table] to be returned for each row matching - // this request. - repeated string columns = 5; - - // Required. `key_set` identifies the rows to be yielded. `key_set` names the - // primary keys of the rows in [table][google.spanner.v1.PartitionReadRequest.table] to be yielded, unless [index][google.spanner.v1.PartitionReadRequest.index] - // is present. If [index][google.spanner.v1.PartitionReadRequest.index] is present, then [key_set][google.spanner.v1.PartitionReadRequest.key_set] instead names - // index keys in [index][google.spanner.v1.PartitionReadRequest.index]. - // - // It is not an error for the `key_set` to name rows that do not - // exist in the database. Read yields nothing for nonexistent rows. - KeySet key_set = 6 [(google.api.field_behavior) = REQUIRED]; - - // Additional options that affect how many partitions are created. - PartitionOptions partition_options = 9; -} - -// Information returned for each partition returned in a -// PartitionResponse. -message Partition { - // This token can be passed to Read, StreamingRead, ExecuteSql, or - // ExecuteStreamingSql requests to restrict the results to those identified by - // this partition token. - bytes partition_token = 1; -} - -// The response for [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] -// or [PartitionRead][google.spanner.v1.Spanner.PartitionRead] -message PartitionResponse { - // Partitions created by this request. - repeated Partition partitions = 1; - - // Transaction created by this request. - Transaction transaction = 2; -} - -// The request for [Read][google.spanner.v1.Spanner.Read] and -// [StreamingRead][google.spanner.v1.Spanner.StreamingRead]. -message ReadRequest { - // Required. The session in which the read should be performed. - string session = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Session" - } - ]; - - // The transaction to use. If none is provided, the default is a - // temporary read-only transaction with strong concurrency. - TransactionSelector transaction = 2; - - // Required. The name of the table in the database to be read. - string table = 3 [(google.api.field_behavior) = REQUIRED]; - - // If non-empty, the name of an index on [table][google.spanner.v1.ReadRequest.table]. This index is - // used instead of the table primary key when interpreting [key_set][google.spanner.v1.ReadRequest.key_set] - // and sorting result rows. See [key_set][google.spanner.v1.ReadRequest.key_set] for further information. - string index = 4; - - // Required. The columns of [table][google.spanner.v1.ReadRequest.table] to be returned for each row matching - // this request. - repeated string columns = 5 [(google.api.field_behavior) = REQUIRED]; - - // Required. `key_set` identifies the rows to be yielded. `key_set` names the - // primary keys of the rows in [table][google.spanner.v1.ReadRequest.table] to be yielded, unless [index][google.spanner.v1.ReadRequest.index] - // is present. 
If [index][google.spanner.v1.ReadRequest.index] is present, then [key_set][google.spanner.v1.ReadRequest.key_set] instead names - // index keys in [index][google.spanner.v1.ReadRequest.index]. - // - // If the [partition_token][google.spanner.v1.ReadRequest.partition_token] field is empty, rows are yielded - // in table primary key order (if [index][google.spanner.v1.ReadRequest.index] is empty) or index key order - // (if [index][google.spanner.v1.ReadRequest.index] is non-empty). If the [partition_token][google.spanner.v1.ReadRequest.partition_token] field is not - // empty, rows will be yielded in an unspecified order. - // - // It is not an error for the `key_set` to name rows that do not - // exist in the database. Read yields nothing for nonexistent rows. - KeySet key_set = 6 [(google.api.field_behavior) = REQUIRED]; - - // If greater than zero, only the first `limit` rows are yielded. If `limit` - // is zero, the default is no limit. A limit cannot be specified if - // `partition_token` is set. - int64 limit = 8; - - // If this request is resuming a previously interrupted read, - // `resume_token` should be copied from the last - // [PartialResultSet][google.spanner.v1.PartialResultSet] yielded before the interruption. Doing this - // enables the new read to resume where the last read left off. The - // rest of the request parameters must exactly match the request - // that yielded this token. - bytes resume_token = 9; - - // If present, results will be restricted to the specified partition - // previously created using PartitionRead(). There must be an exact - // match for the values of fields common to this message and the - // PartitionReadRequest message used to create this partition_token. - bytes partition_token = 10; -} - -// The request for [BeginTransaction][google.spanner.v1.Spanner.BeginTransaction]. -message BeginTransactionRequest { - // Required. The session in which the transaction runs. - string session = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Session" - } - ]; - - // Required. Options for the new transaction. - TransactionOptions options = 2 [(google.api.field_behavior) = REQUIRED]; -} - -// The request for [Commit][google.spanner.v1.Spanner.Commit]. -message CommitRequest { - // Required. The session in which the transaction to be committed is running. - string session = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Session" - } - ]; - - // Required. The transaction in which to commit. - oneof transaction { - // Commit a previously-started transaction. - bytes transaction_id = 2; - - // Execute mutations in a temporary transaction. Note that unlike - // commit of a previously-started transaction, commit with a - // temporary transaction is non-idempotent. That is, if the - // `CommitRequest` is sent to Cloud Spanner more than once (for - // instance, due to retries in the application, or in the - // transport library), it is possible that the mutations are - // executed more than once. If this is undesirable, use - // [BeginTransaction][google.spanner.v1.Spanner.BeginTransaction] and - // [Commit][google.spanner.v1.Spanner.Commit] instead. - TransactionOptions single_use_transaction = 3; - } - - // The mutations to be executed when this transaction commits. All - // mutations are applied atomically, in the order they appear in - // this list. 
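That atomic, in-order application of mutations is what the Python client's ``database.batch()`` context manager exposes; a sketch with placeholder names, committing via a single-use transaction when the block exits:

.. code-block:: python

    from google.cloud import spanner

    database = spanner.Client().instance("my-instance").database("my-database")

    with database.batch() as batch:
        batch.insert(
            table="Users",
            columns=("UserId", "UserName"),
            values=[(1, "alice"), (2, "bob")],
        )
        batch.update(
            table="Users",
            columns=("UserId", "UserName"),
            values=[(1, "alice2")],
        )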
- repeated Mutation mutations = 4; -} - -// The response for [Commit][google.spanner.v1.Spanner.Commit]. -message CommitResponse { - // The Cloud Spanner timestamp at which the transaction committed. - google.protobuf.Timestamp commit_timestamp = 1; -} - -// The request for [Rollback][google.spanner.v1.Spanner.Rollback]. -message RollbackRequest { - // Required. The session in which the transaction to roll back is running. - string session = 1 [ - (google.api.field_behavior) = REQUIRED, - (google.api.resource_reference) = { - type: "spanner.googleapis.com/Session" - } - ]; - - // Required. The transaction to roll back. - bytes transaction_id = 2 [(google.api.field_behavior) = REQUIRED]; -} diff --git a/google/cloud/spanner_v1/proto/spanner_database_admin.proto b/google/cloud/spanner_v1/proto/spanner_database_admin.proto deleted file mode 100644 index 56dbff19e1..0000000000 --- a/google/cloud/spanner_v1/proto/spanner_database_admin.proto +++ /dev/null @@ -1,302 +0,0 @@ -// Copyright 2018 Google LLC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -syntax = "proto3"; - -package google.spanner.admin.database.v1; - -import "google/api/annotations.proto"; -import "google/iam/v1/iam_policy.proto"; -import "google/iam/v1/policy.proto"; -import "google/longrunning/operations.proto"; -import "google/protobuf/empty.proto"; -import "google/protobuf/timestamp.proto"; - -option csharp_namespace = "Google.Cloud.Spanner.Admin.Database.V1"; -option go_package = "google.golang.org/genproto/googleapis/spanner/admin/database/v1;database"; -option java_multiple_files = true; -option java_outer_classname = "SpannerDatabaseAdminProto"; -option java_package = "com.google.spanner.admin.database.v1"; -option php_namespace = "Google\\Cloud\\Spanner\\Admin\\Database\\V1"; - - -// Cloud Spanner Database Admin API -// -// The Cloud Spanner Database Admin API can be used to create, drop, and -// list databases. It also enables updating the schema of pre-existing -// databases. -service DatabaseAdmin { - // Lists Cloud Spanner databases. - rpc ListDatabases(ListDatabasesRequest) returns (ListDatabasesResponse) { - option (google.api.http) = { - get: "/v1/{parent=projects/*/instances/*}/databases" - }; - } - - // Creates a new Cloud Spanner database and starts to prepare it for serving. - // The returned [long-running operation][google.longrunning.Operation] will - // have a name of the format `/operations/` and - // can be used to track preparation of the database. The - // [metadata][google.longrunning.Operation.metadata] field type is - // [CreateDatabaseMetadata][google.spanner.admin.database.v1.CreateDatabaseMetadata]. The - // [response][google.longrunning.Operation.response] field type is - // [Database][google.spanner.admin.database.v1.Database], if successful. - rpc CreateDatabase(CreateDatabaseRequest) returns (google.longrunning.Operation) { - option (google.api.http) = { - post: "/v1/{parent=projects/*/instances/*}/databases" - body: "*" - }; - } - - // Gets the state of a Cloud Spanner database. 
- rpc GetDatabase(GetDatabaseRequest) returns (Database) { - option (google.api.http) = { - get: "/v1/{name=projects/*/instances/*/databases/*}" - }; - } - - // Updates the schema of a Cloud Spanner database by - // creating/altering/dropping tables, columns, indexes, etc. The returned - // [long-running operation][google.longrunning.Operation] will have a name of - // the format `/operations/` and can be used to - // track execution of the schema change(s). The - // [metadata][google.longrunning.Operation.metadata] field type is - // [UpdateDatabaseDdlMetadata][google.spanner.admin.database.v1.UpdateDatabaseDdlMetadata]. The operation has no response. - rpc UpdateDatabaseDdl(UpdateDatabaseDdlRequest) returns (google.longrunning.Operation) { - option (google.api.http) = { - patch: "/v1/{database=projects/*/instances/*/databases/*}/ddl" - body: "*" - }; - } - - // Drops (aka deletes) a Cloud Spanner database. - rpc DropDatabase(DropDatabaseRequest) returns (google.protobuf.Empty) { - option (google.api.http) = { - delete: "/v1/{database=projects/*/instances/*/databases/*}" - }; - } - - // Returns the schema of a Cloud Spanner database as a list of formatted - // DDL statements. This method does not show pending schema updates, those may - // be queried using the [Operations][google.longrunning.Operations] API. - rpc GetDatabaseDdl(GetDatabaseDdlRequest) returns (GetDatabaseDdlResponse) { - option (google.api.http) = { - get: "/v1/{database=projects/*/instances/*/databases/*}/ddl" - }; - } - - // Sets the access control policy on a database resource. Replaces any - // existing policy. - // - // Authorization requires `spanner.databases.setIamPolicy` permission on - // [resource][google.iam.v1.SetIamPolicyRequest.resource]. - rpc SetIamPolicy(google.iam.v1.SetIamPolicyRequest) returns (google.iam.v1.Policy) { - option (google.api.http) = { - post: "/v1/{resource=projects/*/instances/*/databases/*}:setIamPolicy" - body: "*" - }; - } - - // Gets the access control policy for a database resource. Returns an empty - // policy if a database exists but does not have a policy set. - // - // Authorization requires `spanner.databases.getIamPolicy` permission on - // [resource][google.iam.v1.GetIamPolicyRequest.resource]. - rpc GetIamPolicy(google.iam.v1.GetIamPolicyRequest) returns (google.iam.v1.Policy) { - option (google.api.http) = { - post: "/v1/{resource=projects/*/instances/*/databases/*}:getIamPolicy" - body: "*" - }; - } - - // Returns permissions that the caller has on the specified database resource. - // - // Attempting this RPC on a non-existent Cloud Spanner database will result in - // a NOT_FOUND error if the user has `spanner.databases.list` permission on - // the containing Cloud Spanner instance. Otherwise returns an empty set of - // permissions. - rpc TestIamPermissions(google.iam.v1.TestIamPermissionsRequest) returns (google.iam.v1.TestIamPermissionsResponse) { - option (google.api.http) = { - post: "/v1/{resource=projects/*/instances/*/databases/*}:testIamPermissions" - body: "*" - }; - } -} - -// A Cloud Spanner database. -message Database { - // Indicates the current state of the database. - enum State { - // Not specified. - STATE_UNSPECIFIED = 0; - - // The database is still being created. Operations on the database may fail - // with `FAILED_PRECONDITION` in this state. - CREATING = 1; - - // The database is fully created and ready for use. - READY = 2; - } - - // Required. The name of the database. 
Values are of the form - // `projects//instances//databases/`, - // where `` is as specified in the `CREATE DATABASE` - // statement. This name can be passed to other API methods to - // identify the database. - string name = 1; - - // Output only. The current database state. - State state = 2; -} - -// The request for [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases]. -message ListDatabasesRequest { - // Required. The instance whose databases should be listed. - // Values are of the form `projects//instances/`. - string parent = 1; - - // Number of databases to be returned in the response. If 0 or less, - // defaults to the server's maximum allowed page size. - int32 page_size = 3; - - // If non-empty, `page_token` should contain a - // [next_page_token][google.spanner.admin.database.v1.ListDatabasesResponse.next_page_token] from a - // previous [ListDatabasesResponse][google.spanner.admin.database.v1.ListDatabasesResponse]. - string page_token = 4; -} - -// The response for [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases]. -message ListDatabasesResponse { - // Databases that matched the request. - repeated Database databases = 1; - - // `next_page_token` can be sent in a subsequent - // [ListDatabases][google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases] call to fetch more - // of the matching databases. - string next_page_token = 2; -} - -// The request for [CreateDatabase][google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase]. -message CreateDatabaseRequest { - // Required. The name of the instance that will serve the new database. - // Values are of the form `projects//instances/`. - string parent = 1; - - // Required. A `CREATE DATABASE` statement, which specifies the ID of the - // new database. The database ID must conform to the regular expression - // `[a-z][a-z0-9_\-]*[a-z0-9]` and be between 2 and 30 characters in length. - // If the database ID is a reserved word or if it contains a hyphen, the - // database ID must be enclosed in backticks (`` ` ``). - string create_statement = 2; - - // An optional list of DDL statements to run inside the newly created - // database. Statements can create tables, indexes, etc. These - // statements execute atomically with the creation of the database: - // if there is an error in any statement, the database is not created. - repeated string extra_statements = 3; -} - -// Metadata type for the operation returned by -// [CreateDatabase][google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase]. -message CreateDatabaseMetadata { - // The database being created. - string database = 1; -} - -// The request for [GetDatabase][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabase]. -message GetDatabaseRequest { - // Required. The name of the requested database. Values are of the form - // `projects//instances//databases/`. - string name = 1; -} - -// Enqueues the given DDL statements to be applied, in order but not -// necessarily all at once, to the database schema at some point (or -// points) in the future. The server checks that the statements -// are executable (syntactically valid, name tables that exist, etc.) -// before enqueueing them, but they may still fail upon -// later execution (e.g., if a statement from another batch of -// statements is applied first and it conflicts in some way, or if -// there is some data-related problem like a `NULL` value in a column to -// which `NOT NULL` would be added). 
If a statement fails, all -// subsequent statements in the batch are automatically cancelled. -// -// Each batch of statements is assigned a name which can be used with -// the [Operations][google.longrunning.Operations] API to monitor -// progress. See the -// [operation_id][google.spanner.admin.database.v1.UpdateDatabaseDdlRequest.operation_id] field for more -// details. -message UpdateDatabaseDdlRequest { - // Required. The database to update. - string database = 1; - - // DDL statements to be applied to the database. - repeated string statements = 2; - - // If empty, the new update request is assigned an - // automatically-generated operation ID. Otherwise, `operation_id` - // is used to construct the name of the resulting - // [Operation][google.longrunning.Operation]. - // - // Specifying an explicit operation ID simplifies determining - // whether the statements were executed in the event that the - // [UpdateDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabaseDdl] call is replayed, - // or the return value is otherwise lost: the [database][google.spanner.admin.database.v1.UpdateDatabaseDdlRequest.database] and - // `operation_id` fields can be combined to form the - // [name][google.longrunning.Operation.name] of the resulting - // [longrunning.Operation][google.longrunning.Operation]: `/operations/`. - // - // `operation_id` should be unique within the database, and must be - // a valid identifier: `[a-z][a-z0-9_]*`. Note that - // automatically-generated operation IDs always begin with an - // underscore. If the named operation already exists, - // [UpdateDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabaseDdl] returns - // `ALREADY_EXISTS`. - string operation_id = 3; -} - -// Metadata type for the operation returned by -// [UpdateDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabaseDdl]. -message UpdateDatabaseDdlMetadata { - // The database being modified. - string database = 1; - - // For an update this list contains all the statements. For an - // individual statement, this list contains only that statement. - repeated string statements = 2; - - // Reports the commit timestamps of all statements that have - // succeeded so far, where `commit_timestamps[i]` is the commit - // timestamp for the statement `statements[i]`. - repeated google.protobuf.Timestamp commit_timestamps = 3; -} - -// The request for [DropDatabase][google.spanner.admin.database.v1.DatabaseAdmin.DropDatabase]. -message DropDatabaseRequest { - // Required. The database to be dropped. - string database = 1; -} - -// The request for [GetDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl]. -message GetDatabaseDdlRequest { - // Required. The database whose schema we wish to get. - string database = 1; -} - -// The response for [GetDatabaseDdl][google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl]. -message GetDatabaseDdlResponse { - // A list of formatted DDL statements defining the schema of the database - // specified in the request. 
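The ``operation_id`` behavior described above surfaces in the Python client as an argument to ``Database.update_ddl``; a sketch (resource, column, and operation names are placeholders):

.. code-block:: python

    from google.cloud import spanner

    database = spanner.Client().instance("my-instance").database("my-database")

    operation = database.update_ddl(
        ["ALTER TABLE Users ADD COLUMN Email STRING(256)"],
        operation_id="add_email_column",  # optional; must match [a-z][a-z0-9_]*
    )
    operation.result()  # block until the enqueued statements have been applied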
- repeated string statements = 1; -} diff --git a/google/cloud/spanner_v1/proto/spanner_instance_admin.proto b/google/cloud/spanner_v1/proto/spanner_instance_admin.proto deleted file mode 100644 index e960e5428e..0000000000 --- a/google/cloud/spanner_v1/proto/spanner_instance_admin.proto +++ /dev/null @@ -1,475 +0,0 @@ -// Copyright 2018 Google LLC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -syntax = "proto3"; - -package google.spanner.admin.instance.v1; - -import "google/api/annotations.proto"; -import "google/iam/v1/iam_policy.proto"; -import "google/iam/v1/policy.proto"; -import "google/longrunning/operations.proto"; -import "google/protobuf/empty.proto"; -import "google/protobuf/field_mask.proto"; -import "google/protobuf/timestamp.proto"; - -option csharp_namespace = "Google.Cloud.Spanner.Admin.Instance.V1"; -option go_package = "google.golang.org/genproto/googleapis/spanner/admin/instance/v1;instance"; -option java_multiple_files = true; -option java_outer_classname = "SpannerInstanceAdminProto"; -option java_package = "com.google.spanner.admin.instance.v1"; -option php_namespace = "Google\\Cloud\\Spanner\\Admin\\Instance\\V1"; - - -// Cloud Spanner Instance Admin API -// -// The Cloud Spanner Instance Admin API can be used to create, delete, -// modify and list instances. Instances are dedicated Cloud Spanner serving -// and storage resources to be used by Cloud Spanner databases. -// -// Each instance has a "configuration", which dictates where the -// serving resources for the Cloud Spanner instance are located (e.g., -// US-central, Europe). Configurations are created by Google based on -// resource availability. -// -// Cloud Spanner billing is based on the instances that exist and their -// sizes. After an instance exists, there are no additional -// per-database or per-operation charges for use of the instance -// (though there may be additional network bandwidth charges). -// Instances offer isolation: problems with databases in one instance -// will not affect other instances. However, within an instance -// databases can affect each other. For example, if one database in an -// instance receives a lot of requests and consumes most of the -// instance resources, fewer resources are available for other -// databases in that instance, and their performance may suffer. -service InstanceAdmin { - // Lists the supported instance configurations for a given project. - rpc ListInstanceConfigs(ListInstanceConfigsRequest) returns (ListInstanceConfigsResponse) { - option (google.api.http) = { - get: "/v1/{parent=projects/*}/instanceConfigs" - }; - } - - // Gets information about a particular instance configuration. - rpc GetInstanceConfig(GetInstanceConfigRequest) returns (InstanceConfig) { - option (google.api.http) = { - get: "/v1/{name=projects/*/instanceConfigs/*}" - }; - } - - // Lists all instances in the given project. 
- rpc ListInstances(ListInstancesRequest) returns (ListInstancesResponse) { - option (google.api.http) = { - get: "/v1/{parent=projects/*}/instances" - }; - } - - // Gets information about a particular instance. - rpc GetInstance(GetInstanceRequest) returns (Instance) { - option (google.api.http) = { - get: "/v1/{name=projects/*/instances/*}" - }; - } - - // Creates an instance and begins preparing it to begin serving. The - // returned [long-running operation][google.longrunning.Operation] - // can be used to track the progress of preparing the new - // instance. The instance name is assigned by the caller. If the - // named instance already exists, `CreateInstance` returns - // `ALREADY_EXISTS`. - // - // Immediately upon completion of this request: - // - // * The instance is readable via the API, with all requested attributes - // but no allocated resources. Its state is `CREATING`. - // - // Until completion of the returned operation: - // - // * Cancelling the operation renders the instance immediately unreadable - // via the API. - // * The instance can be deleted. - // * All other attempts to modify the instance are rejected. - // - // Upon completion of the returned operation: - // - // * Billing for all successfully-allocated resources begins (some types - // may have lower than the requested levels). - // * Databases can be created in the instance. - // * The instance's allocated resource levels are readable via the API. - // * The instance's state becomes `READY`. - // - // The returned [long-running operation][google.longrunning.Operation] will - // have a name of the format `<instance_name>/operations/<operation_id>` and - // can be used to track creation of the instance. The - // [metadata][google.longrunning.Operation.metadata] field type is - // [CreateInstanceMetadata][google.spanner.admin.instance.v1.CreateInstanceMetadata]. - // The [response][google.longrunning.Operation.response] field type is - // [Instance][google.spanner.admin.instance.v1.Instance], if successful. - rpc CreateInstance(CreateInstanceRequest) returns (google.longrunning.Operation) { - option (google.api.http) = { - post: "/v1/{parent=projects/*}/instances" - body: "*" - }; - } - - // Updates an instance, and begins allocating or releasing resources - // as requested. The returned [long-running - // operation][google.longrunning.Operation] can be used to track the - // progress of updating the instance. If the named instance does not - // exist, returns `NOT_FOUND`. - // - // Immediately upon completion of this request: - // - // * For resource types for which a decrease in the instance's allocation - // has been requested, billing is based on the newly-requested level. - // - // Until completion of the returned operation: - // - // * Cancelling the operation sets its metadata's - // [cancel_time][google.spanner.admin.instance.v1.UpdateInstanceMetadata.cancel_time], and begins - // restoring resources to their pre-request values. The operation - // is guaranteed to succeed at undoing all resource changes, - // after which point it terminates with a `CANCELLED` status. - // * All other attempts to modify the instance are rejected. - // * Reading the instance via the API continues to give the pre-request - // resource levels. - // - // Upon completion of the returned operation: - // - // * Billing begins for all successfully-allocated resources (some types - // may have lower than the requested levels). - // * All newly-reserved resources are available for serving the instance's - // tables.
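The create-and-wait lifecycle described for `CreateInstance` (instance readable in `CREATING` state, `READY` once the long-running operation finishes) maps onto the Python client roughly as follows; the project, instance, and config names are illustrative:

.. code-block:: python

    from google.cloud import spanner

    client = spanner.Client(project="my-project")  # hypothetical project
    instance = client.instance(
        "my-instance",
        configuration_name="projects/my-project/instanceConfigs/regional-us-central1",
        display_name="My Instance",
        node_count=1,
    )

    # create() returns the long-running operation named
    # <instance_name>/operations/<operation_id>; until it completes, the
    # instance exists but remains in state CREATING.
    operation = instance.create()
    operation.result(300)  # instance is READY once this returns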
- // * The instance's new resource levels are readable via the API. - // - // The returned [long-running operation][google.longrunning.Operation] will - // have a name of the format `<instance_name>/operations/<operation_id>` and - // can be used to track the instance modification. The - // [metadata][google.longrunning.Operation.metadata] field type is - // [UpdateInstanceMetadata][google.spanner.admin.instance.v1.UpdateInstanceMetadata]. - // The [response][google.longrunning.Operation.response] field type is - // [Instance][google.spanner.admin.instance.v1.Instance], if successful. - // - // Authorization requires `spanner.instances.update` permission on - // resource [name][google.spanner.admin.instance.v1.Instance.name]. - rpc UpdateInstance(UpdateInstanceRequest) returns (google.longrunning.Operation) { - option (google.api.http) = { - patch: "/v1/{instance.name=projects/*/instances/*}" - body: "*" - }; - } - - // Deletes an instance. - // - // Immediately upon completion of the request: - // - // * Billing ceases for all of the instance's reserved resources. - // - // Soon afterward: - // - // * The instance and *all of its databases* immediately and - // irrevocably disappear from the API. All data in the databases - // is permanently deleted. - rpc DeleteInstance(DeleteInstanceRequest) returns (google.protobuf.Empty) { - option (google.api.http) = { - delete: "/v1/{name=projects/*/instances/*}" - }; - } - - // Sets the access control policy on an instance resource. Replaces any - // existing policy. - // - // Authorization requires `spanner.instances.setIamPolicy` on - // [resource][google.iam.v1.SetIamPolicyRequest.resource]. - rpc SetIamPolicy(google.iam.v1.SetIamPolicyRequest) returns (google.iam.v1.Policy) { - option (google.api.http) = { - post: "/v1/{resource=projects/*/instances/*}:setIamPolicy" - body: "*" - }; - } - - // Gets the access control policy for an instance resource. Returns an empty - // policy if an instance exists but does not have a policy set. - // - // Authorization requires `spanner.instances.getIamPolicy` on - // [resource][google.iam.v1.GetIamPolicyRequest.resource]. - rpc GetIamPolicy(google.iam.v1.GetIamPolicyRequest) returns (google.iam.v1.Policy) { - option (google.api.http) = { - post: "/v1/{resource=projects/*/instances/*}:getIamPolicy" - body: "*" - }; - } - - // Returns permissions that the caller has on the specified instance resource. - // - // Attempting this RPC on a non-existent Cloud Spanner instance resource will - // result in a NOT_FOUND error if the user has `spanner.instances.list` - // permission on the containing Google Cloud Project. Otherwise returns an - // empty set of permissions. - rpc TestIamPermissions(google.iam.v1.TestIamPermissionsRequest) returns (google.iam.v1.TestIamPermissionsResponse) { - option (google.api.http) = { - post: "/v1/{resource=projects/*/instances/*}:testIamPermissions" - body: "*" - }; - } -} - -// A possible configuration for a Cloud Spanner instance. Configurations -// define the geographic placement of nodes and their replication. -message InstanceConfig { - // A unique identifier for the instance configuration. Values - // are of the form - // `projects/<project>/instanceConfigs/[a-z][-a-z0-9]*` - string name = 1; - - // The name of this instance configuration as it appears in UIs. - string display_name = 2; -} - -// An isolated set of Cloud Spanner resources on which databases can be hosted. -message Instance { - // Indicates the current state of the instance. - enum State { - // Not specified.
- STATE_UNSPECIFIED = 0; - - // The instance is still being created. Resources may not be - // available yet, and operations such as database creation may not - // work. - CREATING = 1; - - // The instance is fully created and ready to do work such as - // creating databases. - READY = 2; - } - - // Required. A unique identifier for the instance, which cannot be changed - // after the instance is created. Values are of the form - // `projects/<project>/instances/[a-z][-a-z0-9]*[a-z0-9]`. The final - // segment of the name must be between 6 and 30 characters in length. - string name = 1; - - // Required. The name of the instance's configuration. Values are of the form - // `projects/<project>/instanceConfigs/<configuration>`. See - // also [InstanceConfig][google.spanner.admin.instance.v1.InstanceConfig] and - // [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs]. - string config = 2; - - // Required. The descriptive name for this instance as it appears in UIs. - // Must be unique per project and between 4 and 30 characters in length. - string display_name = 3; - - // Required. The number of nodes allocated to this instance. This may be zero - // in API responses for instances that are not yet in state `READY`. - // - // See [the documentation](https://cloud.google.com/spanner/docs/instances#node_count) - // for more information about nodes. - int32 node_count = 5; - - // Output only. The current instance state. For - // [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance], the state must be - // either omitted or set to `CREATING`. For - // [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance], the state must be - // either omitted or set to `READY`. - State state = 6; - - // Cloud Labels are a flexible and lightweight mechanism for organizing cloud - // resources into groups that reflect a customer's organizational needs and - // deployment strategies. Cloud Labels can be used to filter collections of - // resources. They can be used to control how resource metrics are aggregated. - // And they can be used as arguments to policy management rules (e.g. route, - // firewall, load balancing, etc.). - // - // * Label keys must be between 1 and 63 characters long and must conform to - // the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`. - // * Label values must be between 0 and 63 characters long and must conform - // to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`. - // * No more than 64 labels can be associated with a given resource. - // - // See https://goo.gl/xmQnxf for more information on and examples of labels. - // - // If you plan to use labels in your own code, please note that additional - // characters may be allowed in the future. And so you are advised to use an - // internal label representation, such as JSON, which doesn't rely upon - // specific characters being disallowed. For example, representing labels - // as the string: name + "_" + value would prove problematic if we were to - // allow "_" in a future release. - map<string, string> labels = 7; -} - -// The request for [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs]. -message ListInstanceConfigsRequest { - // Required. The name of the project for which a list of supported instance - // configurations is requested. Values are of the form - // `projects/<project>`. - string parent = 1; - - // Number of instance configurations to be returned in the response.
If 0 or - // less, defaults to the server's maximum allowed page size. - int32 page_size = 2; - - // If non-empty, `page_token` should contain a - // [next_page_token][google.spanner.admin.instance.v1.ListInstanceConfigsResponse.next_page_token] - // from a previous [ListInstanceConfigsResponse][google.spanner.admin.instance.v1.ListInstanceConfigsResponse]. - string page_token = 3; -} - -// The response for [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs]. -message ListInstanceConfigsResponse { - // The list of requested instance configurations. - repeated InstanceConfig instance_configs = 1; - - // `next_page_token` can be sent in a subsequent - // [ListInstanceConfigs][google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs] call to - // fetch more of the matching instance configurations. - string next_page_token = 2; -} - -// The request for -// [GetInstanceConfigRequest][google.spanner.admin.instance.v1.InstanceAdmin.GetInstanceConfig]. -message GetInstanceConfigRequest { - // Required. The name of the requested instance configuration. Values are of - // the form `projects/<project>/instanceConfigs/<config>`. - string name = 1; -} - -// The request for [GetInstance][google.spanner.admin.instance.v1.InstanceAdmin.GetInstance]. -message GetInstanceRequest { - // Required. The name of the requested instance. Values are of the form - // `projects/<project>/instances/<instance>`. - string name = 1; -} - -// The request for [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance]. -message CreateInstanceRequest { - // Required. The name of the project in which to create the instance. Values - // are of the form `projects/<project>`. - string parent = 1; - - // Required. The ID of the instance to create. Valid identifiers are of the - // form `[a-z][-a-z0-9]*[a-z0-9]` and must be between 6 and 30 characters in - // length. - string instance_id = 2; - - // Required. The instance to create. The name may be omitted, but if - // specified must be `<parent>/instances/<instance_id>`. - Instance instance = 3; -} - -// The request for [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances]. -message ListInstancesRequest { - // Required. The name of the project for which a list of instances is - // requested. Values are of the form `projects/<project>`. - string parent = 1; - - // Number of instances to be returned in the response. If 0 or less, defaults - // to the server's maximum allowed page size. - int32 page_size = 2; - - // If non-empty, `page_token` should contain a - // [next_page_token][google.spanner.admin.instance.v1.ListInstancesResponse.next_page_token] from a - // previous [ListInstancesResponse][google.spanner.admin.instance.v1.ListInstancesResponse]. - string page_token = 3; - - // An expression for filtering the results of the request. Filter rules are - // case insensitive. The fields eligible for filtering are: - // - // * `name` - // * `display_name` - // * `labels.key` where key is the name of a label - // - // Some examples of using filters are: - // - // * `name:*` --> The instance has a name. - // * `name:Howl` --> The instance's name contains the string "howl". - // * `name:HOWL` --> Equivalent to above. - // * `NAME:howl` --> Equivalent to above. - // * `labels.env:*` --> The instance has the label "env". - // * `labels.env:dev` --> The instance has the label "env" and the value of - // the label contains the string "dev".
- // * `name:howl labels.env:dev` --> The instance's name contains "howl" and - // it has the label "env" with its value - // containing "dev". - string filter = 4; -} - -// The response for [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances]. -message ListInstancesResponse { - // The list of requested instances. - repeated Instance instances = 1; - - // `next_page_token` can be sent in a subsequent - // [ListInstances][google.spanner.admin.instance.v1.InstanceAdmin.ListInstances] call to fetch more - // of the matching instances. - string next_page_token = 2; -} - -// The request for [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance]. -message UpdateInstanceRequest { - // Required. The instance to update, which must always include the instance - // name. Otherwise, only fields mentioned in [][google.spanner.admin.instance.v1.UpdateInstanceRequest.field_mask] need be included. - Instance instance = 1; - - // Required. A mask specifying which fields in [][google.spanner.admin.instance.v1.UpdateInstanceRequest.instance] should be updated. - // The field mask must always be specified; this prevents any future fields in - // [][google.spanner.admin.instance.v1.Instance] from being erased accidentally by clients that do not know - // about them. - google.protobuf.FieldMask field_mask = 2; -} - -// The request for [DeleteInstance][google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstance]. -message DeleteInstanceRequest { - // Required. The name of the instance to be deleted. Values are of the form - // `projects/<project>/instances/<instance>` - string name = 1; -} - -// Metadata type for the operation returned by -// [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance]. -message CreateInstanceMetadata { - // The instance being created. - Instance instance = 1; - - // The time at which the - // [CreateInstance][google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance] request was - // received. - google.protobuf.Timestamp start_time = 2; - - // The time at which this operation was cancelled. If set, this operation is - // in the process of undoing itself (which is guaranteed to succeed) and - // cannot be cancelled again. - google.protobuf.Timestamp cancel_time = 3; - - // The time at which this operation failed or was completed successfully. - google.protobuf.Timestamp end_time = 4; -} - -// Metadata type for the operation returned by -// [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance]. -message UpdateInstanceMetadata { - // The desired end state of the update. - Instance instance = 1; - - // The time at which [UpdateInstance][google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance] - // request was received. - google.protobuf.Timestamp start_time = 2; - - // The time at which this operation was cancelled. If set, this operation is - // in the process of undoing itself (which is guaranteed to succeed) and - // cannot be cancelled again. - google.protobuf.Timestamp cancel_time = 3; - - // The time at which this operation failed or was completed successfully. - google.protobuf.Timestamp end_time = 4; -} diff --git a/google/cloud/spanner_v1/proto/spanner_pb2.py b/google/cloud/spanner_v1/proto/spanner_pb2.py deleted file mode 100644 index ab1ff4e42e..0000000000 --- a/google/cloud/spanner_v1/proto/spanner_pb2.py +++ /dev/null @@ -1,3399 +0,0 @@ -# -*- coding: utf-8 -*- -# Generated by the protocol buffer compiler. DO NOT EDIT!
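The `ListInstances` filter grammar documented above is exposed through the Python client's `filter_` keyword; a short sketch with made-up name and label values:

.. code-block:: python

    from google.cloud import spanner

    client = spanner.Client(project="my-project")  # hypothetical project

    # Filter rules are case insensitive; this matches instances whose display
    # name contains "howl" and whose "env" label value contains "dev".
    for instance in client.list_instances(filter_="name:howl labels.env:dev"):
        print(instance.name)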
-# source: google/cloud/spanner_v1/proto/spanner.proto - -import sys - -_b = sys.version_info[0] < 3 and (lambda x: x) or (lambda x: x.encode("latin1")) -from google.protobuf import descriptor as _descriptor -from google.protobuf import message as _message -from google.protobuf import reflection as _reflection -from google.protobuf import symbol_database as _symbol_database - -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - -from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2 -from google.api import client_pb2 as google_dot_api_dot_client__pb2 -from google.api import field_behavior_pb2 as google_dot_api_dot_field__behavior__pb2 -from google.api import resource_pb2 as google_dot_api_dot_resource__pb2 -from google.protobuf import empty_pb2 as google_dot_protobuf_dot_empty__pb2 -from google.protobuf import struct_pb2 as google_dot_protobuf_dot_struct__pb2 -from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2 -from google.rpc import status_pb2 as google_dot_rpc_dot_status__pb2 -from google.cloud.spanner_v1.proto import ( - keys_pb2 as google_dot_cloud_dot_spanner__v1_dot_proto_dot_keys__pb2, -) -from google.cloud.spanner_v1.proto import ( - mutation_pb2 as google_dot_cloud_dot_spanner__v1_dot_proto_dot_mutation__pb2, -) -from google.cloud.spanner_v1.proto import ( - result_set_pb2 as google_dot_cloud_dot_spanner__v1_dot_proto_dot_result__set__pb2, -) -from google.cloud.spanner_v1.proto import ( - transaction_pb2 as google_dot_cloud_dot_spanner__v1_dot_proto_dot_transaction__pb2, -) -from google.cloud.spanner_v1.proto import ( - type_pb2 as google_dot_cloud_dot_spanner__v1_dot_proto_dot_type__pb2, -) - - -DESCRIPTOR = _descriptor.FileDescriptor( - name="google/cloud/spanner_v1/proto/spanner.proto", - package="google.spanner.v1", - syntax="proto3", - serialized_options=_b( - "\n\025com.google.spanner.v1B\014SpannerProtoP\001Z8google.golang.org/genproto/googleapis/spanner/v1;spanner\252\002\027Google.Cloud.Spanner.V1\312\002\027Google\\Cloud\\Spanner\\V1\352A_\n\037spanner.googleapis.com/Database\022\n\x11partition_options\x18\x06 \x01(\x0b\x32#.google.spanner.v1.PartitionOptions\x1aJ\n\x0fParamTypesEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12&\n\x05value\x18\x02 \x01(\x0b\x32\x17.google.spanner.v1.Type:\x02\x38\x01"\xb1\x02\n\x14PartitionReadRequest\x12\x37\n\x07session\x18\x01 \x01(\tB&\xe0\x41\x02\xfa\x41 \n\x1espanner.googleapis.com/Session\x12;\n\x0btransaction\x18\x02 \x01(\x0b\x32&.google.spanner.v1.TransactionSelector\x12\x12\n\x05table\x18\x03 \x01(\tB\x03\xe0\x41\x02\x12\r\n\x05index\x18\x04 \x01(\t\x12\x0f\n\x07\x63olumns\x18\x05 \x03(\t\x12/\n\x07key_set\x18\x06 \x01(\x0b\x32\x19.google.spanner.v1.KeySetB\x03\xe0\x41\x02\x12>\n\x11partition_options\x18\t \x01(\x0b\x32#.google.spanner.v1.PartitionOptions"$\n\tPartition\x12\x17\n\x0fpartition_token\x18\x01 \x01(\x0c"z\n\x11PartitionResponse\x12\x30\n\npartitions\x18\x01 \x03(\x0b\x32\x1c.google.spanner.v1.Partition\x12\x33\n\x0btransaction\x18\x02 \x01(\x0b\x32\x1e.google.spanner.v1.Transaction"\xab\x02\n\x0bReadRequest\x12\x37\n\x07session\x18\x01 \x01(\tB&\xe0\x41\x02\xfa\x41 \n\x1espanner.googleapis.com/Session\x12;\n\x0btransaction\x18\x02 \x01(\x0b\x32&.google.spanner.v1.TransactionSelector\x12\x12\n\x05table\x18\x03 \x01(\tB\x03\xe0\x41\x02\x12\r\n\x05index\x18\x04 \x01(\t\x12\x14\n\x07\x63olumns\x18\x05 \x03(\tB\x03\xe0\x41\x02\x12/\n\x07key_set\x18\x06 \x01(\x0b\x32\x19.google.spanner.v1.KeySetB\x03\xe0\x41\x02\x12\r\n\x05limit\x18\x08 
\x01(\x03\x12\x14\n\x0cresume_token\x18\t \x01(\x0c\x12\x17\n\x0fpartition_token\x18\n \x01(\x0c"\x8f\x01\n\x17\x42\x65ginTransactionRequest\x12\x37\n\x07session\x18\x01 \x01(\tB&\xe0\x41\x02\xfa\x41 \n\x1espanner.googleapis.com/Session\x12;\n\x07options\x18\x02 \x01(\x0b\x32%.google.spanner.v1.TransactionOptionsB\x03\xe0\x41\x02"\xea\x01\n\rCommitRequest\x12\x37\n\x07session\x18\x01 \x01(\tB&\xe0\x41\x02\xfa\x41 \n\x1espanner.googleapis.com/Session\x12\x18\n\x0etransaction_id\x18\x02 \x01(\x0cH\x00\x12G\n\x16single_use_transaction\x18\x03 \x01(\x0b\x32%.google.spanner.v1.TransactionOptionsH\x00\x12.\n\tmutations\x18\x04 \x03(\x0b\x32\x1b.google.spanner.v1.MutationB\r\n\x0btransaction"F\n\x0e\x43ommitResponse\x12\x34\n\x10\x63ommit_timestamp\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.Timestamp"g\n\x0fRollbackRequest\x12\x37\n\x07session\x18\x01 \x01(\tB&\xe0\x41\x02\xfa\x41 \n\x1espanner.googleapis.com/Session\x12\x1b\n\x0etransaction_id\x18\x02 \x01(\x0c\x42\x03\xe0\x41\x02\x32\xc0\x16\n\x07Spanner\x12\xa6\x01\n\rCreateSession\x12\'.google.spanner.v1.CreateSessionRequest\x1a\x1a.google.spanner.v1.Session"P\x82\xd3\xe4\x93\x02?":/v1/{database=projects/*/instances/*/databases/*}/sessions:\x01*\xda\x41\x08\x64\x61tabase\x12\xe0\x01\n\x13\x42\x61tchCreateSessions\x12-.google.spanner.v1.BatchCreateSessionsRequest\x1a..google.spanner.v1.BatchCreateSessionsResponse"j\x82\xd3\xe4\x93\x02K"F/v1/{database=projects/*/instances/*/databases/*}/sessions:batchCreate:\x01*\xda\x41\x16\x64\x61tabase,session_count\x12\x97\x01\n\nGetSession\x12$.google.spanner.v1.GetSessionRequest\x1a\x1a.google.spanner.v1.Session"G\x82\xd3\xe4\x93\x02:\x12\x38/v1/{name=projects/*/instances/*/databases/*/sessions/*}\xda\x41\x04name\x12\xae\x01\n\x0cListSessions\x12&.google.spanner.v1.ListSessionsRequest\x1a\'.google.spanner.v1.ListSessionsResponse"M\x82\xd3\xe4\x93\x02<\x12:/v1/{database=projects/*/instances/*/databases/*}/sessions\xda\x41\x08\x64\x61tabase\x12\x99\x01\n\rDeleteSession\x12\'.google.spanner.v1.DeleteSessionRequest\x1a\x16.google.protobuf.Empty"G\x82\xd3\xe4\x93\x02:*8/v1/{name=projects/*/instances/*/databases/*/sessions/*}\xda\x41\x04name\x12\xa3\x01\n\nExecuteSql\x12$.google.spanner.v1.ExecuteSqlRequest\x1a\x1c.google.spanner.v1.ResultSet"Q\x82\xd3\xe4\x93\x02K"F/v1/{session=projects/*/instances/*/databases/*/sessions/*}:executeSql:\x01*\x12\xbe\x01\n\x13\x45xecuteStreamingSql\x12$.google.spanner.v1.ExecuteSqlRequest\x1a#.google.spanner.v1.PartialResultSet"Z\x82\xd3\xe4\x93\x02T"O/v1/{session=projects/*/instances/*/databases/*/sessions/*}:executeStreamingSql:\x01*0\x01\x12\xc0\x01\n\x0f\x45xecuteBatchDml\x12).google.spanner.v1.ExecuteBatchDmlRequest\x1a*.google.spanner.v1.ExecuteBatchDmlResponse"V\x82\xd3\xe4\x93\x02P"K/v1/{session=projects/*/instances/*/databases/*/sessions/*}:executeBatchDml:\x01*\x12\x91\x01\n\x04Read\x12\x1e.google.spanner.v1.ReadRequest\x1a\x1c.google.spanner.v1.ResultSet"K\x82\xd3\xe4\x93\x02\x45"@/v1/{session=projects/*/instances/*/databases/*/sessions/*}:read:\x01*\x12\xac\x01\n\rStreamingRead\x12\x1e.google.spanner.v1.ReadRequest\x1a#.google.spanner.v1.PartialResultSet"T\x82\xd3\xe4\x93\x02N"I/v1/{session=projects/*/instances/*/databases/*/sessions/*}:streamingRead:\x01*0\x01\x12\xc9\x01\n\x10\x42\x65ginTransaction\x12*.google.spanner.v1.BeginTransactionRequest\x1a\x1e.google.spanner.v1.Transaction"i\x82\xd3\xe4\x93\x02Q"L/v1/{session=projects/*/instances/*/databases/*/sessions/*}:beginTransaction:\x01*\xda\x41\x0fsession,options\x12\xeb\x01\n\x06\x43ommit\x12 
.google.spanner.v1.CommitRequest\x1a!.google.spanner.v1.CommitResponse"\x9b\x01\x82\xd3\xe4\x93\x02G"B/v1/{session=projects/*/instances/*/databases/*/sessions/*}:commit:\x01*\xda\x41 session,transaction_id,mutations\xda\x41(session,single_use_transaction,mutations\x12\xb0\x01\n\x08Rollback\x12".google.spanner.v1.RollbackRequest\x1a\x16.google.protobuf.Empty"h\x82\xd3\xe4\x93\x02I"D/v1/{session=projects/*/instances/*/databases/*/sessions/*}:rollback:\x01*\xda\x41\x16session,transaction_id\x12\xb7\x01\n\x0ePartitionQuery\x12(.google.spanner.v1.PartitionQueryRequest\x1a$.google.spanner.v1.PartitionResponse"U\x82\xd3\xe4\x93\x02O"J/v1/{session=projects/*/instances/*/databases/*/sessions/*}:partitionQuery:\x01*\x12\xb4\x01\n\rPartitionRead\x12\'.google.spanner.v1.PartitionReadRequest\x1a$.google.spanner.v1.PartitionResponse"T\x82\xd3\xe4\x93\x02N"I/v1/{session=projects/*/instances/*/databases/*/sessions/*}:partitionRead:\x01*\x1aw\xca\x41\x16spanner.googleapis.com\xd2\x41[https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/spanner.dataB\xf7\x01\n\x15\x63om.google.spanner.v1B\x0cSpannerProtoP\x01Z8google.golang.org/genproto/googleapis/spanner/v1;spanner\xaa\x02\x17Google.Cloud.Spanner.V1\xca\x02\x17Google\\Cloud\\Spanner\\V1\xea\x41_\n\x1fspanner.googleapis.com/Database\x12 The - session has the label "env". - ``labels.env:dev`` --> The - session has the label "env" and the value of the label - contains the string "dev". - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.ListSessionsRequest) - ), -) -_sym_db.RegisterMessage(ListSessionsRequest) - -ListSessionsResponse = _reflection.GeneratedProtocolMessageType( - "ListSessionsResponse", - (_message.Message,), - dict( - DESCRIPTOR=_LISTSESSIONSRESPONSE, - __module__="google.cloud.spanner_v1.proto.spanner_pb2", - __doc__="""The response for - [ListSessions][google.spanner.v1.Spanner.ListSessions]. - - - Attributes: - sessions: - The list of requested sessions. - next_page_token: - \ ``next_page_token`` can be sent in a subsequent - [ListSessions][google.spanner.v1.Spanner.ListSessions] call to - fetch more of the matching sessions. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.ListSessionsResponse) - ), -) -_sym_db.RegisterMessage(ListSessionsResponse) - -DeleteSessionRequest = _reflection.GeneratedProtocolMessageType( - "DeleteSessionRequest", - (_message.Message,), - dict( - DESCRIPTOR=_DELETESESSIONREQUEST, - __module__="google.cloud.spanner_v1.proto.spanner_pb2", - __doc__="""The request for - [DeleteSession][google.spanner.v1.Spanner.DeleteSession]. - - - Attributes: - name: - Required. The name of the session to delete. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.DeleteSessionRequest) - ), -) -_sym_db.RegisterMessage(DeleteSessionRequest) - -ExecuteSqlRequest = _reflection.GeneratedProtocolMessageType( - "ExecuteSqlRequest", - (_message.Message,), - dict( - QueryOptions=_reflection.GeneratedProtocolMessageType( - "QueryOptions", - (_message.Message,), - dict( - DESCRIPTOR=_EXECUTESQLREQUEST_QUERYOPTIONS, - __module__="google.cloud.spanner_v1.proto.spanner_pb2", - __doc__="""Query optimizer configuration. - - - Attributes: - optimizer_version: - An option to control the selection of optimizer version. This - parameter allows individual queries to pick different query - optimizer versions. Specifying "latest" as a value instructs - Cloud Spanner to use the latest supported query optimizer - version. 
If not specified, Cloud Spanner uses optimizer - version set at the database level options. Any other positive - integer (from the list of supported optimizer versions) - overrides the default optimizer version for query execution. - The list of supported optimizer versions can be queried from - SPANNER\_SYS.SUPPORTED\_OPTIMIZER\_VERSIONS. Executing a SQL - statement with an invalid optimizer version will fail with a - syntax error (``INVALID_ARGUMENT``) status. The - ``optimizer_version`` statement hint has precedence over this - setting. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.ExecuteSqlRequest.QueryOptions) - ), - ), - ParamTypesEntry=_reflection.GeneratedProtocolMessageType( - "ParamTypesEntry", - (_message.Message,), - dict( - DESCRIPTOR=_EXECUTESQLREQUEST_PARAMTYPESENTRY, - __module__="google.cloud.spanner_v1.proto.spanner_pb2" - # @@protoc_insertion_point(class_scope:google.spanner.v1.ExecuteSqlRequest.ParamTypesEntry) - ), - ), - DESCRIPTOR=_EXECUTESQLREQUEST, - __module__="google.cloud.spanner_v1.proto.spanner_pb2", - __doc__="""The request for - [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] and - [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql]. - - - Attributes: - session: - Required. The session in which the SQL query should be - performed. - transaction: - The transaction to use. For queries, if none is provided, the - default is a temporary read-only transaction with strong - concurrency. Standard DML statements require a read-write - transaction. To protect against replays, single-use - transactions are not supported. The caller must either supply - an existing transaction ID or begin a new transaction. - Partitioned DML requires an existing Partitioned DML - transaction ID. - sql: - Required. The SQL string. - params: - Parameter names and values that bind to placeholders in the - SQL string. A parameter placeholder consists of the ``@`` - character followed by the parameter name (for example, - ``@firstName``). Parameter names can contain letters, numbers, - and underscores. Parameters can appear anywhere that a - literal value is expected. The same parameter name can be used - more than once, for example: ``"WHERE id > @msg_id AND id < - @msg_id + 100"`` It is an error to execute a SQL statement - with unbound parameters. - param_types: - It is not always possible for Cloud Spanner to infer the right - SQL type from a JSON value. For example, values of type - ``BYTES`` and values of type ``STRING`` both appear in - [params][google.spanner.v1.ExecuteSqlRequest.params] as JSON - strings. In these cases, ``param_types`` can be used to - specify the exact SQL type for some or all of the SQL - statement parameters. See the definition of - [Type][google.spanner.v1.Type] for more information about SQL - types. - resume_token: - If this request is resuming a previously interrupted SQL - statement execution, ``resume_token`` should be copied from - the last - [PartialResultSet][google.spanner.v1.PartialResultSet] yielded - before the interruption. Doing this enables the new SQL - statement execution to resume where the last one left off. The - rest of the request parameters must exactly match the request - that yielded this token. - query_mode: - Used to control the amount of debugging information returned - in [ResultSetStats][google.spanner.v1.ResultSetStats]. 
If [par - tition\_token][google.spanner.v1.ExecuteSqlRequest.partition\_ - token] is set, - [query\_mode][google.spanner.v1.ExecuteSqlRequest.query\_mode] - can only be set to [QueryMode.NORMAL][google.spanner.v1.Execut - eSqlRequest.QueryMode.NORMAL]. - partition_token: - If present, results will be restricted to the specified - partition previously created using PartitionQuery(). There - must be an exact match for the values of fields common to this - message and the PartitionQueryRequest message used to create - this partition\_token. - seqno: - A per-transaction sequence number used to identify this - request. This field makes each request idempotent such that if - the request is received multiple times, at most one will - succeed. The sequence number must be monotonically increasing - within the transaction. If a request arrives for the first - time with an out-of-order sequence number, the transaction may - be aborted. Replays of previously handled requests will yield - the same response as the first execution. Required for DML - statements. Ignored for queries. - query_options: - Query optimizer configuration to use for the given query. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.ExecuteSqlRequest) - ), -) -_sym_db.RegisterMessage(ExecuteSqlRequest) -_sym_db.RegisterMessage(ExecuteSqlRequest.QueryOptions) -_sym_db.RegisterMessage(ExecuteSqlRequest.ParamTypesEntry) - -ExecuteBatchDmlRequest = _reflection.GeneratedProtocolMessageType( - "ExecuteBatchDmlRequest", - (_message.Message,), - dict( - Statement=_reflection.GeneratedProtocolMessageType( - "Statement", - (_message.Message,), - dict( - ParamTypesEntry=_reflection.GeneratedProtocolMessageType( - "ParamTypesEntry", - (_message.Message,), - dict( - DESCRIPTOR=_EXECUTEBATCHDMLREQUEST_STATEMENT_PARAMTYPESENTRY, - __module__="google.cloud.spanner_v1.proto.spanner_pb2" - # @@protoc_insertion_point(class_scope:google.spanner.v1.ExecuteBatchDmlRequest.Statement.ParamTypesEntry) - ), - ), - DESCRIPTOR=_EXECUTEBATCHDMLREQUEST_STATEMENT, - __module__="google.cloud.spanner_v1.proto.spanner_pb2", - __doc__="""A single DML statement. - - - Attributes: - sql: - Required. The DML string. - params: - Parameter names and values that bind to placeholders in the - DML string. A parameter placeholder consists of the ``@`` - character followed by the parameter name (for example, - ``@firstName``). Parameter names can contain letters, numbers, - and underscores. Parameters can appear anywhere that a - literal value is expected. The same parameter name can be used - more than once, for example: ``"WHERE id > @msg_id AND id < - @msg_id + 100"`` It is an error to execute a SQL statement - with unbound parameters. - param_types: - It is not always possible for Cloud Spanner to infer the right - SQL type from a JSON value. For example, values of type - ``BYTES`` and values of type ``STRING`` both appear in [params - ][google.spanner.v1.ExecuteBatchDmlRequest.Statement.params] - as JSON strings. In these cases, ``param_types`` can be used - to specify the exact SQL type for some or all of the SQL - statement parameters. See the definition of - [Type][google.spanner.v1.Type] for more information about SQL - types. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.ExecuteBatchDmlRequest.Statement) - ), - ), - DESCRIPTOR=_EXECUTEBATCHDMLREQUEST, - __module__="google.cloud.spanner_v1.proto.spanner_pb2", - __doc__="""The request for - [ExecuteBatchDml][google.spanner.v1.Spanner.ExecuteBatchDml]. 
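The `params`/`param_types` pairing described in the `ExecuteSqlRequest` docstring above looks like this through the Python client; the table and column names are invented:

.. code-block:: python

    from google.cloud import spanner
    from google.cloud.spanner_v1 import param_types

    client = spanner.Client(project="my-project")  # hypothetical project
    database = client.instance("my-instance").database("my-database")

    with database.snapshot() as snapshot:
        # @msg_id binds to params["msg_id"]; the explicit param_types entry
        # resolves the JSON-level type ambiguity the docstring warns about.
        results = snapshot.execute_sql(
            "SELECT Id, Body FROM Messages "
            "WHERE Id > @msg_id AND Id < @msg_id + 100",
            params={"msg_id": 1000},
            param_types={"msg_id": param_types.INT64},
        )
        for row in results:
            print(row)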
- - - Attributes: - session: - Required. The session in which the DML statements should be - performed. - transaction: - Required. The transaction to use. Must be a read-write - transaction. To protect against replays, single-use - transactions are not supported. The caller must either supply - an existing transaction ID or begin a new transaction. - statements: - Required. The list of statements to execute in this batch. - Statements are executed serially, such that the effects of - statement ``i`` are visible to statement ``i+1``. Each - statement must be a DML statement. Execution stops at the - first failed statement; the remaining statements are not - executed. Callers must provide at least one statement. - seqno: - Required. A per-transaction sequence number used to identify - this request. This field makes each request idempotent such - that if the request is received multiple times, at most one - will succeed. The sequence number must be monotonically - increasing within the transaction. If a request arrives for - the first time with an out-of-order sequence number, the - transaction may be aborted. Replays of previously handled - requests will yield the same response as the first execution. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.ExecuteBatchDmlRequest) - ), -) -_sym_db.RegisterMessage(ExecuteBatchDmlRequest) -_sym_db.RegisterMessage(ExecuteBatchDmlRequest.Statement) -_sym_db.RegisterMessage(ExecuteBatchDmlRequest.Statement.ParamTypesEntry) - -ExecuteBatchDmlResponse = _reflection.GeneratedProtocolMessageType( - "ExecuteBatchDmlResponse", - (_message.Message,), - dict( - DESCRIPTOR=_EXECUTEBATCHDMLRESPONSE, - __module__="google.cloud.spanner_v1.proto.spanner_pb2", - __doc__="""The response for - [ExecuteBatchDml][google.spanner.v1.Spanner.ExecuteBatchDml]. Contains a - list of [ResultSet][google.spanner.v1.ResultSet] messages, one for each - DML statement that has successfully executed, in the same order as the - statements in the request. If a statement fails, the status in the - response body identifies the cause of the failure. - - To check for DML statements that failed, use the following approach: - - 1. Check the status in the response message. The - [google.rpc.Code][google.rpc.Code] enum value ``OK`` indicates that - all statements were executed successfully. - 2. If the status was not ``OK``, check the number of result sets in the - response. If the response contains ``N`` - [ResultSet][google.spanner.v1.ResultSet] messages, then statement - ``N+1`` in the request failed. - - Example 1: - - - Request: 5 DML statements, all executed successfully. - - Response: 5 [ResultSet][google.spanner.v1.ResultSet] messages, with - the status ``OK``. - - Example 2: - - - Request: 5 DML statements. The third statement has a syntax error. - - Response: 2 [ResultSet][google.spanner.v1.ResultSet] messages, and a - syntax error (``INVALID_ARGUMENT``) status. The number of - [ResultSet][google.spanner.v1.ResultSet] messages indicates that the - third statement failed, and the fourth and fifth statements were not - executed. - - - Attributes: - result_sets: - One [ResultSet][google.spanner.v1.ResultSet] for each - statement in the request that ran successfully, in the same - order as the statements in the request. Each - [ResultSet][google.spanner.v1.ResultSet] does not contain any - rows. The [ResultSetStats][google.spanner.v1.ResultSetStats] - in each [ResultSet][google.spanner.v1.ResultSet] contain the - number of rows modified by the statement. 
Only the first - [ResultSet][google.spanner.v1.ResultSet] in the response - contains valid - [ResultSetMetadata][google.spanner.v1.ResultSetMetadata]. - status: - If all DML statements are executed successfully, the status is - ``OK``. Otherwise, the error status of the first failed - statement. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.ExecuteBatchDmlResponse) - ), -) -_sym_db.RegisterMessage(ExecuteBatchDmlResponse) - -PartitionOptions = _reflection.GeneratedProtocolMessageType( - "PartitionOptions", - (_message.Message,), - dict( - DESCRIPTOR=_PARTITIONOPTIONS, - __module__="google.cloud.spanner_v1.proto.spanner_pb2", - __doc__="""Options for a PartitionQueryRequest and - PartitionReadRequest. - - - Attributes: - partition_size_bytes: - \ **Note:** This hint is currently ignored by PartitionQuery - and PartitionRead requests. The desired data size for each - partition generated. The default for this option is currently - 1 GiB. This is only a hint. The actual size of each partition - may be smaller or larger than this size request. - max_partitions: - \ **Note:** This hint is currently ignored by PartitionQuery - and PartitionRead requests. The desired maximum number of - partitions to return. For example, this may be set to the - number of workers available. The default for this option is - currently 10,000. The maximum value is currently 200,000. This - is only a hint. The actual number of partitions returned may - be smaller or larger than this maximum count request. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.PartitionOptions) - ), -) -_sym_db.RegisterMessage(PartitionOptions) - -PartitionQueryRequest = _reflection.GeneratedProtocolMessageType( - "PartitionQueryRequest", - (_message.Message,), - dict( - ParamTypesEntry=_reflection.GeneratedProtocolMessageType( - "ParamTypesEntry", - (_message.Message,), - dict( - DESCRIPTOR=_PARTITIONQUERYREQUEST_PARAMTYPESENTRY, - __module__="google.cloud.spanner_v1.proto.spanner_pb2" - # @@protoc_insertion_point(class_scope:google.spanner.v1.PartitionQueryRequest.ParamTypesEntry) - ), - ), - DESCRIPTOR=_PARTITIONQUERYREQUEST, - __module__="google.cloud.spanner_v1.proto.spanner_pb2", - __doc__="""The request for - [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] - - - Attributes: - session: - Required. The session used to create the partitions. - transaction: - Read only snapshot transactions are supported, read/write and - single use transactions are not. - sql: - Required. The query request to generate partitions for. The - request will fail if the query is not root partitionable. The - query plan of a root partitionable query has a single - distributed union operator. A distributed union operator - conceptually divides one or more tables into multiple splits, - remotely evaluates a subquery independently on each split, and - then unions all results. This must not contain DML commands, - such as INSERT, UPDATE, or DELETE. Use [ExecuteStreamingSql][g - oogle.spanner.v1.Spanner.ExecuteStreamingSql] with a - PartitionedDml transaction for large, partition-friendly DML - operations. - params: - Parameter names and values that bind to placeholders in the - SQL string. A parameter placeholder consists of the ``@`` - character followed by the parameter name (for example, - ``@firstName``). Parameter names can contain letters, numbers, - and underscores. Parameters can appear anywhere that a - literal value is expected. 
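The stop-at-first-failure contract above is what `Transaction.batch_update` reports back in the Python client: a status plus one row count per statement that actually ran. A sketch against the same hypothetical schema:

.. code-block:: python

    from google.cloud import spanner
    from google.rpc.code_pb2 import OK

    client = spanner.Client(project="my-project")  # hypothetical project
    database = client.instance("my-instance").database("my-database")

    def update_albums(transaction):
        # Statements execute serially; execution stops at the first failure.
        status, row_counts = transaction.batch_update([
            "UPDATE Albums SET MarketingBudget = 100 WHERE SingerId = 1",
            "UPDATE Albums SET MarketingBudget = 200 WHERE SingerId = 2",
        ])
        # Per the response contract: with N row counts and a non-OK status,
        # statement N+1 is the one that failed.
        if status.code != OK:
            raise RuntimeError("statement %d failed: %s"
                               % (len(row_counts) + 1, status.message))

    database.run_in_transaction(update_albums)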
The same parameter name can be used - more than once, for example: ``"WHERE id > @msg_id AND id < - @msg_id + 100"`` It is an error to execute a SQL statement - with unbound parameters. - param_types: - It is not always possible for Cloud Spanner to infer the right - SQL type from a JSON value. For example, values of type - ``BYTES`` and values of type ``STRING`` both appear in - [params][google.spanner.v1.PartitionQueryRequest.params] as - JSON strings. In these cases, ``param_types`` can be used to - specify the exact SQL type for some or all of the SQL query - parameters. See the definition of - [Type][google.spanner.v1.Type] for more information about SQL - types. - partition_options: - Additional options that affect how many partitions are - created. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.PartitionQueryRequest) - ), -) -_sym_db.RegisterMessage(PartitionQueryRequest) -_sym_db.RegisterMessage(PartitionQueryRequest.ParamTypesEntry) - -PartitionReadRequest = _reflection.GeneratedProtocolMessageType( - "PartitionReadRequest", - (_message.Message,), - dict( - DESCRIPTOR=_PARTITIONREADREQUEST, - __module__="google.cloud.spanner_v1.proto.spanner_pb2", - __doc__="""The request for - [PartitionRead][google.spanner.v1.Spanner.PartitionRead] - - - Attributes: - session: - Required. The session used to create the partitions. - transaction: - Read only snapshot transactions are supported, read/write and - single use transactions are not. - table: - Required. The name of the table in the database to be read. - index: - If non-empty, the name of an index on - [table][google.spanner.v1.PartitionReadRequest.table]. This - index is used instead of the table primary key when - interpreting - [key\_set][google.spanner.v1.PartitionReadRequest.key\_set] - and sorting result rows. See - [key\_set][google.spanner.v1.PartitionReadRequest.key\_set] - for further information. - columns: - The columns of - [table][google.spanner.v1.PartitionReadRequest.table] to be - returned for each row matching this request. - key_set: - Required. ``key_set`` identifies the rows to be yielded. - ``key_set`` names the primary keys of the rows in - [table][google.spanner.v1.PartitionReadRequest.table] to be - yielded, unless - [index][google.spanner.v1.PartitionReadRequest.index] is - present. If - [index][google.spanner.v1.PartitionReadRequest.index] is - present, then - [key\_set][google.spanner.v1.PartitionReadRequest.key\_set] - instead names index keys in - [index][google.spanner.v1.PartitionReadRequest.index]. It is - not an error for the ``key_set`` to name rows that do not - exist in the database. Read yields nothing for nonexistent - rows. - partition_options: - Additional options that affect how many partitions are - created. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.PartitionReadRequest) - ), -) -_sym_db.RegisterMessage(PartitionReadRequest) - -Partition = _reflection.GeneratedProtocolMessageType( - "Partition", - (_message.Message,), - dict( - DESCRIPTOR=_PARTITION, - __module__="google.cloud.spanner_v1.proto.spanner_pb2", - __doc__="""Information returned for each partition returned in a - PartitionResponse. - - - Attributes: - partition_token: - This token can be passed to Read, StreamingRead, ExecuteSql, - or ExecuteStreamingSql requests to restrict the results to - those identified by this partition token. 
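The partition-token flow above (PartitionQuery, then per-partition execution) is wrapped by the Python client's `BatchSnapshot`; here the batches are processed serially, though real callers would typically fan them out to workers:

.. code-block:: python

    from google.cloud import spanner

    client = spanner.Client(project="my-project")  # hypothetical project
    database = client.instance("my-instance").database("my-database")

    # Every partition is backed by the same read-only snapshot transaction,
    # so all batches observe one consistent view of the data.
    snapshot = database.batch_snapshot()
    batches = snapshot.generate_query_batches(
        "SELECT SingerId, FirstName FROM Singers"  # must be root-partitionable
    )
    for batch in batches:
        for row in snapshot.process_query_batch(batch):
            print(row)
    snapshot.close()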
- """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.Partition) - ), -) -_sym_db.RegisterMessage(Partition) - -PartitionResponse = _reflection.GeneratedProtocolMessageType( - "PartitionResponse", - (_message.Message,), - dict( - DESCRIPTOR=_PARTITIONRESPONSE, - __module__="google.cloud.spanner_v1.proto.spanner_pb2", - __doc__="""The response for - [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] or - [PartitionRead][google.spanner.v1.Spanner.PartitionRead] - - - Attributes: - partitions: - Partitions created by this request. - transaction: - Transaction created by this request. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.PartitionResponse) - ), -) -_sym_db.RegisterMessage(PartitionResponse) - -ReadRequest = _reflection.GeneratedProtocolMessageType( - "ReadRequest", - (_message.Message,), - dict( - DESCRIPTOR=_READREQUEST, - __module__="google.cloud.spanner_v1.proto.spanner_pb2", - __doc__="""The request for [Read][google.spanner.v1.Spanner.Read] and - [StreamingRead][google.spanner.v1.Spanner.StreamingRead]. - - - Attributes: - session: - Required. The session in which the read should be performed. - transaction: - The transaction to use. If none is provided, the default is a - temporary read-only transaction with strong concurrency. - table: - Required. The name of the table in the database to be read. - index: - If non-empty, the name of an index on - [table][google.spanner.v1.ReadRequest.table]. This index is - used instead of the table primary key when interpreting - [key\_set][google.spanner.v1.ReadRequest.key\_set] and sorting - result rows. See - [key\_set][google.spanner.v1.ReadRequest.key\_set] for further - information. - columns: - Required. The columns of - [table][google.spanner.v1.ReadRequest.table] to be returned - for each row matching this request. - key_set: - Required. ``key_set`` identifies the rows to be yielded. - ``key_set`` names the primary keys of the rows in - [table][google.spanner.v1.ReadRequest.table] to be yielded, - unless [index][google.spanner.v1.ReadRequest.index] is - present. If [index][google.spanner.v1.ReadRequest.index] is - present, then - [key\_set][google.spanner.v1.ReadRequest.key\_set] instead - names index keys in - [index][google.spanner.v1.ReadRequest.index]. If the [partiti - on\_token][google.spanner.v1.ReadRequest.partition\_token] - field is empty, rows are yielded in table primary key order - (if [index][google.spanner.v1.ReadRequest.index] is empty) or - index key order (if - [index][google.spanner.v1.ReadRequest.index] is non-empty). If - the [partition\_token][google.spanner.v1.ReadRequest.partition - \_token] field is not empty, rows will be yielded in an - unspecified order. It is not an error for the ``key_set`` to - name rows that do not exist in the database. Read yields - nothing for nonexistent rows. - limit: - If greater than zero, only the first ``limit`` rows are - yielded. If ``limit`` is zero, the default is no limit. A - limit cannot be specified if ``partition_token`` is set. - resume_token: - If this request is resuming a previously interrupted read, - ``resume_token`` should be copied from the last - [PartialResultSet][google.spanner.v1.PartialResultSet] yielded - before the interruption. Doing this enables the new read to - resume where the last read left off. The rest of the request - parameters must exactly match the request that yielded this - token. 
- partition_token: - If present, results will be restricted to the specified - partition previously created using PartitionRead(). There must - be an exact match for the values of fields common to this - message and the PartitionReadRequest message used to create - this partition\_token. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.ReadRequest) - ), -) -_sym_db.RegisterMessage(ReadRequest) - -BeginTransactionRequest = _reflection.GeneratedProtocolMessageType( - "BeginTransactionRequest", - (_message.Message,), - dict( - DESCRIPTOR=_BEGINTRANSACTIONREQUEST, - __module__="google.cloud.spanner_v1.proto.spanner_pb2", - __doc__="""The request for - [BeginTransaction][google.spanner.v1.Spanner.BeginTransaction]. - - - Attributes: - session: - Required. The session in which the transaction runs. - options: - Required. Options for the new transaction. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.BeginTransactionRequest) - ), -) -_sym_db.RegisterMessage(BeginTransactionRequest) - -CommitRequest = _reflection.GeneratedProtocolMessageType( - "CommitRequest", - (_message.Message,), - dict( - DESCRIPTOR=_COMMITREQUEST, - __module__="google.cloud.spanner_v1.proto.spanner_pb2", - __doc__="""The request for - [Commit][google.spanner.v1.Spanner.Commit]. - - - Attributes: - session: - Required. The session in which the transaction to be committed - is running. - transaction: - Required. The transaction in which to commit. - transaction_id: - Commit a previously-started transaction. - single_use_transaction: - Execute mutations in a temporary transaction. Note that unlike - commit of a previously-started transaction, commit with a - temporary transaction is non-idempotent. That is, if the - ``CommitRequest`` is sent to Cloud Spanner more than once (for - instance, due to retries in the application, or in the - transport library), it is possible that the mutations are - executed more than once. If this is undesirable, use - [BeginTransaction][google.spanner.v1.Spanner.BeginTransaction] - and [Commit][google.spanner.v1.Spanner.Commit] instead. - mutations: - The mutations to be executed when this transaction commits. - All mutations are applied atomically, in the order they appear - in this list. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.CommitRequest) - ), -) -_sym_db.RegisterMessage(CommitRequest) - -CommitResponse = _reflection.GeneratedProtocolMessageType( - "CommitResponse", - (_message.Message,), - dict( - DESCRIPTOR=_COMMITRESPONSE, - __module__="google.cloud.spanner_v1.proto.spanner_pb2", - __doc__="""The response for - [Commit][google.spanner.v1.Spanner.Commit]. - - - Attributes: - commit_timestamp: - The Cloud Spanner timestamp at which the transaction - committed. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.CommitResponse) - ), -) -_sym_db.RegisterMessage(CommitResponse) - -RollbackRequest = _reflection.GeneratedProtocolMessageType( - "RollbackRequest", - (_message.Message,), - dict( - DESCRIPTOR=_ROLLBACKREQUEST, - __module__="google.cloud.spanner_v1.proto.spanner_pb2", - __doc__="""The request for - [Rollback][google.spanner.v1.Spanner.Rollback]. - - - Attributes: - session: - Required. The session in which the transaction to roll back is - running. - transaction_id: - Required. The transaction to roll back. 
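The two commit modes in the `CommitRequest` docstring above map onto the Python client as `database.run_in_transaction` (commit of a previously begun transaction, safely retriable) versus `database.batch()` (a single-use transaction, hence non-idempotent under application-level retries). A batch sketch with invented rows:

.. code-block:: python

    from google.cloud import spanner

    client = spanner.Client(project="my-project")  # hypothetical project
    database = client.instance("my-instance").database("my-database")

    # Mutations buffer locally and commit atomically, in list order, on exit.
    # Because this is a single-use transaction, retrying the whole block
    # could apply the mutations more than once.
    with database.batch() as batch:
        batch.insert(
            table="Singers",
            columns=("SingerId", "FirstName", "LastName"),
            values=[(1, "Marc", "Richards"), (2, "Catalina", "Smith")],
        )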
- """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.RollbackRequest) - ), -) -_sym_db.RegisterMessage(RollbackRequest) - - -DESCRIPTOR._options = None -_CREATESESSIONREQUEST.fields_by_name["database"]._options = None -_BATCHCREATESESSIONSREQUEST.fields_by_name["database"]._options = None -_BATCHCREATESESSIONSREQUEST.fields_by_name["session_count"]._options = None -_SESSION_LABELSENTRY._options = None -_SESSION._options = None -_GETSESSIONREQUEST.fields_by_name["name"]._options = None -_LISTSESSIONSREQUEST.fields_by_name["database"]._options = None -_DELETESESSIONREQUEST.fields_by_name["name"]._options = None -_EXECUTESQLREQUEST_PARAMTYPESENTRY._options = None -_EXECUTESQLREQUEST.fields_by_name["session"]._options = None -_EXECUTESQLREQUEST.fields_by_name["sql"]._options = None -_EXECUTEBATCHDMLREQUEST_STATEMENT_PARAMTYPESENTRY._options = None -_EXECUTEBATCHDMLREQUEST.fields_by_name["session"]._options = None -_EXECUTEBATCHDMLREQUEST.fields_by_name["transaction"]._options = None -_EXECUTEBATCHDMLREQUEST.fields_by_name["statements"]._options = None -_EXECUTEBATCHDMLREQUEST.fields_by_name["seqno"]._options = None -_PARTITIONQUERYREQUEST_PARAMTYPESENTRY._options = None -_PARTITIONQUERYREQUEST.fields_by_name["session"]._options = None -_PARTITIONQUERYREQUEST.fields_by_name["sql"]._options = None -_PARTITIONREADREQUEST.fields_by_name["session"]._options = None -_PARTITIONREADREQUEST.fields_by_name["table"]._options = None -_PARTITIONREADREQUEST.fields_by_name["key_set"]._options = None -_READREQUEST.fields_by_name["session"]._options = None -_READREQUEST.fields_by_name["table"]._options = None -_READREQUEST.fields_by_name["columns"]._options = None -_READREQUEST.fields_by_name["key_set"]._options = None -_BEGINTRANSACTIONREQUEST.fields_by_name["session"]._options = None -_BEGINTRANSACTIONREQUEST.fields_by_name["options"]._options = None -_COMMITREQUEST.fields_by_name["session"]._options = None -_ROLLBACKREQUEST.fields_by_name["session"]._options = None -_ROLLBACKREQUEST.fields_by_name["transaction_id"]._options = None - -_SPANNER = _descriptor.ServiceDescriptor( - name="Spanner", - full_name="google.spanner.v1.Spanner", - file=DESCRIPTOR, - index=0, - serialized_options=_b( - "\312A\026spanner.googleapis.com\322A[https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/spanner.data" - ), - serialized_start=4716, - serialized_end=7596, - methods=[ - _descriptor.MethodDescriptor( - name="CreateSession", - full_name="google.spanner.v1.Spanner.CreateSession", - index=0, - containing_service=None, - input_type=_CREATESESSIONREQUEST, - output_type=_SESSION, - serialized_options=_b( - '\202\323\344\223\002?":/v1/{database=projects/*/instances/*/databases/*}/sessions:\001*\332A\010database' - ), - ), - _descriptor.MethodDescriptor( - name="BatchCreateSessions", - full_name="google.spanner.v1.Spanner.BatchCreateSessions", - index=1, - containing_service=None, - input_type=_BATCHCREATESESSIONSREQUEST, - output_type=_BATCHCREATESESSIONSRESPONSE, - serialized_options=_b( - '\202\323\344\223\002K"F/v1/{database=projects/*/instances/*/databases/*}/sessions:batchCreate:\001*\332A\026database,session_count' - ), - ), - _descriptor.MethodDescriptor( - name="GetSession", - full_name="google.spanner.v1.Spanner.GetSession", - index=2, - containing_service=None, - input_type=_GETSESSIONREQUEST, - output_type=_SESSION, - serialized_options=_b( - "\202\323\344\223\002:\0228/v1/{name=projects/*/instances/*/databases/*/sessions/*}\332A\004name" - ), - ), - 
_descriptor.MethodDescriptor( - name="ListSessions", - full_name="google.spanner.v1.Spanner.ListSessions", - index=3, - containing_service=None, - input_type=_LISTSESSIONSREQUEST, - output_type=_LISTSESSIONSRESPONSE, - serialized_options=_b( - "\202\323\344\223\002<\022:/v1/{database=projects/*/instances/*/databases/*}/sessions\332A\010database" - ), - ), - _descriptor.MethodDescriptor( - name="DeleteSession", - full_name="google.spanner.v1.Spanner.DeleteSession", - index=4, - containing_service=None, - input_type=_DELETESESSIONREQUEST, - output_type=google_dot_protobuf_dot_empty__pb2._EMPTY, - serialized_options=_b( - "\202\323\344\223\002:*8/v1/{name=projects/*/instances/*/databases/*/sessions/*}\332A\004name" - ), - ), - _descriptor.MethodDescriptor( - name="ExecuteSql", - full_name="google.spanner.v1.Spanner.ExecuteSql", - index=5, - containing_service=None, - input_type=_EXECUTESQLREQUEST, - output_type=google_dot_cloud_dot_spanner__v1_dot_proto_dot_result__set__pb2._RESULTSET, - serialized_options=_b( - '\202\323\344\223\002K"F/v1/{session=projects/*/instances/*/databases/*/sessions/*}:executeSql:\001*' - ), - ), - _descriptor.MethodDescriptor( - name="ExecuteStreamingSql", - full_name="google.spanner.v1.Spanner.ExecuteStreamingSql", - index=6, - containing_service=None, - input_type=_EXECUTESQLREQUEST, - output_type=google_dot_cloud_dot_spanner__v1_dot_proto_dot_result__set__pb2._PARTIALRESULTSET, - serialized_options=_b( - '\202\323\344\223\002T"O/v1/{session=projects/*/instances/*/databases/*/sessions/*}:executeStreamingSql:\001*' - ), - ), - _descriptor.MethodDescriptor( - name="ExecuteBatchDml", - full_name="google.spanner.v1.Spanner.ExecuteBatchDml", - index=7, - containing_service=None, - input_type=_EXECUTEBATCHDMLREQUEST, - output_type=_EXECUTEBATCHDMLRESPONSE, - serialized_options=_b( - '\202\323\344\223\002P"K/v1/{session=projects/*/instances/*/databases/*/sessions/*}:executeBatchDml:\001*' - ), - ), - _descriptor.MethodDescriptor( - name="Read", - full_name="google.spanner.v1.Spanner.Read", - index=8, - containing_service=None, - input_type=_READREQUEST, - output_type=google_dot_cloud_dot_spanner__v1_dot_proto_dot_result__set__pb2._RESULTSET, - serialized_options=_b( - '\202\323\344\223\002E"@/v1/{session=projects/*/instances/*/databases/*/sessions/*}:read:\001*' - ), - ), - _descriptor.MethodDescriptor( - name="StreamingRead", - full_name="google.spanner.v1.Spanner.StreamingRead", - index=9, - containing_service=None, - input_type=_READREQUEST, - output_type=google_dot_cloud_dot_spanner__v1_dot_proto_dot_result__set__pb2._PARTIALRESULTSET, - serialized_options=_b( - '\202\323\344\223\002N"I/v1/{session=projects/*/instances/*/databases/*/sessions/*}:streamingRead:\001*' - ), - ), - _descriptor.MethodDescriptor( - name="BeginTransaction", - full_name="google.spanner.v1.Spanner.BeginTransaction", - index=10, - containing_service=None, - input_type=_BEGINTRANSACTIONREQUEST, - output_type=google_dot_cloud_dot_spanner__v1_dot_proto_dot_transaction__pb2._TRANSACTION, - serialized_options=_b( - '\202\323\344\223\002Q"L/v1/{session=projects/*/instances/*/databases/*/sessions/*}:beginTransaction:\001*\332A\017session,options' - ), - ), - _descriptor.MethodDescriptor( - name="Commit", - full_name="google.spanner.v1.Spanner.Commit", - index=11, - containing_service=None, - input_type=_COMMITREQUEST, - output_type=_COMMITRESPONSE, - serialized_options=_b( - '\202\323\344\223\002G"B/v1/{session=projects/*/instances/*/databases/*/sessions/*}:commit:\001*\332A 
session,transaction_id,mutations\332A(session,single_use_transaction,mutations' - ), - ), - _descriptor.MethodDescriptor( - name="Rollback", - full_name="google.spanner.v1.Spanner.Rollback", - index=12, - containing_service=None, - input_type=_ROLLBACKREQUEST, - output_type=google_dot_protobuf_dot_empty__pb2._EMPTY, - serialized_options=_b( - '\202\323\344\223\002I"D/v1/{session=projects/*/instances/*/databases/*/sessions/*}:rollback:\001*\332A\026session,transaction_id' - ), - ), - _descriptor.MethodDescriptor( - name="PartitionQuery", - full_name="google.spanner.v1.Spanner.PartitionQuery", - index=13, - containing_service=None, - input_type=_PARTITIONQUERYREQUEST, - output_type=_PARTITIONRESPONSE, - serialized_options=_b( - '\202\323\344\223\002O"J/v1/{session=projects/*/instances/*/databases/*/sessions/*}:partitionQuery:\001*' - ), - ), - _descriptor.MethodDescriptor( - name="PartitionRead", - full_name="google.spanner.v1.Spanner.PartitionRead", - index=14, - containing_service=None, - input_type=_PARTITIONREADREQUEST, - output_type=_PARTITIONRESPONSE, - serialized_options=_b( - '\202\323\344\223\002N"I/v1/{session=projects/*/instances/*/databases/*/sessions/*}:partitionRead:\001*' - ), - ), - ], -) -_sym_db.RegisterServiceDescriptor(_SPANNER) - -DESCRIPTOR.services_by_name["Spanner"] = _SPANNER - -# @@protoc_insertion_point(module_scope) diff --git a/google/cloud/spanner_v1/proto/spanner_pb2_grpc.py b/google/cloud/spanner_v1/proto/spanner_pb2_grpc.py deleted file mode 100644 index 1ac098d328..0000000000 --- a/google/cloud/spanner_v1/proto/spanner_pb2_grpc.py +++ /dev/null @@ -1,404 +0,0 @@ -# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! -import grpc - -from google.cloud.spanner_v1.proto import ( - result_set_pb2 as google_dot_cloud_dot_spanner__v1_dot_proto_dot_result__set__pb2, -) -from google.cloud.spanner_v1.proto import ( - spanner_pb2 as google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2, -) -from google.cloud.spanner_v1.proto import ( - transaction_pb2 as google_dot_cloud_dot_spanner__v1_dot_proto_dot_transaction__pb2, -) -from google.protobuf import empty_pb2 as google_dot_protobuf_dot_empty__pb2 - - -class SpannerStub(object): - """Cloud Spanner API - - The Cloud Spanner API can be used to manage sessions and execute - transactions on data stored in Cloud Spanner databases. - """ - - def __init__(self, channel): - """Constructor. - - Args: - channel: A grpc.Channel. 
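The ``Commit`` method descriptor above routes ``session, transaction_id, mutations`` (or ``session, single_use_transaction, mutations``): a commit request carries the mutations to apply. In the handwritten client layer this is normally driven through ``Database.batch``, which buffers mutations locally and sends them in a single ``Commit`` RPC. A minimal sketch; the instance, database, table, and column names are placeholders:

.. code-block:: python

    from google.cloud import spanner

    client = spanner.Client()
    instance = client.instance("my-instance")    # placeholder names
    database = instance.database("my-database")

    # Mutations queue up locally; the Commit RPC is sent when the
    # context manager exits without raising.
    with database.batch() as batch:
        batch.insert(
            table="Singers",                     # hypothetical table
            columns=("SingerId", "FirstName", "LastName"),
            values=[(1, "Marc", "Richards")],
        )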
- """ - self.CreateSession = channel.unary_unary( - "/google.spanner.v1.Spanner/CreateSession", - request_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.CreateSessionRequest.SerializeToString, - response_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.Session.FromString, - ) - self.BatchCreateSessions = channel.unary_unary( - "/google.spanner.v1.Spanner/BatchCreateSessions", - request_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.BatchCreateSessionsRequest.SerializeToString, - response_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.BatchCreateSessionsResponse.FromString, - ) - self.GetSession = channel.unary_unary( - "/google.spanner.v1.Spanner/GetSession", - request_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.GetSessionRequest.SerializeToString, - response_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.Session.FromString, - ) - self.ListSessions = channel.unary_unary( - "/google.spanner.v1.Spanner/ListSessions", - request_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.ListSessionsRequest.SerializeToString, - response_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.ListSessionsResponse.FromString, - ) - self.DeleteSession = channel.unary_unary( - "/google.spanner.v1.Spanner/DeleteSession", - request_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.DeleteSessionRequest.SerializeToString, - response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString, - ) - self.ExecuteSql = channel.unary_unary( - "/google.spanner.v1.Spanner/ExecuteSql", - request_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.ExecuteSqlRequest.SerializeToString, - response_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_result__set__pb2.ResultSet.FromString, - ) - self.ExecuteStreamingSql = channel.unary_stream( - "/google.spanner.v1.Spanner/ExecuteStreamingSql", - request_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.ExecuteSqlRequest.SerializeToString, - response_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_result__set__pb2.PartialResultSet.FromString, - ) - self.ExecuteBatchDml = channel.unary_unary( - "/google.spanner.v1.Spanner/ExecuteBatchDml", - request_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.ExecuteBatchDmlRequest.SerializeToString, - response_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.ExecuteBatchDmlResponse.FromString, - ) - self.Read = channel.unary_unary( - "/google.spanner.v1.Spanner/Read", - request_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.ReadRequest.SerializeToString, - response_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_result__set__pb2.ResultSet.FromString, - ) - self.StreamingRead = channel.unary_stream( - "/google.spanner.v1.Spanner/StreamingRead", - request_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.ReadRequest.SerializeToString, - response_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_result__set__pb2.PartialResultSet.FromString, - ) - self.BeginTransaction = channel.unary_unary( - "/google.spanner.v1.Spanner/BeginTransaction", - request_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.BeginTransactionRequest.SerializeToString, - 
response_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_transaction__pb2.Transaction.FromString, - ) - self.Commit = channel.unary_unary( - "/google.spanner.v1.Spanner/Commit", - request_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.CommitRequest.SerializeToString, - response_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.CommitResponse.FromString, - ) - self.Rollback = channel.unary_unary( - "/google.spanner.v1.Spanner/Rollback", - request_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.RollbackRequest.SerializeToString, - response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString, - ) - self.PartitionQuery = channel.unary_unary( - "/google.spanner.v1.Spanner/PartitionQuery", - request_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.PartitionQueryRequest.SerializeToString, - response_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.PartitionResponse.FromString, - ) - self.PartitionRead = channel.unary_unary( - "/google.spanner.v1.Spanner/PartitionRead", - request_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.PartitionReadRequest.SerializeToString, - response_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.PartitionResponse.FromString, - ) - - -class SpannerServicer(object): - """Cloud Spanner API - - The Cloud Spanner API can be used to manage sessions and execute - transactions on data stored in Cloud Spanner databases. - """ - - def CreateSession(self, request, context): - """Creates a new session. A session can be used to perform - transactions that read and/or modify data in a Cloud Spanner database. - Sessions are meant to be reused for many consecutive - transactions. - - Sessions can only execute one transaction at a time. To execute - multiple concurrent read-write/write-only transactions, create - multiple sessions. Note that standalone reads and queries use a - transaction internally, and count toward the one transaction - limit. - - Active sessions use additional server resources, so it is a good idea to - delete idle and unneeded sessions. - Aside from explicit deletes, Cloud Spanner may delete sessions for which no - operations are sent for more than an hour. If a session is deleted, - requests to it return `NOT_FOUND`. - - Idle sessions can be kept alive by sending a trivial SQL query - periodically, e.g., `"SELECT 1"`. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def BatchCreateSessions(self, request, context): - """Creates multiple new sessions. - - This API can be used to initialize a session cache on the clients. - See https://goo.gl/TgSFN2 for best practices on session cache management. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def GetSession(self, request, context): - """Gets a session. Returns `NOT_FOUND` if the session does not exist. - This is mainly useful for determining whether a session is still - alive. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def ListSessions(self, request, context): - """Lists all sessions in a given database. 
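The session docstrings above (``CreateSession``, ``GetSession``, ``DeleteSession``) describe the lifecycle that the handwritten ``Session`` wrapper drives directly: create a session, check that it is still alive (Cloud Spanner may delete sessions idle for more than an hour), and delete it when finished. A minimal sketch with placeholder resource names:

.. code-block:: python

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    session = database.session()
    session.create()           # CreateSession RPC
    assert session.exists()    # GetSession; False once the backend drops it
    session.delete()           # DeleteSession releases server resources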
- """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def DeleteSession(self, request, context): - """Ends a session, releasing server resources associated with it. This will - asynchronously trigger cancellation of any operations that are running with - this session. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def ExecuteSql(self, request, context): - """Executes an SQL statement, returning all results in a single reply. This - method cannot be used to return a result set larger than 10 MiB; - if the query yields more data than that, the query fails with - a `FAILED_PRECONDITION` error. - - Operations inside read-write transactions might return `ABORTED`. If - this occurs, the application should restart the transaction from - the beginning. See [Transaction][google.spanner.v1.Transaction] for more details. - - Larger result sets can be fetched in streaming fashion by calling - [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql] instead. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def ExecuteStreamingSql(self, request, context): - """Like [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql], except returns the result - set as a stream. Unlike [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql], there - is no limit on the size of the returned result set. However, no - individual row in the result set can exceed 100 MiB, and no - column value can exceed 10 MiB. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def ExecuteBatchDml(self, request, context): - """Executes a batch of SQL DML statements. This method allows many statements - to be run with lower latency than submitting them sequentially with - [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. - - Statements are executed in sequential order. A request can succeed even if - a statement fails. The [ExecuteBatchDmlResponse.status][google.spanner.v1.ExecuteBatchDmlResponse.status] field in the - response provides information about the statement that failed. Clients must - inspect this field to determine whether an error occurred. - - Execution stops after the first failed statement; the remaining statements - are not executed. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def Read(self, request, context): - """Reads rows from the database using key lookups and scans, as a - simple key/value style alternative to - [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. This method cannot be used to - return a result set larger than 10 MiB; if the read matches more - data than that, the read fails with a `FAILED_PRECONDITION` - error. - - Reads inside read-write transactions might return `ABORTED`. If - this occurs, the application should restart the transaction from - the beginning. See [Transaction][google.spanner.v1.Transaction] for more details. - - Larger result sets can be yielded in streaming fashion by calling - [StreamingRead][google.spanner.v1.Spanner.StreamingRead] instead. 
- """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def StreamingRead(self, request, context): - """Like [Read][google.spanner.v1.Spanner.Read], except returns the result set as a - stream. Unlike [Read][google.spanner.v1.Spanner.Read], there is no limit on the - size of the returned result set. However, no individual row in - the result set can exceed 100 MiB, and no column value can exceed - 10 MiB. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def BeginTransaction(self, request, context): - """Begins a new transaction. This step can often be skipped: - [Read][google.spanner.v1.Spanner.Read], [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] and - [Commit][google.spanner.v1.Spanner.Commit] can begin a new transaction as a - side-effect. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def Commit(self, request, context): - """Commits a transaction. The request includes the mutations to be - applied to rows in the database. - - `Commit` might return an `ABORTED` error. This can occur at any time; - commonly, the cause is conflicts with concurrent - transactions. However, it can also happen for a variety of other - reasons. If `Commit` returns `ABORTED`, the caller should re-attempt - the transaction from the beginning, re-using the same session. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def Rollback(self, request, context): - """Rolls back a transaction, releasing any locks it holds. It is a good - idea to call this for any transaction that includes one or more - [Read][google.spanner.v1.Spanner.Read] or [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] requests and - ultimately decides not to commit. - - `Rollback` returns `OK` if it successfully aborts the transaction, the - transaction was already aborted, or the transaction is not - found. `Rollback` never returns `ABORTED`. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def PartitionQuery(self, request, context): - """Creates a set of partition tokens that can be used to execute a query - operation in parallel. Each of the returned partition tokens can be used - by [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql] to specify a subset - of the query result to read. The same session and read-only transaction - must be used by the PartitionQueryRequest used to create the - partition tokens and the ExecuteSqlRequests that use the partition tokens. - - Partition tokens become invalid when the session used to create them - is deleted, is idle for too long, begins a new transaction, or becomes too - old. When any of these happen, it is not possible to resume the query, and - the whole operation must be restarted from the beginning. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - def PartitionRead(self, request, context): - """Creates a set of partition tokens that can be used to execute a read - operation in parallel. 
Each of the returned partition tokens can be used - by [StreamingRead][google.spanner.v1.Spanner.StreamingRead] to specify a subset of the read - result to read. The same session and read-only transaction must be used by - the PartitionReadRequest used to create the partition tokens and the - ReadRequests that use the partition tokens. There are no ordering - guarantees on rows returned among the returned partition tokens, or even - within each individual StreamingRead call issued with a partition_token. - - Partition tokens become invalid when the session used to create them - is deleted, is idle for too long, begins a new transaction, or becomes too - old. When any of these happen, it is not possible to resume the read, and - the whole operation must be restarted from the beginning. - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - -def add_SpannerServicer_to_server(servicer, server): - rpc_method_handlers = { - "CreateSession": grpc.unary_unary_rpc_method_handler( - servicer.CreateSession, - request_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.CreateSessionRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.Session.SerializeToString, - ), - "BatchCreateSessions": grpc.unary_unary_rpc_method_handler( - servicer.BatchCreateSessions, - request_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.BatchCreateSessionsRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.BatchCreateSessionsResponse.SerializeToString, - ), - "GetSession": grpc.unary_unary_rpc_method_handler( - servicer.GetSession, - request_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.GetSessionRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.Session.SerializeToString, - ), - "ListSessions": grpc.unary_unary_rpc_method_handler( - servicer.ListSessions, - request_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.ListSessionsRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.ListSessionsResponse.SerializeToString, - ), - "DeleteSession": grpc.unary_unary_rpc_method_handler( - servicer.DeleteSession, - request_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.DeleteSessionRequest.FromString, - response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString, - ), - "ExecuteSql": grpc.unary_unary_rpc_method_handler( - servicer.ExecuteSql, - request_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.ExecuteSqlRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_result__set__pb2.ResultSet.SerializeToString, - ), - "ExecuteStreamingSql": grpc.unary_stream_rpc_method_handler( - servicer.ExecuteStreamingSql, - request_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.ExecuteSqlRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_result__set__pb2.PartialResultSet.SerializeToString, - ), - "ExecuteBatchDml": grpc.unary_unary_rpc_method_handler( - servicer.ExecuteBatchDml, - request_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.ExecuteBatchDmlRequest.FromString, - 
response_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.ExecuteBatchDmlResponse.SerializeToString, - ), - "Read": grpc.unary_unary_rpc_method_handler( - servicer.Read, - request_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.ReadRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_result__set__pb2.ResultSet.SerializeToString, - ), - "StreamingRead": grpc.unary_stream_rpc_method_handler( - servicer.StreamingRead, - request_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.ReadRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_result__set__pb2.PartialResultSet.SerializeToString, - ), - "BeginTransaction": grpc.unary_unary_rpc_method_handler( - servicer.BeginTransaction, - request_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.BeginTransactionRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_transaction__pb2.Transaction.SerializeToString, - ), - "Commit": grpc.unary_unary_rpc_method_handler( - servicer.Commit, - request_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.CommitRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.CommitResponse.SerializeToString, - ), - "Rollback": grpc.unary_unary_rpc_method_handler( - servicer.Rollback, - request_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.RollbackRequest.FromString, - response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString, - ), - "PartitionQuery": grpc.unary_unary_rpc_method_handler( - servicer.PartitionQuery, - request_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.PartitionQueryRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.PartitionResponse.SerializeToString, - ), - "PartitionRead": grpc.unary_unary_rpc_method_handler( - servicer.PartitionRead, - request_deserializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.PartitionReadRequest.FromString, - response_serializer=google_dot_cloud_dot_spanner__v1_dot_proto_dot_spanner__pb2.PartitionResponse.SerializeToString, - ), - } - generic_handler = grpc.method_handlers_generic_handler( - "google.spanner.v1.Spanner", rpc_method_handlers - ) - server.add_generic_rpc_handlers((generic_handler,)) diff --git a/google/cloud/spanner_v1/proto/transaction.proto b/google/cloud/spanner_v1/proto/transaction.proto deleted file mode 100644 index 6a429f5d41..0000000000 --- a/google/cloud/spanner_v1/proto/transaction.proto +++ /dev/null @@ -1,455 +0,0 @@ -// Copyright 2020 Google LLC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
- -syntax = "proto3"; - -package google.spanner.v1; - -import "google/protobuf/duration.proto"; -import "google/protobuf/timestamp.proto"; -import "google/api/annotations.proto"; - -option csharp_namespace = "Google.Cloud.Spanner.V1"; -option go_package = "google.golang.org/genproto/googleapis/spanner/v1;spanner"; -option java_multiple_files = true; -option java_outer_classname = "TransactionProto"; -option java_package = "com.google.spanner.v1"; -option php_namespace = "Google\\Cloud\\Spanner\\V1"; - -// # Transactions -// -// -// Each session can have at most one active transaction at a time. After the -// active transaction is completed, the session can immediately be -// re-used for the next transaction. It is not necessary to create a -// new session for each transaction. -// -// # Transaction Modes -// -// Cloud Spanner supports three transaction modes: -// -// 1. Locking read-write. This type of transaction is the only way -// to write data into Cloud Spanner. These transactions rely on -// pessimistic locking and, if necessary, two-phase commit. -// Locking read-write transactions may abort, requiring the -// application to retry. -// -// 2. Snapshot read-only. This transaction type provides guaranteed -// consistency across several reads, but does not allow -// writes. Snapshot read-only transactions can be configured to -// read at timestamps in the past. Snapshot read-only -// transactions do not need to be committed. -// -// 3. Partitioned DML. This type of transaction is used to execute -// a single Partitioned DML statement. Partitioned DML partitions -// the key space and runs the DML statement over each partition -// in parallel using separate, internal transactions that commit -// independently. Partitioned DML transactions do not need to be -// committed. -// -// For transactions that only read, snapshot read-only transactions -// provide simpler semantics and are almost always faster. In -// particular, read-only transactions do not take locks, so they do -// not conflict with read-write transactions. As a consequence of not -// taking locks, they also do not abort, so retry loops are not needed. -// -// Transactions may only read/write data in a single database. They -// may, however, read/write data in different tables within that -// database. -// -// ## Locking Read-Write Transactions -// -// Locking transactions may be used to atomically read-modify-write -// data anywhere in a database. This type of transaction is externally -// consistent. -// -// Clients should attempt to minimize the amount of time a transaction -// is active. Faster transactions commit with higher probability -// and cause less contention. Cloud Spanner attempts to keep read locks -// active as long as the transaction continues to do reads, and the -// transaction has not been terminated by -// [Commit][google.spanner.v1.Spanner.Commit] or -// [Rollback][google.spanner.v1.Spanner.Rollback]. Long periods of -// inactivity at the client may cause Cloud Spanner to release a -// transaction's locks and abort it. -// -// Conceptually, a read-write transaction consists of zero or more -// reads or SQL statements followed by -// [Commit][google.spanner.v1.Spanner.Commit]. At any time before -// [Commit][google.spanner.v1.Spanner.Commit], the client can send a -// [Rollback][google.spanner.v1.Spanner.Rollback] request to abort the -// transaction. 
-//
-// ### Semantics
-//
-// Cloud Spanner can commit the transaction if all read locks it acquired
-// are still valid at commit time, and it is able to acquire write
-// locks for all writes. Cloud Spanner can abort the transaction for any
-// reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
-// that the transaction has not modified any user data in Cloud Spanner.
-//
-// Unless the transaction commits, Cloud Spanner makes no guarantees about
-// how long the transaction's locks were held for. It is an error to
-// use Cloud Spanner locks for any sort of mutual exclusion other than
-// between Cloud Spanner transactions themselves.
-//
-// ### Retrying Aborted Transactions
-//
-// When a transaction aborts, the application can choose to retry the
-// whole transaction again. To maximize the chances of successfully
-// committing the retry, the client should execute the retry in the
-// same session as the original attempt. The original session's lock
-// priority increases with each consecutive abort, meaning that each
-// attempt has a slightly better chance of success than the previous.
-//
-// Under some circumstances (e.g., many transactions attempting to
-// modify the same row(s)), a transaction can abort many times in a
-// short period before successfully committing. Thus, it is not a good
-// idea to cap the number of retries a transaction can attempt;
-// instead, it is better to limit the total amount of wall time spent
-// retrying.
-//
-// ### Idle Transactions
-//
-// A transaction is considered idle if it has no outstanding reads or
-// SQL queries and has not started a read or SQL query within the last 10
-// seconds. Idle transactions can be aborted by Cloud Spanner so that they
-// don't hold on to locks indefinitely. In that case, the commit will
-// fail with error `ABORTED`.
-//
-// If this behavior is undesirable, periodically executing a simple
-// SQL query in the transaction (e.g., `SELECT 1`) prevents the
-// transaction from becoming idle.
-//
-// ## Snapshot Read-Only Transactions
-//
-// Snapshot read-only transactions provide a simpler method than
-// locking read-write transactions for doing several consistent
-// reads. However, this type of transaction does not support writes.
-//
-// Snapshot transactions do not take locks. Instead, they work by
-// choosing a Cloud Spanner timestamp, then executing all reads at that
-// timestamp. Since they do not acquire locks, they do not block
-// concurrent read-write transactions.
-//
-// Unlike locking read-write transactions, snapshot read-only
-// transactions never abort. They can fail if the chosen read
-// timestamp is garbage collected; however, the default garbage
-// collection policy is generous enough that most applications do not
-// need to worry about this in practice.
-//
-// Snapshot read-only transactions do not need to call
-// [Commit][google.spanner.v1.Spanner.Commit] or
-// [Rollback][google.spanner.v1.Spanner.Rollback] (and in fact are not
-// permitted to do so).
-//
-// To execute a snapshot transaction, the client specifies a timestamp
-// bound, which tells Cloud Spanner how to choose a read timestamp.
-//
-// The types of timestamp bound are:
-//
-//   - Strong (the default).
-//   - Bounded staleness.
-//   - Exact staleness.
-//
-// If the Cloud Spanner database to be read is geographically distributed,
-// stale read-only transactions can execute more quickly than strong
-// or read-write transactions, because they are able to execute far
-// from the leader replica.
-// -// Each type of timestamp bound is discussed in detail below. -// -// ### Strong -// -// Strong reads are guaranteed to see the effects of all transactions -// that have committed before the start of the read. Furthermore, all -// rows yielded by a single read are consistent with each other -- if -// any part of the read observes a transaction, all parts of the read -// see the transaction. -// -// Strong reads are not repeatable: two consecutive strong read-only -// transactions might return inconsistent results if there are -// concurrent writes. If consistency across reads is required, the -// reads should be executed within a transaction or at an exact read -// timestamp. -// -// See [TransactionOptions.ReadOnly.strong][google.spanner.v1.TransactionOptions.ReadOnly.strong]. -// -// ### Exact Staleness -// -// These timestamp bounds execute reads at a user-specified -// timestamp. Reads at a timestamp are guaranteed to see a consistent -// prefix of the global transaction history: they observe -// modifications done by all transactions with a commit timestamp <= -// the read timestamp, and observe none of the modifications done by -// transactions with a larger commit timestamp. They will block until -// all conflicting transactions that may be assigned commit timestamps -// <= the read timestamp have finished. -// -// The timestamp can either be expressed as an absolute Cloud Spanner commit -// timestamp or a staleness relative to the current time. -// -// These modes do not require a "negotiation phase" to pick a -// timestamp. As a result, they execute slightly faster than the -// equivalent boundedly stale concurrency modes. On the other hand, -// boundedly stale reads usually return fresher results. -// -// See [TransactionOptions.ReadOnly.read_timestamp][google.spanner.v1.TransactionOptions.ReadOnly.read_timestamp] and -// [TransactionOptions.ReadOnly.exact_staleness][google.spanner.v1.TransactionOptions.ReadOnly.exact_staleness]. -// -// ### Bounded Staleness -// -// Bounded staleness modes allow Cloud Spanner to pick the read timestamp, -// subject to a user-provided staleness bound. Cloud Spanner chooses the -// newest timestamp within the staleness bound that allows execution -// of the reads at the closest available replica without blocking. -// -// All rows yielded are consistent with each other -- if any part of -// the read observes a transaction, all parts of the read see the -// transaction. Boundedly stale reads are not repeatable: two stale -// reads, even if they use the same staleness bound, can execute at -// different timestamps and thus return inconsistent results. -// -// Boundedly stale reads execute in two phases: the first phase -// negotiates a timestamp among all replicas needed to serve the -// read. In the second phase, reads are executed at the negotiated -// timestamp. -// -// As a result of the two phase execution, bounded staleness reads are -// usually a little slower than comparable exact staleness -// reads. However, they are typically able to return fresher -// results, and are more likely to execute at the closest replica. -// -// Because the timestamp negotiation requires up-front knowledge of -// which rows will be read, it can only be used with single-use -// read-only transactions. -// -// See [TransactionOptions.ReadOnly.max_staleness][google.spanner.v1.TransactionOptions.ReadOnly.max_staleness] and -// [TransactionOptions.ReadOnly.min_read_timestamp][google.spanner.v1.TransactionOptions.ReadOnly.min_read_timestamp]. 
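The timestamp-bound discussion above maps directly onto the keyword arguments of the handwritten ``Database.snapshot`` factory: the default is a strong read, ``exact_staleness`` / ``read_timestamp`` give repeatable reads, and ``max_staleness`` / ``min_read_timestamp`` let Cloud Spanner negotiate a fresher timestamp (single-use only, as noted above). A minimal sketch with placeholder names:

.. code-block:: python

    import datetime

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    # Strong (default) bound; multi_use lets several reads share one
    # consistent timestamp.
    with database.snapshot(multi_use=True) as snapshot:
        rows = list(snapshot.execute_sql("SELECT COUNT(*) FROM Singers"))

    # Exact staleness: repeatable reads at NOW - 15 seconds.
    with database.snapshot(
        exact_staleness=datetime.timedelta(seconds=15)
    ) as snapshot:
        rows = list(snapshot.execute_sql("SELECT COUNT(*) FROM Singers"))

    # Bounded staleness: Cloud Spanner picks the freshest timestamp within
    # the bound; only valid for single-use snapshots.
    with database.snapshot(
        max_staleness=datetime.timedelta(seconds=15)
    ) as snapshot:
        rows = list(snapshot.execute_sql("SELECT COUNT(*) FROM Singers"))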
-//
-// ### Old Read Timestamps and Garbage Collection
-//
-// Cloud Spanner continuously garbage collects deleted and overwritten data
-// in the background to reclaim storage space. This process is known
-// as "version GC". By default, version GC reclaims versions after they
-// are one hour old. Because of this, Cloud Spanner cannot perform reads
-// at read timestamps more than one hour in the past. This
-// restriction also applies to in-progress reads and/or SQL queries whose
-// timestamps become too old while executing. Reads and SQL queries with
-// too-old read timestamps fail with the error `FAILED_PRECONDITION`.
-//
-// ## Partitioned DML Transactions
-//
-// Partitioned DML transactions are used to execute DML statements with a
-// different execution strategy that provides different, and often better,
-// scalability properties for large, table-wide operations than DML in a
-// ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
-// should prefer using ReadWrite transactions.
-//
-// Partitioned DML partitions the keyspace and runs the DML statement on each
-// partition in separate, internal transactions. These transactions commit
-// automatically when complete, and run independently from one another.
-//
-// To reduce lock contention, this execution strategy only acquires read locks
-// on rows that match the WHERE clause of the statement. Additionally, the
-// smaller per-partition transactions hold locks for less time.
-//
-// That said, Partitioned DML is not a drop-in replacement for standard DML used
-// in ReadWrite transactions.
-//
-// - The DML statement must be fully-partitionable. Specifically, the statement
-//   must be expressible as the union of many statements which each access only
-//   a single row of the table.
-//
-// - The statement is not applied atomically to all rows of the table. Rather,
-//   the statement is applied atomically to partitions of the table, in
-//   independent transactions. Secondary index rows are updated atomically
-//   with the base table rows.
-//
-// - Partitioned DML does not guarantee exactly-once execution semantics
-//   against a partition. The statement will be applied at least once to each
-//   partition. It is strongly recommended that the DML statement be
-//   idempotent to avoid unexpected results. For instance, it is potentially
-//   dangerous to run a statement such as
-//   `UPDATE table SET column = column + 1` as it could be run multiple times
-//   against some rows.
-//
-// - The partitions are committed automatically - there is no support for
-//   Commit or Rollback. If the call returns an error, or if the client issuing
-//   the ExecuteSql call dies, it is possible that some rows had the statement
-//   executed on them successfully. It is also possible that the statement was
-//   never executed against other rows.
-//
-// - Partitioned DML transactions may only contain the execution of a single
-//   DML statement via ExecuteSql or ExecuteStreamingSql.
-//
-// - If any error is encountered during the execution of the partitioned DML
-//   operation (for instance, a UNIQUE INDEX violation, division by zero, or a
-//   value that cannot be stored due to schema constraints), then the
-//   operation is stopped at that point and an error is returned. It is
-//   possible that at this point, some partitions have been committed (or even
-//   committed multiple times), and other partitions have not been run at all.
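The caveats above are what the handwritten ``Database.execute_partitioned_dml`` helper exposes: one fully partitionable statement, applied at-least-once per partition, with a lower-bound row count returned. A minimal sketch, assuming a hypothetical ``Albums`` table:

.. code-block:: python

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    # The statement must be fully partitionable and should be idempotent,
    # since some partitions may see it applied more than once.
    row_count = database.execute_partitioned_dml(
        "UPDATE Albums SET MarketingBudget = 0 WHERE MarketingBudget IS NULL"
    )
    print("Rows affected (lower bound): {}".format(row_count))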
-// -// Given the above, Partitioned DML is good fit for large, database-wide, -// operations that are idempotent, such as deleting old rows from a very large -// table. -message TransactionOptions { - // Message type to initiate a read-write transaction. Currently this - // transaction type has no options. - message ReadWrite { - - } - - // Message type to initiate a Partitioned DML transaction. - message PartitionedDml { - - } - - // Message type to initiate a read-only transaction. - message ReadOnly { - // How to choose the timestamp for the read-only transaction. - oneof timestamp_bound { - // Read at a timestamp where all previously committed transactions - // are visible. - bool strong = 1; - - // Executes all reads at a timestamp >= `min_read_timestamp`. - // - // This is useful for requesting fresher data than some previous - // read, or data that is fresh enough to observe the effects of some - // previously committed transaction whose timestamp is known. - // - // Note that this option can only be used in single-use transactions. - // - // A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds. - // Example: `"2014-10-02T15:01:23.045123456Z"`. - google.protobuf.Timestamp min_read_timestamp = 2; - - // Read data at a timestamp >= `NOW - max_staleness` - // seconds. Guarantees that all writes that have committed more - // than the specified number of seconds ago are visible. Because - // Cloud Spanner chooses the exact timestamp, this mode works even if - // the client's local clock is substantially skewed from Cloud Spanner - // commit timestamps. - // - // Useful for reading the freshest data available at a nearby - // replica, while bounding the possible staleness if the local - // replica has fallen behind. - // - // Note that this option can only be used in single-use - // transactions. - google.protobuf.Duration max_staleness = 3; - - // Executes all reads at the given timestamp. Unlike other modes, - // reads at a specific timestamp are repeatable; the same read at - // the same timestamp always returns the same data. If the - // timestamp is in the future, the read will block until the - // specified timestamp, modulo the read's deadline. - // - // Useful for large scale consistent reads such as mapreduces, or - // for coordinating many reads against a consistent snapshot of the - // data. - // - // A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds. - // Example: `"2014-10-02T15:01:23.045123456Z"`. - google.protobuf.Timestamp read_timestamp = 4; - - // Executes all reads at a timestamp that is `exact_staleness` - // old. The timestamp is chosen soon after the read is started. - // - // Guarantees that all writes that have committed more than the - // specified number of seconds ago are visible. Because Cloud Spanner - // chooses the exact timestamp, this mode works even if the client's - // local clock is substantially skewed from Cloud Spanner commit - // timestamps. - // - // Useful for reading at nearby replicas without the distributed - // timestamp negotiation overhead of `max_staleness`. - google.protobuf.Duration exact_staleness = 5; - } - - // If true, the Cloud Spanner-selected read timestamp is included in - // the [Transaction][google.spanner.v1.Transaction] message that describes the transaction. - bool return_read_timestamp = 6; - } - - // Required. The type of transaction. - oneof mode { - // Transaction may write. 
- // - // Authorization to begin a read-write transaction requires - // `spanner.databases.beginOrRollbackReadWriteTransaction` permission - // on the `session` resource. - ReadWrite read_write = 1; - - // Partitioned DML transaction. - // - // Authorization to begin a Partitioned DML transaction requires - // `spanner.databases.beginPartitionedDmlTransaction` permission - // on the `session` resource. - PartitionedDml partitioned_dml = 3; - - // Transaction will not write. - // - // Authorization to begin a read-only transaction requires - // `spanner.databases.beginReadOnlyTransaction` permission - // on the `session` resource. - ReadOnly read_only = 2; - } -} - -// A transaction. -message Transaction { - // `id` may be used to identify the transaction in subsequent - // [Read][google.spanner.v1.Spanner.Read], - // [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql], - // [Commit][google.spanner.v1.Spanner.Commit], or - // [Rollback][google.spanner.v1.Spanner.Rollback] calls. - // - // Single-use read-only transactions do not have IDs, because - // single-use transactions do not support multiple requests. - bytes id = 1; - - // For snapshot read-only transactions, the read timestamp chosen - // for the transaction. Not returned by default: see - // [TransactionOptions.ReadOnly.return_read_timestamp][google.spanner.v1.TransactionOptions.ReadOnly.return_read_timestamp]. - // - // A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds. - // Example: `"2014-10-02T15:01:23.045123456Z"`. - google.protobuf.Timestamp read_timestamp = 2; -} - -// This message is used to select the transaction in which a -// [Read][google.spanner.v1.Spanner.Read] or -// [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] call runs. -// -// See [TransactionOptions][google.spanner.v1.TransactionOptions] for more information about transactions. -message TransactionSelector { - // If no fields are set, the default is a single use transaction - // with strong concurrency. - oneof selector { - // Execute the read or SQL query in a temporary transaction. - // This is the most efficient way to execute a transaction that - // consists of a single SQL query. - TransactionOptions single_use = 1; - - // Execute the read or SQL query in a previously-started transaction. - bytes id = 2; - - // Begin a new transaction and execute this read or SQL query in - // it. The transaction ID of the new transaction is returned in - // [ResultSetMetadata.transaction][google.spanner.v1.ResultSetMetadata.transaction], which is a [Transaction][google.spanner.v1.Transaction]. - TransactionOptions begin = 3; - } -} diff --git a/google/cloud/spanner_v1/proto/transaction_pb2.py b/google/cloud/spanner_v1/proto/transaction_pb2.py deleted file mode 100644 index aa83e3373c..0000000000 --- a/google/cloud/spanner_v1/proto/transaction_pb2.py +++ /dev/null @@ -1,1019 +0,0 @@ -# -*- coding: utf-8 -*- -# Generated by the protocol buffer compiler. DO NOT EDIT! 
-# source: google/cloud/spanner_v1/proto/transaction.proto - -import sys - -_b = sys.version_info[0] < 3 and (lambda x: x) or (lambda x: x.encode("latin1")) -from google.protobuf import descriptor as _descriptor -from google.protobuf import message as _message -from google.protobuf import reflection as _reflection -from google.protobuf import symbol_database as _symbol_database - -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - -from google.protobuf import duration_pb2 as google_dot_protobuf_dot_duration__pb2 -from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2 -from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2 - - -DESCRIPTOR = _descriptor.FileDescriptor( - name="google/cloud/spanner_v1/proto/transaction.proto", - package="google.spanner.v1", - syntax="proto3", - serialized_options=_b( - "\n\025com.google.spanner.v1B\020TransactionProtoP\001Z8google.golang.org/genproto/googleapis/spanner/v1;spanner\252\002\027Google.Cloud.Spanner.V1\312\002\027Google\\Cloud\\Spanner\\V1" - ), - serialized_pb=_b( - '\n/google/cloud/spanner_v1/proto/transaction.proto\x12\x11google.spanner.v1\x1a\x1egoogle/protobuf/duration.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x1cgoogle/api/annotations.proto"\xc3\x04\n\x12TransactionOptions\x12\x45\n\nread_write\x18\x01 \x01(\x0b\x32/.google.spanner.v1.TransactionOptions.ReadWriteH\x00\x12O\n\x0fpartitioned_dml\x18\x03 \x01(\x0b\x32\x34.google.spanner.v1.TransactionOptions.PartitionedDmlH\x00\x12\x43\n\tread_only\x18\x02 \x01(\x0b\x32..google.spanner.v1.TransactionOptions.ReadOnlyH\x00\x1a\x0b\n\tReadWrite\x1a\x10\n\x0ePartitionedDml\x1a\xa8\x02\n\x08ReadOnly\x12\x10\n\x06strong\x18\x01 \x01(\x08H\x00\x12\x38\n\x12min_read_timestamp\x18\x02 \x01(\x0b\x32\x1a.google.protobuf.TimestampH\x00\x12\x32\n\rmax_staleness\x18\x03 \x01(\x0b\x32\x19.google.protobuf.DurationH\x00\x12\x34\n\x0eread_timestamp\x18\x04 \x01(\x0b\x32\x1a.google.protobuf.TimestampH\x00\x12\x34\n\x0f\x65xact_staleness\x18\x05 \x01(\x0b\x32\x19.google.protobuf.DurationH\x00\x12\x1d\n\x15return_read_timestamp\x18\x06 \x01(\x08\x42\x11\n\x0ftimestamp_boundB\x06\n\x04mode"M\n\x0bTransaction\x12\n\n\x02id\x18\x01 \x01(\x0c\x12\x32\n\x0eread_timestamp\x18\x02 \x01(\x0b\x32\x1a.google.protobuf.Timestamp"\xa4\x01\n\x13TransactionSelector\x12;\n\nsingle_use\x18\x01 \x01(\x0b\x32%.google.spanner.v1.TransactionOptionsH\x00\x12\x0c\n\x02id\x18\x02 \x01(\x0cH\x00\x12\x36\n\x05\x62\x65gin\x18\x03 \x01(\x0b\x32%.google.spanner.v1.TransactionOptionsH\x00\x42\n\n\x08selectorB\x99\x01\n\x15\x63om.google.spanner.v1B\x10TransactionProtoP\x01Z8google.golang.org/genproto/googleapis/spanner/v1;spanner\xaa\x02\x17Google.Cloud.Spanner.V1\xca\x02\x17Google\\Cloud\\Spanner\\V1b\x06proto3' - ), - dependencies=[ - google_dot_protobuf_dot_duration__pb2.DESCRIPTOR, - google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR, - google_dot_api_dot_annotations__pb2.DESCRIPTOR, - ], -) - - -_TRANSACTIONOPTIONS_READWRITE = _descriptor.Descriptor( - name="ReadWrite", - full_name="google.spanner.v1.TransactionOptions.ReadWrite", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=409, - serialized_end=420, -) - -_TRANSACTIONOPTIONS_PARTITIONEDDML = _descriptor.Descriptor( - name="PartitionedDml", - 
full_name="google.spanner.v1.TransactionOptions.PartitionedDml", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=422, - serialized_end=438, -) - -_TRANSACTIONOPTIONS_READONLY = _descriptor.Descriptor( - name="ReadOnly", - full_name="google.spanner.v1.TransactionOptions.ReadOnly", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="strong", - full_name="google.spanner.v1.TransactionOptions.ReadOnly.strong", - index=0, - number=1, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="min_read_timestamp", - full_name="google.spanner.v1.TransactionOptions.ReadOnly.min_read_timestamp", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="max_staleness", - full_name="google.spanner.v1.TransactionOptions.ReadOnly.max_staleness", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="read_timestamp", - full_name="google.spanner.v1.TransactionOptions.ReadOnly.read_timestamp", - index=3, - number=4, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="exact_staleness", - full_name="google.spanner.v1.TransactionOptions.ReadOnly.exact_staleness", - index=4, - number=5, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="return_read_timestamp", - full_name="google.spanner.v1.TransactionOptions.ReadOnly.return_read_timestamp", - index=5, - number=6, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[ - _descriptor.OneofDescriptor( - name="timestamp_bound", - full_name="google.spanner.v1.TransactionOptions.ReadOnly.timestamp_bound", - index=0, - containing_type=None, - fields=[], - ) - ], - serialized_start=441, - serialized_end=737, -) - -_TRANSACTIONOPTIONS = _descriptor.Descriptor( - name="TransactionOptions", - full_name="google.spanner.v1.TransactionOptions", - filename=None, - file=DESCRIPTOR, - containing_type=None, 
- fields=[ - _descriptor.FieldDescriptor( - name="read_write", - full_name="google.spanner.v1.TransactionOptions.read_write", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="partitioned_dml", - full_name="google.spanner.v1.TransactionOptions.partitioned_dml", - index=1, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="read_only", - full_name="google.spanner.v1.TransactionOptions.read_only", - index=2, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[ - _TRANSACTIONOPTIONS_READWRITE, - _TRANSACTIONOPTIONS_PARTITIONEDDML, - _TRANSACTIONOPTIONS_READONLY, - ], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[ - _descriptor.OneofDescriptor( - name="mode", - full_name="google.spanner.v1.TransactionOptions.mode", - index=0, - containing_type=None, - fields=[], - ) - ], - serialized_start=166, - serialized_end=745, -) - - -_TRANSACTION = _descriptor.Descriptor( - name="Transaction", - full_name="google.spanner.v1.Transaction", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="id", - full_name="google.spanner.v1.Transaction.id", - index=0, - number=1, - type=12, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b(""), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="read_timestamp", - full_name="google.spanner.v1.Transaction.read_timestamp", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=747, - serialized_end=824, -) - - -_TRANSACTIONSELECTOR = _descriptor.Descriptor( - name="TransactionSelector", - full_name="google.spanner.v1.TransactionSelector", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="single_use", - full_name="google.spanner.v1.TransactionSelector.single_use", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="id", - full_name="google.spanner.v1.TransactionSelector.id", - index=1, - number=2, - type=12, - cpp_type=9, - label=1, - 
has_default_value=False, - default_value=_b(""), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="begin", - full_name="google.spanner.v1.TransactionSelector.begin", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[ - _descriptor.OneofDescriptor( - name="selector", - full_name="google.spanner.v1.TransactionSelector.selector", - index=0, - containing_type=None, - fields=[], - ) - ], - serialized_start=827, - serialized_end=991, -) - -_TRANSACTIONOPTIONS_READWRITE.containing_type = _TRANSACTIONOPTIONS -_TRANSACTIONOPTIONS_PARTITIONEDDML.containing_type = _TRANSACTIONOPTIONS -_TRANSACTIONOPTIONS_READONLY.fields_by_name[ - "min_read_timestamp" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_TRANSACTIONOPTIONS_READONLY.fields_by_name[ - "max_staleness" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_TRANSACTIONOPTIONS_READONLY.fields_by_name[ - "read_timestamp" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_TRANSACTIONOPTIONS_READONLY.fields_by_name[ - "exact_staleness" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_TRANSACTIONOPTIONS_READONLY.containing_type = _TRANSACTIONOPTIONS -_TRANSACTIONOPTIONS_READONLY.oneofs_by_name["timestamp_bound"].fields.append( - _TRANSACTIONOPTIONS_READONLY.fields_by_name["strong"] -) -_TRANSACTIONOPTIONS_READONLY.fields_by_name[ - "strong" -].containing_oneof = _TRANSACTIONOPTIONS_READONLY.oneofs_by_name["timestamp_bound"] -_TRANSACTIONOPTIONS_READONLY.oneofs_by_name["timestamp_bound"].fields.append( - _TRANSACTIONOPTIONS_READONLY.fields_by_name["min_read_timestamp"] -) -_TRANSACTIONOPTIONS_READONLY.fields_by_name[ - "min_read_timestamp" -].containing_oneof = _TRANSACTIONOPTIONS_READONLY.oneofs_by_name["timestamp_bound"] -_TRANSACTIONOPTIONS_READONLY.oneofs_by_name["timestamp_bound"].fields.append( - _TRANSACTIONOPTIONS_READONLY.fields_by_name["max_staleness"] -) -_TRANSACTIONOPTIONS_READONLY.fields_by_name[ - "max_staleness" -].containing_oneof = _TRANSACTIONOPTIONS_READONLY.oneofs_by_name["timestamp_bound"] -_TRANSACTIONOPTIONS_READONLY.oneofs_by_name["timestamp_bound"].fields.append( - _TRANSACTIONOPTIONS_READONLY.fields_by_name["read_timestamp"] -) -_TRANSACTIONOPTIONS_READONLY.fields_by_name[ - "read_timestamp" -].containing_oneof = _TRANSACTIONOPTIONS_READONLY.oneofs_by_name["timestamp_bound"] -_TRANSACTIONOPTIONS_READONLY.oneofs_by_name["timestamp_bound"].fields.append( - _TRANSACTIONOPTIONS_READONLY.fields_by_name["exact_staleness"] -) -_TRANSACTIONOPTIONS_READONLY.fields_by_name[ - "exact_staleness" -].containing_oneof = _TRANSACTIONOPTIONS_READONLY.oneofs_by_name["timestamp_bound"] -_TRANSACTIONOPTIONS.fields_by_name[ - "read_write" -].message_type = _TRANSACTIONOPTIONS_READWRITE -_TRANSACTIONOPTIONS.fields_by_name[ - "partitioned_dml" -].message_type = _TRANSACTIONOPTIONS_PARTITIONEDDML -_TRANSACTIONOPTIONS.fields_by_name[ - "read_only" -].message_type = _TRANSACTIONOPTIONS_READONLY -_TRANSACTIONOPTIONS.oneofs_by_name["mode"].fields.append( - 
_TRANSACTIONOPTIONS.fields_by_name["read_write"] -) -_TRANSACTIONOPTIONS.fields_by_name[ - "read_write" -].containing_oneof = _TRANSACTIONOPTIONS.oneofs_by_name["mode"] -_TRANSACTIONOPTIONS.oneofs_by_name["mode"].fields.append( - _TRANSACTIONOPTIONS.fields_by_name["partitioned_dml"] -) -_TRANSACTIONOPTIONS.fields_by_name[ - "partitioned_dml" -].containing_oneof = _TRANSACTIONOPTIONS.oneofs_by_name["mode"] -_TRANSACTIONOPTIONS.oneofs_by_name["mode"].fields.append( - _TRANSACTIONOPTIONS.fields_by_name["read_only"] -) -_TRANSACTIONOPTIONS.fields_by_name[ - "read_only" -].containing_oneof = _TRANSACTIONOPTIONS.oneofs_by_name["mode"] -_TRANSACTION.fields_by_name[ - "read_timestamp" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_TRANSACTIONSELECTOR.fields_by_name["single_use"].message_type = _TRANSACTIONOPTIONS -_TRANSACTIONSELECTOR.fields_by_name["begin"].message_type = _TRANSACTIONOPTIONS -_TRANSACTIONSELECTOR.oneofs_by_name["selector"].fields.append( - _TRANSACTIONSELECTOR.fields_by_name["single_use"] -) -_TRANSACTIONSELECTOR.fields_by_name[ - "single_use" -].containing_oneof = _TRANSACTIONSELECTOR.oneofs_by_name["selector"] -_TRANSACTIONSELECTOR.oneofs_by_name["selector"].fields.append( - _TRANSACTIONSELECTOR.fields_by_name["id"] -) -_TRANSACTIONSELECTOR.fields_by_name[ - "id" -].containing_oneof = _TRANSACTIONSELECTOR.oneofs_by_name["selector"] -_TRANSACTIONSELECTOR.oneofs_by_name["selector"].fields.append( - _TRANSACTIONSELECTOR.fields_by_name["begin"] -) -_TRANSACTIONSELECTOR.fields_by_name[ - "begin" -].containing_oneof = _TRANSACTIONSELECTOR.oneofs_by_name["selector"] -DESCRIPTOR.message_types_by_name["TransactionOptions"] = _TRANSACTIONOPTIONS -DESCRIPTOR.message_types_by_name["Transaction"] = _TRANSACTION -DESCRIPTOR.message_types_by_name["TransactionSelector"] = _TRANSACTIONSELECTOR -_sym_db.RegisterFileDescriptor(DESCRIPTOR) - -TransactionOptions = _reflection.GeneratedProtocolMessageType( - "TransactionOptions", - (_message.Message,), - dict( - ReadWrite=_reflection.GeneratedProtocolMessageType( - "ReadWrite", - (_message.Message,), - dict( - DESCRIPTOR=_TRANSACTIONOPTIONS_READWRITE, - __module__="google.cloud.spanner_v1.proto.transaction_pb2", - __doc__="""Message type to initiate a read-write transaction. - Currently this transaction type has no options. - - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.TransactionOptions.ReadWrite) - ), - ), - PartitionedDml=_reflection.GeneratedProtocolMessageType( - "PartitionedDml", - (_message.Message,), - dict( - DESCRIPTOR=_TRANSACTIONOPTIONS_PARTITIONEDDML, - __module__="google.cloud.spanner_v1.proto.transaction_pb2", - __doc__="""Message type to initiate a Partitioned DML transaction. - - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.TransactionOptions.PartitionedDml) - ), - ), - ReadOnly=_reflection.GeneratedProtocolMessageType( - "ReadOnly", - (_message.Message,), - dict( - DESCRIPTOR=_TRANSACTIONOPTIONS_READONLY, - __module__="google.cloud.spanner_v1.proto.transaction_pb2", - __doc__="""Message type to initiate a read-only transaction. - - - Attributes: - timestamp_bound: - How to choose the timestamp for the read-only transaction. - strong: - Read at a timestamp where all previously committed - transactions are visible. - min_read_timestamp: - Executes all reads at a timestamp >= ``min_read_timestamp``. 
- This is useful for requesting fresher data than some previous - read, or data that is fresh enough to observe the effects of - some previously committed transaction whose timestamp is - known. Note that this option can only be used in single-use - transactions. A timestamp in RFC3339 UTC "Zulu" format, - accurate to nanoseconds. Example: - ``"2014-10-02T15:01:23.045123456Z"``. - max_staleness: - Read data at a timestamp >= ``NOW - max_staleness`` seconds. - Guarantees that all writes that have committed more than the - specified number of seconds ago are visible. Because Cloud - Spanner chooses the exact timestamp, this mode works even if - the client's local clock is substantially skewed from Cloud - Spanner commit timestamps. Useful for reading the freshest - data available at a nearby replica, while bounding the - possible staleness if the local replica has fallen behind. - Note that this option can only be used in single-use - transactions. - read_timestamp: - Executes all reads at the given timestamp. Unlike other modes, - reads at a specific timestamp are repeatable; the same read at - the same timestamp always returns the same data. If the - timestamp is in the future, the read will block until the - specified timestamp, modulo the read's deadline. Useful for - large scale consistent reads such as mapreduces, or for - coordinating many reads against a consistent snapshot of the - data. A timestamp in RFC3339 UTC "Zulu" format, accurate to - nanoseconds. Example: ``"2014-10-02T15:01:23.045123456Z"``. - exact_staleness: - Executes all reads at a timestamp that is ``exact_staleness`` - old. The timestamp is chosen soon after the read is started. - Guarantees that all writes that have committed more than the - specified number of seconds ago are visible. Because Cloud - Spanner chooses the exact timestamp, this mode works even if - the client's local clock is substantially skewed from Cloud - Spanner commit timestamps. Useful for reading at nearby - replicas without the distributed timestamp negotiation - overhead of ``max_staleness``. - return_read_timestamp: - If true, the Cloud Spanner-selected read timestamp is included - in the [Transaction][google.spanner.v1.Transaction] message - that describes the transaction. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.TransactionOptions.ReadOnly) - ), - ), - DESCRIPTOR=_TRANSACTIONOPTIONS, - __module__="google.cloud.spanner_v1.proto.transaction_pb2", - __doc__="""# Transactions - - Each session can have at most one active transaction at a time. After - the active transaction is completed, the session can immediately be - re-used for the next transaction. It is not necessary to create a new - session for each transaction. - - Transaction Modes - - - Cloud Spanner supports three transaction modes: - - 1. Locking read-write. This type of transaction is the only way to write - data into Cloud Spanner. These transactions rely on pessimistic - locking and, if necessary, two-phase commit. Locking read-write - transactions may abort, requiring the application to retry. - - 2. Snapshot read-only. This transaction type provides guaranteed - consistency across several reads, but does not allow writes. Snapshot - read-only transactions can be configured to read at timestamps in the - past. Snapshot read-only transactions do not need to be committed. - - 3. Partitioned DML. This type of transaction is used to execute a single - Partitioned DML statement. 
Partitioned DML partitions the key space - and runs the DML statement over each partition in parallel using - separate, internal transactions that commit independently. - Partitioned DML transactions do not need to be committed. - - For transactions that only read, snapshot read-only transactions provide - simpler semantics and are almost always faster. In particular, read-only - transactions do not take locks, so they do not conflict with read-write - transactions. As a consequence of not taking locks, they also do not - abort, so retry loops are not needed. - - Transactions may only read/write data in a single database. They may, - however, read/write data in different tables within that database. - - Locking Read-Write Transactions - - - Locking transactions may be used to atomically read-modify-write data - anywhere in a database. This type of transaction is externally - consistent. - - Clients should attempt to minimize the amount of time a transaction is - active. Faster transactions commit with higher probability and cause - less contention. Cloud Spanner attempts to keep read locks active as - long as the transaction continues to do reads, and the transaction has - not been terminated by [Commit][google.spanner.v1.Spanner.Commit] or - [Rollback][google.spanner.v1.Spanner.Rollback]. Long periods of - inactivity at the client may cause Cloud Spanner to release a - transaction's locks and abort it. - - Conceptually, a read-write transaction consists of zero or more reads or - SQL statements followed by [Commit][google.spanner.v1.Spanner.Commit]. - At any time before [Commit][google.spanner.v1.Spanner.Commit], the - client can send a [Rollback][google.spanner.v1.Spanner.Rollback] request - to abort the transaction. - - Semantics - - - Cloud Spanner can commit the transaction if all read locks it acquired - are still valid at commit time, and it is able to acquire write locks - for all writes. Cloud Spanner can abort the transaction for any reason. - If a commit attempt returns ``ABORTED``, Cloud Spanner guarantees that - the transaction has not modified any user data in Cloud Spanner. - - Unless the transaction commits, Cloud Spanner makes no guarantees about - how long the transaction's locks were held for. It is an error to use - Cloud Spanner locks for any sort of mutual exclusion other than between - Cloud Spanner transactions themselves. - - Retrying Aborted Transactions - - - When a transaction aborts, the application can choose to retry the whole - transaction again. To maximize the chances of successfully committing - the retry, the client should execute the retry in the same session as - the original attempt. The original session's lock priority increases - with each consecutive abort, meaning that each attempt has a slightly - better chance of success than the previous. - - Under some circumstances (e.g., many transactions attempting to modify - the same row(s)), a transaction can abort many times in a short period - before successfully committing. Thus, it is not a good idea to cap the - number of retries a transaction can attempt; instead, it is better to - limit the total amount of wall time spent retrying. - - Idle Transactions - - - A transaction is considered idle if it has no outstanding reads or SQL - queries and has not started a read or SQL query within the last 10 - seconds. Idle transactions can be aborted by Cloud Spanner so that they - don't hold on to locks indefinitely. In that case, the commit will fail - with error ``ABORTED``. 
- - If this behavior is undesirable, periodically executing a simple SQL - query in the transaction (e.g., ``SELECT 1``) prevents the transaction - from becoming idle. - - Snapshot Read-Only Transactions - - - Snapshot read-only transactions provides a simpler method than locking - read-write transactions for doing several consistent reads. However, - this type of transaction does not support writes. - - Snapshot transactions do not take locks. Instead, they work by choosing - a Cloud Spanner timestamp, then executing all reads at that timestamp. - Since they do not acquire locks, they do not block concurrent read-write - transactions. - - Unlike locking read-write transactions, snapshot read-only transactions - never abort. They can fail if the chosen read timestamp is garbage - collected; however, the default garbage collection policy is generous - enough that most applications do not need to worry about this in - practice. - - Snapshot read-only transactions do not need to call - [Commit][google.spanner.v1.Spanner.Commit] or - [Rollback][google.spanner.v1.Spanner.Rollback] (and in fact are not - permitted to do so). - - To execute a snapshot transaction, the client specifies a timestamp - bound, which tells Cloud Spanner how to choose a read timestamp. - - The types of timestamp bound are: - - - Strong (the default). - - Bounded staleness. - - Exact staleness. - - If the Cloud Spanner database to be read is geographically distributed, - stale read-only transactions can execute more quickly than strong or - read-write transaction, because they are able to execute far from the - leader replica. - - Each type of timestamp bound is discussed in detail below. - - Strong - - - Strong reads are guaranteed to see the effects of all transactions that - have committed before the start of the read. Furthermore, all rows - yielded by a single read are consistent with each other -- if any part - of the read observes a transaction, all parts of the read see the - transaction. - - Strong reads are not repeatable: two consecutive strong read-only - transactions might return inconsistent results if there are concurrent - writes. If consistency across reads is required, the reads should be - executed within a transaction or at an exact read timestamp. - - See - [TransactionOptions.ReadOnly.strong][google.spanner.v1.TransactionOptions.ReadOnly.strong]. - - Exact Staleness - - - These timestamp bounds execute reads at a user-specified timestamp. - Reads at a timestamp are guaranteed to see a consistent prefix of the - global transaction history: they observe modifications done by all - transactions with a commit timestamp <= the read timestamp, and observe - none of the modifications done by transactions with a larger commit - timestamp. They will block until all conflicting transactions that may - be assigned commit timestamps <= the read timestamp have finished. - - The timestamp can either be expressed as an absolute Cloud Spanner - commit timestamp or a staleness relative to the current time. - - These modes do not require a "negotiation phase" to pick a timestamp. As - a result, they execute slightly faster than the equivalent boundedly - stale concurrency modes. On the other hand, boundedly stale reads - usually return fresher results. - - See - [TransactionOptions.ReadOnly.read\_timestamp][google.spanner.v1.TransactionOptions.ReadOnly.read\_timestamp] - and - [TransactionOptions.ReadOnly.exact\_staleness][google.spanner.v1.TransactionOptions.ReadOnly.exact\_staleness]. 
- - Bounded Staleness - - - Bounded staleness modes allow Cloud Spanner to pick the read timestamp, - subject to a user-provided staleness bound. Cloud Spanner chooses the - newest timestamp within the staleness bound that allows execution of the - reads at the closest available replica without blocking. - - All rows yielded are consistent with each other -- if any part of the - read observes a transaction, all parts of the read see the transaction. - Boundedly stale reads are not repeatable: two stale reads, even if they - use the same staleness bound, can execute at different timestamps and - thus return inconsistent results. - - Boundedly stale reads execute in two phases: the first phase negotiates - a timestamp among all replicas needed to serve the read. In the second - phase, reads are executed at the negotiated timestamp. - - As a result of the two phase execution, bounded staleness reads are - usually a little slower than comparable exact staleness reads. However, - they are typically able to return fresher results, and are more likely - to execute at the closest replica. - - Because the timestamp negotiation requires up-front knowledge of which - rows will be read, it can only be used with single-use read-only - transactions. - - See - [TransactionOptions.ReadOnly.max\_staleness][google.spanner.v1.TransactionOptions.ReadOnly.max\_staleness] - and - [TransactionOptions.ReadOnly.min\_read\_timestamp][google.spanner.v1.TransactionOptions.ReadOnly.min\_read\_timestamp]. - - Old Read Timestamps and Garbage Collection - - - Cloud Spanner continuously garbage collects deleted and overwritten data - in the background to reclaim storage space. This process is known as - "version GC". By default, version GC reclaims versions after they are - one hour old. Because of this, Cloud Spanner cannot perform reads at - read timestamps more than one hour in the past. This restriction also - applies to in-progress reads and/or SQL queries whose timestamp become - too old while executing. Reads and SQL queries with too-old read - timestamps fail with the error ``FAILED_PRECONDITION``. - - Partitioned DML Transactions - - - Partitioned DML transactions are used to execute DML statements with a - different execution strategy that provides different, and often better, - scalability properties for large, table-wide operations than DML in a - ReadWrite transaction. Smaller scoped statements, such as an OLTP - workload, should prefer using ReadWrite transactions. - - Partitioned DML partitions the keyspace and runs the DML statement on - each partition in separate, internal transactions. These transactions - commit automatically when complete, and run independently from one - another. - - To reduce lock contention, this execution strategy only acquires read - locks on rows that match the WHERE clause of the statement. - Additionally, the smaller per-partition transactions hold locks for less - time. - - That said, Partitioned DML is not a drop-in replacement for standard DML - used in ReadWrite transactions. - - - The DML statement must be fully-partitionable. Specifically, the - statement must be expressible as the union of many statements which - each access only a single row of the table. - - - The statement is not applied atomically to all rows of the table. - Rather, the statement is applied atomically to partitions of the - table, in independent transactions. Secondary index rows are updated - atomically with the base table rows. 
- - - Partitioned DML does not guarantee exactly-once execution semantics - against a partition. The statement will be applied at least once to - each partition. It is strongly recommended that the DML statement - should be idempotent to avoid unexpected results. For instance, it is - potentially dangerous to run a statement such as - ``UPDATE table SET column = column + 1`` as it could be run multiple - times against some rows. - - - The partitions are committed automatically - there is no support for - Commit or Rollback. If the call returns an error, or if the client - issuing the ExecuteSql call dies, it is possible that some rows had - the statement executed on them successfully. It is also possible that - statement was never executed against other rows. - - - Partitioned DML transactions may only contain the execution of a - single DML statement via ExecuteSql or ExecuteStreamingSql. - - - If any error is encountered during the execution of the partitioned - DML operation (for instance, a UNIQUE INDEX violation, division by - zero, or a value that cannot be stored due to schema constraints), - then the operation is stopped at that point and an error is returned. - It is possible that at this point, some partitions have been - committed (or even committed multiple times), and other partitions - have not been run at all. - - Given the above, Partitioned DML is good fit for large, database-wide, - operations that are idempotent, such as deleting old rows from a very - large table. - - - Attributes: - mode: - Required. The type of transaction. - read_write: - Transaction may write. Authorization to begin a read-write - transaction requires - ``spanner.databases.beginOrRollbackReadWriteTransaction`` - permission on the ``session`` resource. - partitioned_dml: - Partitioned DML transaction. Authorization to begin a - Partitioned DML transaction requires - ``spanner.databases.beginPartitionedDmlTransaction`` - permission on the ``session`` resource. - read_only: - Transaction will not write. Authorization to begin a read- - only transaction requires - ``spanner.databases.beginReadOnlyTransaction`` permission on - the ``session`` resource. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.TransactionOptions) - ), -) -_sym_db.RegisterMessage(TransactionOptions) -_sym_db.RegisterMessage(TransactionOptions.ReadWrite) -_sym_db.RegisterMessage(TransactionOptions.PartitionedDml) -_sym_db.RegisterMessage(TransactionOptions.ReadOnly) - -Transaction = _reflection.GeneratedProtocolMessageType( - "Transaction", - (_message.Message,), - dict( - DESCRIPTOR=_TRANSACTION, - __module__="google.cloud.spanner_v1.proto.transaction_pb2", - __doc__="""A transaction. - - - Attributes: - id: - \ ``id`` may be used to identify the transaction in subsequent - [Read][google.spanner.v1.Spanner.Read], - [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql], - [Commit][google.spanner.v1.Spanner.Commit], or - [Rollback][google.spanner.v1.Spanner.Rollback] calls. Single- - use read-only transactions do not have IDs, because single-use - transactions do not support multiple requests. - read_timestamp: - For snapshot read-only transactions, the read timestamp chosen - for the transaction. Not returned by default: see [Transaction - Options.ReadOnly.return\_read\_timestamp][google.spanner.v1.Tr - ansactionOptions.ReadOnly.return\_read\_timestamp]. A - timestamp in RFC3339 UTC "Zulu" format, accurate to - nanoseconds. Example: ``"2014-10-02T15:01:23.045123456Z"``. 
- """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.Transaction) - ), -) -_sym_db.RegisterMessage(Transaction) - -TransactionSelector = _reflection.GeneratedProtocolMessageType( - "TransactionSelector", - (_message.Message,), - dict( - DESCRIPTOR=_TRANSACTIONSELECTOR, - __module__="google.cloud.spanner_v1.proto.transaction_pb2", - __doc__="""This message is used to select the transaction in which a - [Read][google.spanner.v1.Spanner.Read] or - [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] call runs. - - See [TransactionOptions][google.spanner.v1.TransactionOptions] for more - information about transactions. - - - Attributes: - selector: - If no fields are set, the default is a single use transaction - with strong concurrency. - single_use: - Execute the read or SQL query in a temporary transaction. This - is the most efficient way to execute a transaction that - consists of a single SQL query. - id: - Execute the read or SQL query in a previously-started - transaction. - begin: - Begin a new transaction and execute this read or SQL query in - it. The transaction ID of the new transaction is returned in [ - ResultSetMetadata.transaction][google.spanner.v1.ResultSetMeta - data.transaction], which is a - [Transaction][google.spanner.v1.Transaction]. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.TransactionSelector) - ), -) -_sym_db.RegisterMessage(TransactionSelector) - - -DESCRIPTOR._options = None -# @@protoc_insertion_point(module_scope) diff --git a/google/cloud/spanner_v1/proto/transaction_pb2_grpc.py b/google/cloud/spanner_v1/proto/transaction_pb2_grpc.py deleted file mode 100644 index 07cb78fe03..0000000000 --- a/google/cloud/spanner_v1/proto/transaction_pb2_grpc.py +++ /dev/null @@ -1,2 +0,0 @@ -# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! -import grpc diff --git a/google/cloud/spanner_v1/proto/type.proto b/google/cloud/spanner_v1/proto/type.proto deleted file mode 100644 index b749d5084f..0000000000 --- a/google/cloud/spanner_v1/proto/type.proto +++ /dev/null @@ -1,118 +0,0 @@ -// Copyright 2020 Google LLC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -syntax = "proto3"; - -package google.spanner.v1; - -import "google/api/annotations.proto"; - -option csharp_namespace = "Google.Cloud.Spanner.V1"; -option go_package = "google.golang.org/genproto/googleapis/spanner/v1;spanner"; -option java_multiple_files = true; -option java_outer_classname = "TypeProto"; -option java_package = "com.google.spanner.v1"; -option php_namespace = "Google\\Cloud\\Spanner\\V1"; - -// `TypeCode` is used as part of [Type][google.spanner.v1.Type] to -// indicate the type of a Cloud Spanner value. -// -// Each legal value of a type can be encoded to or decoded from a JSON -// value, using the encodings described below. All Cloud Spanner values can -// be `null`, regardless of type; `null`s are always encoded as a JSON -// `null`. -enum TypeCode { - // Not specified. - TYPE_CODE_UNSPECIFIED = 0; - - // Encoded as JSON `true` or `false`. 
- BOOL = 1; - - // Encoded as `string`, in decimal format. - INT64 = 2; - - // Encoded as `number`, or the strings `"NaN"`, `"Infinity"`, or - // `"-Infinity"`. - FLOAT64 = 3; - - // Encoded as `string` in RFC 3339 timestamp format. The time zone - // must be present, and must be `"Z"`. - // - // If the schema has the column option - // `allow_commit_timestamp=true`, the placeholder string - // `"spanner.commit_timestamp()"` can be used to instruct the system - // to insert the commit timestamp associated with the transaction - // commit. - TIMESTAMP = 4; - - // Encoded as `string` in RFC 3339 date format. - DATE = 5; - - // Encoded as `string`. - STRING = 6; - - // Encoded as a base64-encoded `string`, as described in RFC 4648, - // section 4. - BYTES = 7; - - // Encoded as `list`, where the list elements are represented - // according to - // [array_element_type][google.spanner.v1.Type.array_element_type]. - ARRAY = 8; - - // Encoded as `list`, where list element `i` is represented according - // to [struct_type.fields[i]][google.spanner.v1.StructType.fields]. - STRUCT = 9; -} - -// `Type` indicates the type of a Cloud Spanner value, as might be stored in a -// table cell or returned from an SQL query. -message Type { - // Required. The [TypeCode][google.spanner.v1.TypeCode] for this type. - TypeCode code = 1; - - // If [code][google.spanner.v1.Type.code] == [ARRAY][google.spanner.v1.TypeCode.ARRAY], then `array_element_type` - // is the type of the array elements. - Type array_element_type = 2; - - // If [code][google.spanner.v1.Type.code] == [STRUCT][google.spanner.v1.TypeCode.STRUCT], then `struct_type` - // provides type information for the struct's fields. - StructType struct_type = 3; -} - -// `StructType` defines the fields of a [STRUCT][google.spanner.v1.TypeCode.STRUCT] type. -message StructType { - // Message representing a single field of a struct. - message Field { - // The name of the field. For reads, this is the column name. For - // SQL queries, it is the column alias (e.g., `"Word"` in the - // query `"SELECT 'hello' AS Word"`), or the column name (e.g., - // `"ColName"` in the query `"SELECT ColName FROM Table"`). Some - // columns might have an empty name (e.g., !"SELECT - // UPPER(ColName)"`). Note that a query result can contain - // multiple fields with the same name. - string name = 1; - - // The type of the field. - Type type = 2; - } - - // The list of fields that make up this struct. Order is - // significant, because values of this struct type are represented as - // lists, where the order of field values matches the order of - // fields in the [StructType][google.spanner.v1.StructType]. In turn, the order of fields - // matches the order of columns in a read request, or the order of - // fields in the `SELECT` clause of a query. - repeated Field fields = 1; -} diff --git a/google/cloud/spanner_v1/proto/type_pb2.py b/google/cloud/spanner_v1/proto/type_pb2.py deleted file mode 100644 index 2ef35b36c6..0000000000 --- a/google/cloud/spanner_v1/proto/type_pb2.py +++ /dev/null @@ -1,357 +0,0 @@ -# -*- coding: utf-8 -*- -# Generated by the protocol buffer compiler. DO NOT EDIT! 
-# source: google/cloud/spanner_v1/proto/type.proto - -import sys - -_b = sys.version_info[0] < 3 and (lambda x: x) or (lambda x: x.encode("latin1")) -from google.protobuf.internal import enum_type_wrapper -from google.protobuf import descriptor as _descriptor -from google.protobuf import message as _message -from google.protobuf import reflection as _reflection -from google.protobuf import symbol_database as _symbol_database - -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - -from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2 - - -DESCRIPTOR = _descriptor.FileDescriptor( - name="google/cloud/spanner_v1/proto/type.proto", - package="google.spanner.v1", - syntax="proto3", - serialized_options=_b( - "\n\025com.google.spanner.v1B\tTypeProtoP\001Z8google.golang.org/genproto/googleapis/spanner/v1;spanner\252\002\027Google.Cloud.Spanner.V1\312\002\027Google\\Cloud\\Spanner\\V1" - ), - serialized_pb=_b( - '\n(google/cloud/spanner_v1/proto/type.proto\x12\x11google.spanner.v1\x1a\x1cgoogle/api/annotations.proto"\x9a\x01\n\x04Type\x12)\n\x04\x63ode\x18\x01 \x01(\x0e\x32\x1b.google.spanner.v1.TypeCode\x12\x33\n\x12\x61rray_element_type\x18\x02 \x01(\x0b\x32\x17.google.spanner.v1.Type\x12\x32\n\x0bstruct_type\x18\x03 \x01(\x0b\x32\x1d.google.spanner.v1.StructType"\x7f\n\nStructType\x12\x33\n\x06\x66ields\x18\x01 \x03(\x0b\x32#.google.spanner.v1.StructType.Field\x1a<\n\x05\x46ield\x12\x0c\n\x04name\x18\x01 \x01(\t\x12%\n\x04type\x18\x02 \x01(\x0b\x32\x17.google.spanner.v1.Type*\x8e\x01\n\x08TypeCode\x12\x19\n\x15TYPE_CODE_UNSPECIFIED\x10\x00\x12\x08\n\x04\x42OOL\x10\x01\x12\t\n\x05INT64\x10\x02\x12\x0b\n\x07\x46LOAT64\x10\x03\x12\r\n\tTIMESTAMP\x10\x04\x12\x08\n\x04\x44\x41TE\x10\x05\x12\n\n\x06STRING\x10\x06\x12\t\n\x05\x42YTES\x10\x07\x12\t\n\x05\x41RRAY\x10\x08\x12\n\n\x06STRUCT\x10\tB\x92\x01\n\x15\x63om.google.spanner.v1B\tTypeProtoP\x01Z8google.golang.org/genproto/googleapis/spanner/v1;spanner\xaa\x02\x17Google.Cloud.Spanner.V1\xca\x02\x17Google\\Cloud\\Spanner\\V1b\x06proto3' - ), - dependencies=[google_dot_api_dot_annotations__pb2.DESCRIPTOR], -) - -_TYPECODE = _descriptor.EnumDescriptor( - name="TypeCode", - full_name="google.spanner.v1.TypeCode", - filename=None, - file=DESCRIPTOR, - values=[ - _descriptor.EnumValueDescriptor( - name="TYPE_CODE_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - ), - _descriptor.EnumValueDescriptor( - name="BOOL", index=1, number=1, serialized_options=None, type=None - ), - _descriptor.EnumValueDescriptor( - name="INT64", index=2, number=2, serialized_options=None, type=None - ), - _descriptor.EnumValueDescriptor( - name="FLOAT64", index=3, number=3, serialized_options=None, type=None - ), - _descriptor.EnumValueDescriptor( - name="TIMESTAMP", index=4, number=4, serialized_options=None, type=None - ), - _descriptor.EnumValueDescriptor( - name="DATE", index=5, number=5, serialized_options=None, type=None - ), - _descriptor.EnumValueDescriptor( - name="STRING", index=6, number=6, serialized_options=None, type=None - ), - _descriptor.EnumValueDescriptor( - name="BYTES", index=7, number=7, serialized_options=None, type=None - ), - _descriptor.EnumValueDescriptor( - name="ARRAY", index=8, number=8, serialized_options=None, type=None - ), - _descriptor.EnumValueDescriptor( - name="STRUCT", index=9, number=9, serialized_options=None, type=None - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=380, - serialized_end=522, -) 
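
The ``TypeCode`` comments in the deleted ``type.proto`` above fully specify how each scalar type is encoded in JSON: ``INT64`` as a decimal string, ``BYTES`` as base64, ``TIMESTAMP`` as RFC 3339 with a ``Z`` zone designator. As a minimal illustration of those rules only (``encode_spanner_value`` is a hypothetical helper, not part of this library):

.. code-block:: python

    import base64
    import datetime


    def encode_spanner_value(value):
        """Encode a Python value using the JSON rules documented above."""
        if value is None:
            return None  # NULLs of any type encode as JSON null.
        if isinstance(value, bool):
            return value  # BOOL: JSON true / false. (Check bool before int.)
        if isinstance(value, int):
            return str(value)  # INT64: decimal string, never a JSON number.
        if isinstance(value, float):
            return value  # FLOAT64: JSON number (the spec maps NaN/Infinity to strings, omitted here).
        if isinstance(value, bytes):
            # BYTES: base64 as described in RFC 4648, section 4.
            return base64.b64encode(value).decode("ascii")
        if isinstance(value, datetime.datetime):
            # TIMESTAMP: RFC 3339 UTC "Zulu" format, e.g. "2014-10-02T15:01:23.045123Z".
            return value.astimezone(datetime.timezone.utc).isoformat().replace("+00:00", "Z")
        return value  # STRING and DATE are already strings.
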
-_sym_db.RegisterEnumDescriptor(_TYPECODE) - -TypeCode = enum_type_wrapper.EnumTypeWrapper(_TYPECODE) -TYPE_CODE_UNSPECIFIED = 0 -BOOL = 1 -INT64 = 2 -FLOAT64 = 3 -TIMESTAMP = 4 -DATE = 5 -STRING = 6 -BYTES = 7 -ARRAY = 8 -STRUCT = 9 - - -_TYPE = _descriptor.Descriptor( - name="Type", - full_name="google.spanner.v1.Type", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="code", - full_name="google.spanner.v1.Type.code", - index=0, - number=1, - type=14, - cpp_type=8, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="array_element_type", - full_name="google.spanner.v1.Type.array_element_type", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="struct_type", - full_name="google.spanner.v1.Type.struct_type", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=94, - serialized_end=248, -) - - -_STRUCTTYPE_FIELD = _descriptor.Descriptor( - name="Field", - full_name="google.spanner.v1.StructType.Field", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="name", - full_name="google.spanner.v1.StructType.Field.name", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=_b("").decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - _descriptor.FieldDescriptor( - name="type", - full_name="google.spanner.v1.StructType.Field.type", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=317, - serialized_end=377, -) - -_STRUCTTYPE = _descriptor.Descriptor( - name="StructType", - full_name="google.spanner.v1.StructType", - filename=None, - file=DESCRIPTOR, - containing_type=None, - fields=[ - _descriptor.FieldDescriptor( - name="fields", - full_name="google.spanner.v1.StructType.fields", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - ) - ], - extensions=[], - nested_types=[_STRUCTTYPE_FIELD], - enum_types=[], - serialized_options=None, - is_extendable=False, - 
syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=250, - serialized_end=377, -) - -_TYPE.fields_by_name["code"].enum_type = _TYPECODE -_TYPE.fields_by_name["array_element_type"].message_type = _TYPE -_TYPE.fields_by_name["struct_type"].message_type = _STRUCTTYPE -_STRUCTTYPE_FIELD.fields_by_name["type"].message_type = _TYPE -_STRUCTTYPE_FIELD.containing_type = _STRUCTTYPE -_STRUCTTYPE.fields_by_name["fields"].message_type = _STRUCTTYPE_FIELD -DESCRIPTOR.message_types_by_name["Type"] = _TYPE -DESCRIPTOR.message_types_by_name["StructType"] = _STRUCTTYPE -DESCRIPTOR.enum_types_by_name["TypeCode"] = _TYPECODE -_sym_db.RegisterFileDescriptor(DESCRIPTOR) - -Type = _reflection.GeneratedProtocolMessageType( - "Type", - (_message.Message,), - dict( - DESCRIPTOR=_TYPE, - __module__="google.cloud.spanner_v1.proto.type_pb2", - __doc__="""\ ``Type`` indicates the type of a Cloud Spanner value, as - might be stored in a table cell or returned from an SQL query. - - - Attributes: - code: - Required. The [TypeCode][google.spanner.v1.TypeCode] for this - type. - array_element_type: - If [code][google.spanner.v1.Type.code] == - [ARRAY][google.spanner.v1.TypeCode.ARRAY], then - ``array_element_type`` is the type of the array elements. - struct_type: - If [code][google.spanner.v1.Type.code] == - [STRUCT][google.spanner.v1.TypeCode.STRUCT], then - ``struct_type`` provides type information for the struct's - fields. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.Type) - ), -) -_sym_db.RegisterMessage(Type) - -StructType = _reflection.GeneratedProtocolMessageType( - "StructType", - (_message.Message,), - dict( - Field=_reflection.GeneratedProtocolMessageType( - "Field", - (_message.Message,), - dict( - DESCRIPTOR=_STRUCTTYPE_FIELD, - __module__="google.cloud.spanner_v1.proto.type_pb2", - __doc__="""Message representing a single field of a struct. - - - Attributes: - name: - The name of the field. For reads, this is the column name. For - SQL queries, it is the column alias (e.g., ``"Word"`` in the - query ``"SELECT 'hello' AS Word"``), or the column name (e.g., - ``"ColName"`` in the query ``"SELECT ColName FROM Table"``). - Some columns might have an empty name (e.g., !"SELECT - UPPER(ColName)"\`). Note that a query result can contain - multiple fields with the same name. - type: - The type of the field. - """, - # @@protoc_insertion_point(class_scope:google.spanner.v1.StructType.Field) - ), - ), - DESCRIPTOR=_STRUCTTYPE, - __module__="google.cloud.spanner_v1.proto.type_pb2", - __doc__="""\ ``StructType`` defines the fields of a - [STRUCT][google.spanner.v1.TypeCode.STRUCT] type. - - - Attributes: - fields: - The list of fields that make up this struct. Order is - significant, because values of this struct type are - represented as lists, where the order of field values matches - the order of fields in the - [StructType][google.spanner.v1.StructType]. In turn, the order - of fields matches the order of columns in a read request, or - the order of fields in the ``SELECT`` clause of a query. 
-        """,
-        # @@protoc_insertion_point(class_scope:google.spanner.v1.StructType)
-    ),
-)
-_sym_db.RegisterMessage(StructType)
-_sym_db.RegisterMessage(StructType.Field)
-
-
-DESCRIPTOR._options = None
-# @@protoc_insertion_point(module_scope)
diff --git a/google/cloud/spanner_v1/proto/type_pb2_grpc.py b/google/cloud/spanner_v1/proto/type_pb2_grpc.py
deleted file mode 100644
index 07cb78fe03..0000000000
--- a/google/cloud/spanner_v1/proto/type_pb2_grpc.py
+++ /dev/null
@@ -1,2 +0,0 @@
-# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
-import grpc
diff --git a/google/cloud/spanner_v1/py.typed b/google/cloud/spanner_v1/py.typed
new file mode 100644
index 0000000000..0989eccd04
--- /dev/null
+++ b/google/cloud/spanner_v1/py.typed
@@ -0,0 +1,2 @@
+# Marker file for PEP 561.
+# The google-cloud-spanner package uses inline types.
diff --git a/google/cloud/spanner_v1/request_id_header.py b/google/cloud/spanner_v1/request_id_header.py
new file mode 100644
index 0000000000..b540b725f5
--- /dev/null
+++ b/google/cloud/spanner_v1/request_id_header.py
@@ -0,0 +1,68 @@
+# Copyright 2024 Google LLC All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+
+REQ_ID_VERSION = 1  # The version of the x-goog-spanner-request-id spec.
+REQ_ID_HEADER_KEY = "x-goog-spanner-request-id"
+
+
+def generate_rand_uint64():
+    b = os.urandom(8)  # eight random bytes, assembled big-endian below
+    return (
+        b[7] & 0xFF
+        | (b[6] & 0xFF) << 8
+        | (b[5] & 0xFF) << 16
+        | (b[4] & 0xFF) << 24
+        | (b[3] & 0xFF) << 32
+        | (b[2] & 0xFF) << 40
+        | (b[1] & 0xFF) << 48
+        | (b[0] & 0xFF) << 56
+    )
+
+
+REQ_RAND_PROCESS_ID = generate_rand_uint64()
+X_GOOG_SPANNER_REQUEST_ID_SPAN_ATTR = "x_goog_spanner_request_id"
+
+
+def with_request_id(
+    client_id, channel_id, nth_request, attempt, other_metadata=[], span=None
+):
+    req_id = build_request_id(client_id, channel_id, nth_request, attempt)
+    all_metadata = (other_metadata or []).copy()
+    all_metadata.append((REQ_ID_HEADER_KEY, req_id))
+
+    if span is not None:
+        span.set_attribute(X_GOOG_SPANNER_REQUEST_ID_SPAN_ATTR, req_id)
+
+    return all_metadata
+
+
+def build_request_id(client_id, channel_id, nth_request, attempt):
+    return f"{REQ_ID_VERSION}.{REQ_RAND_PROCESS_ID}.{client_id}.{channel_id}.{nth_request}.{attempt}"
+
+
+def parse_request_id(request_id_str):
+    splits = request_id_str.split(".")
+    version, rand_process_id, client_id, channel_id, nth_request, nth_attempt = list(
+        map(lambda v: int(v), splits)
+    )
+    return (
+        version,
+        rand_process_id,
+        client_id,
+        channel_id,
+        nth_request,
+        nth_attempt,
+    )
diff --git a/google/cloud/spanner_v1/services/__init__.py b/google/cloud/spanner_v1/services/__init__.py
new file mode 100644
index 0000000000..cbf94b283c
--- /dev/null
+++ b/google/cloud/spanner_v1/services/__init__.py
@@ -0,0 +1,15 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# diff --git a/google/cloud/spanner_v1/services/spanner/__init__.py b/google/cloud/spanner_v1/services/spanner/__init__.py new file mode 100644 index 0000000000..3af41fdc08 --- /dev/null +++ b/google/cloud/spanner_v1/services/spanner/__init__.py @@ -0,0 +1,22 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +from .client import SpannerClient +from .async_client import SpannerAsyncClient + +__all__ = ( + "SpannerClient", + "SpannerAsyncClient", +) diff --git a/google/cloud/spanner_v1/services/spanner/async_client.py b/google/cloud/spanner_v1/services/spanner/async_client.py new file mode 100644 index 0000000000..c48b62d532 --- /dev/null +++ b/google/cloud/spanner_v1/services/spanner/async_client.py @@ -0,0 +1,2208 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
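
Before the generated service code, a quick usage sketch of the ``request_id_header`` helpers added above (the integer arguments are illustrative values, not defaults):

.. code-block:: python

    from google.cloud.spanner_v1.request_id_header import (
        REQ_ID_HEADER_KEY,
        build_request_id,
        parse_request_id,
        with_request_id,
    )

    # Header value layout: version.rand_process_id.client_id.channel_id.nth_request.attempt
    req_id = build_request_id(client_id=1, channel_id=1, nth_request=5, attempt=1)

    # parse_request_id is the exact inverse: six integers come back out.
    version, process_id, client_id, channel_id, nth_request, attempt = parse_request_id(req_id)
    assert (version, client_id, channel_id, nth_request, attempt) == (1, 1, 1, 5, 1)

    # with_request_id copies the metadata list and appends the header,
    # so the caller's list is never mutated.
    metadata = with_request_id(1, 1, 5, 1, other_metadata=[("x-goog-foo", "bar")])
    assert metadata[-1][0] == REQ_ID_HEADER_KEY
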
+# +import logging as std_logging +from collections import OrderedDict +import re +from typing import ( + Dict, + Callable, + Mapping, + MutableMapping, + MutableSequence, + Optional, + AsyncIterable, + Awaitable, + Sequence, + Tuple, + Type, + Union, +) + +from google.cloud.spanner_v1 import gapic_version as package_version + +from google.api_core.client_options import ClientOptions +from google.api_core import exceptions as core_exceptions +from google.api_core import gapic_v1 +from google.api_core import retry_async as retries +from google.auth import credentials as ga_credentials # type: ignore +from google.oauth2 import service_account # type: ignore +import google.protobuf + + +try: + OptionalRetry = Union[retries.AsyncRetry, gapic_v1.method._MethodDefault, None] +except AttributeError: # pragma: NO COVER + OptionalRetry = Union[retries.AsyncRetry, object, None] # type: ignore + +from google.cloud.spanner_v1.services.spanner import pagers +from google.cloud.spanner_v1.types import commit_response +from google.cloud.spanner_v1.types import mutation +from google.cloud.spanner_v1.types import result_set +from google.cloud.spanner_v1.types import spanner +from google.cloud.spanner_v1.types import transaction +from google.protobuf import struct_pb2 # type: ignore +from google.protobuf import timestamp_pb2 # type: ignore +from google.rpc import status_pb2 # type: ignore +from .transports.base import SpannerTransport, DEFAULT_CLIENT_INFO +from .transports.grpc_asyncio import SpannerGrpcAsyncIOTransport +from .client import SpannerClient + +try: + from google.api_core import client_logging # type: ignore + + CLIENT_LOGGING_SUPPORTED = True # pragma: NO COVER +except ImportError: # pragma: NO COVER + CLIENT_LOGGING_SUPPORTED = False + +_LOGGER = std_logging.getLogger(__name__) + + +class SpannerAsyncClient: + """Cloud Spanner API + + The Cloud Spanner API can be used to manage sessions and execute + transactions on data stored in Cloud Spanner databases. + """ + + _client: SpannerClient + + # Copy defaults from the synchronous client for use here. + # Note: DEFAULT_ENDPOINT is deprecated. Use _DEFAULT_ENDPOINT_TEMPLATE instead. 
+    DEFAULT_ENDPOINT = SpannerClient.DEFAULT_ENDPOINT
+    DEFAULT_MTLS_ENDPOINT = SpannerClient.DEFAULT_MTLS_ENDPOINT
+    _DEFAULT_ENDPOINT_TEMPLATE = SpannerClient._DEFAULT_ENDPOINT_TEMPLATE
+    _DEFAULT_UNIVERSE = SpannerClient._DEFAULT_UNIVERSE
+
+    database_path = staticmethod(SpannerClient.database_path)
+    parse_database_path = staticmethod(SpannerClient.parse_database_path)
+    session_path = staticmethod(SpannerClient.session_path)
+    parse_session_path = staticmethod(SpannerClient.parse_session_path)
+    common_billing_account_path = staticmethod(
+        SpannerClient.common_billing_account_path
+    )
+    parse_common_billing_account_path = staticmethod(
+        SpannerClient.parse_common_billing_account_path
+    )
+    common_folder_path = staticmethod(SpannerClient.common_folder_path)
+    parse_common_folder_path = staticmethod(SpannerClient.parse_common_folder_path)
+    common_organization_path = staticmethod(SpannerClient.common_organization_path)
+    parse_common_organization_path = staticmethod(
+        SpannerClient.parse_common_organization_path
+    )
+    common_project_path = staticmethod(SpannerClient.common_project_path)
+    parse_common_project_path = staticmethod(SpannerClient.parse_common_project_path)
+    common_location_path = staticmethod(SpannerClient.common_location_path)
+    parse_common_location_path = staticmethod(SpannerClient.parse_common_location_path)
+
+    @classmethod
+    def from_service_account_info(cls, info: dict, *args, **kwargs):
+        """Creates an instance of this client using the provided credentials
+            info.
+
+        Args:
+            info (dict): The service account private key info.
+            args: Additional arguments to pass to the constructor.
+            kwargs: Additional arguments to pass to the constructor.
+
+        Returns:
+            SpannerAsyncClient: The constructed client.
+        """
+        return SpannerClient.from_service_account_info.__func__(SpannerAsyncClient, info, *args, **kwargs)  # type: ignore
+
+    @classmethod
+    def from_service_account_file(cls, filename: str, *args, **kwargs):
+        """Creates an instance of this client using the provided credentials
+            file.
+
+        Args:
+            filename (str): The path to the service account private key json
+                file.
+            args: Additional arguments to pass to the constructor.
+            kwargs: Additional arguments to pass to the constructor.
+
+        Returns:
+            SpannerAsyncClient: The constructed client.
+        """
+        return SpannerClient.from_service_account_file.__func__(SpannerAsyncClient, filename, *args, **kwargs)  # type: ignore
+
+    from_service_account_json = from_service_account_file
+
+    @classmethod
+    def get_mtls_endpoint_and_cert_source(
+        cls, client_options: Optional[ClientOptions] = None
+    ):
+        """Return the API endpoint and client cert source for mutual TLS.
+
+        The client cert source is determined in the following order:
+        (1) if `GOOGLE_API_USE_CLIENT_CERTIFICATE` environment variable is not "true", the
+        client cert source is None.
+        (2) if `client_options.client_cert_source` is provided, use the provided one; if the
+        default client cert source exists, use the default one; otherwise the client cert
+        source is None.
+
+        The API endpoint is determined in the following order:
+        (1) if `client_options.api_endpoint` is provided, use the provided one.
+        (2) if `GOOGLE_API_USE_MTLS_ENDPOINT` environment variable is "always", use the
+        default mTLS endpoint; if the environment variable is "never", use the default API
+        endpoint; otherwise if client cert source exists, use the default mTLS endpoint, otherwise
+        use the default API endpoint.
+
+        More details can be found at https://google.aip.dev/auth/4114.
+
+        Args:
+            client_options (google.api_core.client_options.ClientOptions): Custom options for the
+                client. Only the `api_endpoint` and `client_cert_source` properties may be used
+                in this method.
+
+        Returns:
+            Tuple[str, Callable[[], Tuple[bytes, bytes]]]: returns the API endpoint and the
+                client cert source to use.
+
+        Raises:
+            google.auth.exceptions.MutualTLSChannelError: If any errors happen.
+        """
+        return SpannerClient.get_mtls_endpoint_and_cert_source(client_options)  # type: ignore
+
+    @property
+    def transport(self) -> SpannerTransport:
+        """Returns the transport used by the client instance.
+
+        Returns:
+            SpannerTransport: The transport used by the client instance.
+        """
+        return self._client.transport
+
+    @property
+    def api_endpoint(self):
+        """Return the API endpoint used by the client instance.
+
+        Returns:
+            str: The API endpoint used by the client instance.
+        """
+        return self._client._api_endpoint
+
+    @property
+    def universe_domain(self) -> str:
+        """Return the universe domain used by the client instance.
+
+        Returns:
+            str: The universe domain used
+                by the client instance.
+        """
+        return self._client._universe_domain
+
+    get_transport_class = SpannerClient.get_transport_class
+
+    def __init__(
+        self,
+        *,
+        credentials: Optional[ga_credentials.Credentials] = None,
+        transport: Optional[
+            Union[str, SpannerTransport, Callable[..., SpannerTransport]]
+        ] = "grpc_asyncio",
+        client_options: Optional[ClientOptions] = None,
+        client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO,
+    ) -> None:
+        """Instantiates the spanner async client.
+
+        Args:
+            credentials (Optional[google.auth.credentials.Credentials]): The
+                authorization credentials to attach to requests. These
+                credentials identify the application to the service; if none
+                are specified, the client will attempt to ascertain the
+                credentials from the environment.
+            transport (Optional[Union[str,SpannerTransport,Callable[..., SpannerTransport]]]):
+                The transport to use, or a Callable that constructs and returns a new transport to use.
+                If a Callable is given, it will be called with the same set of initialization
+                arguments as used in the SpannerTransport constructor.
+                If set to None, a transport is chosen automatically.
+            client_options (Optional[Union[google.api_core.client_options.ClientOptions, dict]]):
+                Custom options for the client.
+
+                1. The ``api_endpoint`` property can be used to override the
+                default endpoint provided by the client when ``transport`` is
+                not explicitly provided. Only if this property is not set and
+                ``transport`` was not explicitly provided, the endpoint is
+                determined by the GOOGLE_API_USE_MTLS_ENDPOINT environment
+                variable, which can have one of the following values:
+                "always" (always use the default mTLS endpoint), "never" (always
+                use the default regular endpoint) and "auto" (auto-switch to the
+                default mTLS endpoint if client certificate is present; this is
+                the default value).
+
+                2. If the GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable
+                is "true", then the ``client_cert_source`` property can be used
+                to provide a client certificate for mTLS transport. If
+                not provided, the default SSL client certificate will be used if
+                present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not
+                set, no client certificate will be used.
+
+                3. The ``universe_domain`` property can be used to override the
+                default "googleapis.com" universe.
Note that ``api_endpoint`` + property still takes precedence; and ``universe_domain`` is + currently not supported for mTLS. + + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + + Raises: + google.auth.exceptions.MutualTlsChannelError: If mutual TLS transport + creation failed for any reason. + """ + self._client = SpannerClient( + credentials=credentials, + transport=transport, + client_options=client_options, + client_info=client_info, + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + std_logging.DEBUG + ): # pragma: NO COVER + _LOGGER.debug( + "Created client `google.spanner_v1.SpannerAsyncClient`.", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "universeDomain": getattr( + self._client._transport._credentials, "universe_domain", "" + ), + "credentialsType": f"{type(self._client._transport._credentials).__module__}.{type(self._client._transport._credentials).__qualname__}", + "credentialsInfo": getattr( + self.transport._credentials, "get_cred_info", lambda: None + )(), + } + if hasattr(self._client._transport, "_credentials") + else { + "serviceName": "google.spanner.v1.Spanner", + "credentialsType": None, + }, + ) + + async def create_session( + self, + request: Optional[Union[spanner.CreateSessionRequest, dict]] = None, + *, + database: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner.Session: + r"""Creates a new session. A session can be used to perform + transactions that read and/or modify data in a Cloud Spanner + database. Sessions are meant to be reused for many consecutive + transactions. + + Sessions can only execute one transaction at a time. To execute + multiple concurrent read-write/write-only transactions, create + multiple sessions. Note that standalone reads and queries use a + transaction internally, and count toward the one transaction + limit. + + Active sessions use additional server resources, so it's a good + idea to delete idle and unneeded sessions. Aside from explicit + deletes, Cloud Spanner can delete sessions when no operations + are sent for more than an hour. If a session is deleted, + requests to it return ``NOT_FOUND``. + + Idle sessions can be kept alive by sending a trivial SQL query + periodically, for example, ``"SELECT 1"``. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + async def sample_create_session(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.CreateSessionRequest( + database="database_value", + ) + + # Make the request + response = await client.create_session(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_v1.types.CreateSessionRequest, dict]]): + The request object. 
The request for + [CreateSession][google.spanner.v1.Spanner.CreateSession]. + database (:class:`str`): + Required. The database in which the + new session is created. + + This corresponds to the ``database`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.types.Session: + A session in the Cloud Spanner API. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [database] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.CreateSessionRequest): + request = spanner.CreateSessionRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if database is not None: + request.database = database + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.create_session + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("database", request.database),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def batch_create_sessions( + self, + request: Optional[Union[spanner.BatchCreateSessionsRequest, dict]] = None, + *, + database: Optional[str] = None, + session_count: Optional[int] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner.BatchCreateSessionsResponse: + r"""Creates multiple new sessions. + + This API can be used to initialize a session cache on + the clients. See https://goo.gl/TgSFN2 for best + practices on session cache management. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + async def sample_batch_create_sessions(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.BatchCreateSessionsRequest( + database="database_value", + session_count=1420, + ) + + # Make the request + response = await client.batch_create_sessions(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_v1.types.BatchCreateSessionsRequest, dict]]): + The request object. The request for + [BatchCreateSessions][google.spanner.v1.Spanner.BatchCreateSessions]. + database (:class:`str`): + Required. The database in which the + new sessions are created. + + This corresponds to the ``database`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + session_count (:class:`int`): + Required. The number of sessions to be created in this + batch call. The API can return fewer than the requested + number of sessions. If a specific number of sessions are + desired, the client can make additional calls to + ``BatchCreateSessions`` (adjusting + [session_count][google.spanner.v1.BatchCreateSessionsRequest.session_count] + as necessary). + + This corresponds to the ``session_count`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.types.BatchCreateSessionsResponse: + The response for + [BatchCreateSessions][google.spanner.v1.Spanner.BatchCreateSessions]. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [database, session_count] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.BatchCreateSessionsRequest): + request = spanner.BatchCreateSessionsRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if database is not None: + request.database = database + if session_count is not None: + request.session_count = session_count + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.batch_create_sessions + ] + + # Certain fields should be provided within the metadata header; + # add these here. 
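+        # ``to_grpc_metadata`` renders these key/value pairs into the
+        # ``x-goog-request-params`` header, which the backend uses to route
+        # the call to the correct database resource.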
+ metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("database", request.database),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def get_session( + self, + request: Optional[Union[spanner.GetSessionRequest, dict]] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner.Session: + r"""Gets a session. Returns ``NOT_FOUND`` if the session doesn't + exist. This is mainly useful for determining whether a session + is still alive. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + async def sample_get_session(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.GetSessionRequest( + name="name_value", + ) + + # Make the request + response = await client.get_session(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_v1.types.GetSessionRequest, dict]]): + The request object. The request for + [GetSession][google.spanner.v1.Spanner.GetSession]. + name (:class:`str`): + Required. The name of the session to + retrieve. + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.types.Session: + A session in the Cloud Spanner API. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.GetSessionRequest): + request = spanner.GetSessionRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. 
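+        # Each wrapper in ``_wrapped_methods`` carries the default retry and
+        # timeout policy configured for its RPC; explicit ``retry`` and
+        # ``timeout`` arguments passed to this call take precedence.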
+ rpc = self._client._transport._wrapped_methods[ + self._client._transport.get_session + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def list_sessions( + self, + request: Optional[Union[spanner.ListSessionsRequest, dict]] = None, + *, + database: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListSessionsAsyncPager: + r"""Lists all sessions in a given database. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + async def sample_list_sessions(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.ListSessionsRequest( + database="database_value", + ) + + # Make the request + page_result = client.list_sessions(request=request) + + # Handle the response + async for response in page_result: + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_v1.types.ListSessionsRequest, dict]]): + The request object. The request for + [ListSessions][google.spanner.v1.Spanner.ListSessions]. + database (:class:`str`): + Required. The database in which to + list sessions. + + This corresponds to the ``database`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.services.spanner.pagers.ListSessionsAsyncPager: + The response for + [ListSessions][google.spanner.v1.Spanner.ListSessions]. + + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [database] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. 
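+        # A plain ``dict`` passed as ``request`` is coerced here into the
+        # proto-plus ``ListSessionsRequest`` type; an existing message
+        # instance is used as-is.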
+ if not isinstance(request, spanner.ListSessionsRequest): + request = spanner.ListSessionsRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if database is not None: + request.database = database + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.list_sessions + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("database", request.database),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__aiter__` convenience method. + response = pagers.ListSessionsAsyncPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def delete_session( + self, + request: Optional[Union[spanner.DeleteSessionRequest, dict]] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Ends a session, releasing server resources associated + with it. This asynchronously triggers the cancellation + of any operations that are running with this session. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + async def sample_delete_session(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.DeleteSessionRequest( + name="name_value", + ) + + # Make the request + await client.delete_session(request=request) + + Args: + request (Optional[Union[google.cloud.spanner_v1.types.DeleteSessionRequest, dict]]): + The request object. The request for + [DeleteSession][google.spanner.v1.Spanner.DeleteSession]. + name (:class:`str`): + Required. The name of the session to + delete. + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. 
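+        # (Mixing a full ``request`` with flattened arguments such as
+        # ``name`` would be ambiguous, so that combination is rejected with
+        # a ``ValueError`` below.)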
+ flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.DeleteSessionRequest): + request = spanner.DeleteSessionRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.delete_session + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + async def execute_sql( + self, + request: Optional[Union[spanner.ExecuteSqlRequest, dict]] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> result_set.ResultSet: + r"""Executes an SQL statement, returning all results in a single + reply. This method can't be used to return a result set larger + than 10 MiB; if the query yields more data than that, the query + fails with a ``FAILED_PRECONDITION`` error. + + Operations inside read-write transactions might return + ``ABORTED``. If this occurs, the application should restart the + transaction from the beginning. See + [Transaction][google.spanner.v1.Transaction] for more details. + + Larger result sets can be fetched in streaming fashion by + calling + [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql] + instead. + + The query string can be SQL or `Graph Query Language + (GQL) `__. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + async def sample_execute_sql(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.ExecuteSqlRequest( + session="session_value", + sql="sql_value", + ) + + # Make the request + response = await client.execute_sql(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_v1.types.ExecuteSqlRequest, dict]]): + The request object. The request for + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] and + [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql]. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. 
+ metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.types.ResultSet: + Results from [Read][google.spanner.v1.Spanner.Read] or + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. + + """ + # Create or coerce a protobuf request object. + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.ExecuteSqlRequest): + request = spanner.ExecuteSqlRequest(request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.execute_sql + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def execute_streaming_sql( + self, + request: Optional[Union[spanner.ExecuteSqlRequest, dict]] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> Awaitable[AsyncIterable[result_set.PartialResultSet]]: + r"""Like [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql], except + returns the result set as a stream. Unlike + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql], there is no + limit on the size of the returned result set. However, no + individual row in the result set can exceed 100 MiB, and no + column value can exceed 10 MiB. + + The query string can be SQL or `Graph Query Language + (GQL) `__. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + async def sample_execute_streaming_sql(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.ExecuteSqlRequest( + session="session_value", + sql="sql_value", + ) + + # Make the request + stream = await client.execute_streaming_sql(request=request) + + # Handle the response + async for response in stream: + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_v1.types.ExecuteSqlRequest, dict]]): + The request object. The request for + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] and + [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql]. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. 
Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + AsyncIterable[google.cloud.spanner_v1.types.PartialResultSet]: + Partial results from a streaming read + or SQL query. Streaming reads and SQL + queries better tolerate large result + sets, large rows, and large values, but + are a little trickier to consume. + + """ + # Create or coerce a protobuf request object. + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.ExecuteSqlRequest): + request = spanner.ExecuteSqlRequest(request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.execute_streaming_sql + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def execute_batch_dml( + self, + request: Optional[Union[spanner.ExecuteBatchDmlRequest, dict]] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner.ExecuteBatchDmlResponse: + r"""Executes a batch of SQL DML statements. This method allows many + statements to be run with lower latency than submitting them + sequentially with + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. + + Statements are executed in sequential order. A request can + succeed even if a statement fails. The + [ExecuteBatchDmlResponse.status][google.spanner.v1.ExecuteBatchDmlResponse.status] + field in the response provides information about the statement + that failed. Clients must inspect this field to determine + whether an error occurred. + + Execution stops after the first failed statement; the remaining + statements are not executed. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + async def sample_execute_batch_dml(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + statements = spanner_v1.Statement() + statements.sql = "sql_value" + + request = spanner_v1.ExecuteBatchDmlRequest( + session="session_value", + statements=statements, + seqno=550, + ) + + # Make the request + response = await client.execute_batch_dml(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_v1.types.ExecuteBatchDmlRequest, dict]]): + The request object. The request for + [ExecuteBatchDml][google.spanner.v1.Spanner.ExecuteBatchDml]. 
+ retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.types.ExecuteBatchDmlResponse: + The response for + [ExecuteBatchDml][google.spanner.v1.Spanner.ExecuteBatchDml]. + Contains a list of + [ResultSet][google.spanner.v1.ResultSet] messages, + one for each DML statement that has successfully + executed, in the same order as the statements in the + request. If a statement fails, the status in the + response body identifies the cause of the failure. + + To check for DML statements that failed, use the + following approach: + + 1. Check the status in the response message. The + [google.rpc.Code][google.rpc.Code] enum value OK + indicates that all statements were executed + successfully. 2. If the status was not OK, check the + number of result sets in the response. If the + response contains N + [ResultSet][google.spanner.v1.ResultSet] messages, + then statement N+1 in the request failed. + + Example 1: + + - Request: 5 DML statements, all executed + successfully. + + \* Response: 5 + [ResultSet][google.spanner.v1.ResultSet] messages, + with the status OK. + + Example 2: + + - Request: 5 DML statements. The third statement has + a syntax error. + + \* Response: 2 + [ResultSet][google.spanner.v1.ResultSet] messages, + and a syntax error (INVALID_ARGUMENT) status. The + number of [ResultSet][google.spanner.v1.ResultSet] + messages indicates that the third statement failed, + and the fourth and fifth statements were not + executed. + + """ + # Create or coerce a protobuf request object. + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.ExecuteBatchDmlRequest): + request = spanner.ExecuteBatchDmlRequest(request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.execute_batch_dml + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def read( + self, + request: Optional[Union[spanner.ReadRequest, dict]] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> result_set.ResultSet: + r"""Reads rows from the database using key lookups and scans, as a + simple key/value style alternative to + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. This method + can't be used to return a result set larger than 10 MiB; if the + read matches more data than that, the read fails with a + ``FAILED_PRECONDITION`` error. + + Reads inside read-write transactions might return ``ABORTED``. 
+ If this occurs, the application should restart the transaction + from the beginning. See + [Transaction][google.spanner.v1.Transaction] for more details. + + Larger result sets can be yielded in streaming fashion by + calling [StreamingRead][google.spanner.v1.Spanner.StreamingRead] + instead. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + async def sample_read(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.ReadRequest( + session="session_value", + table="table_value", + columns=['columns_value1', 'columns_value2'], + ) + + # Make the request + response = await client.read(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_v1.types.ReadRequest, dict]]): + The request object. The request for [Read][google.spanner.v1.Spanner.Read] + and + [StreamingRead][google.spanner.v1.Spanner.StreamingRead]. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.types.ResultSet: + Results from [Read][google.spanner.v1.Spanner.Read] or + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. + + """ + # Create or coerce a protobuf request object. + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.ReadRequest): + request = spanner.ReadRequest(request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[self._client._transport.read] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def streaming_read( + self, + request: Optional[Union[spanner.ReadRequest, dict]] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> Awaitable[AsyncIterable[result_set.PartialResultSet]]: + r"""Like [Read][google.spanner.v1.Spanner.Read], except returns the + result set as a stream. Unlike + [Read][google.spanner.v1.Spanner.Read], there is no limit on the + size of the returned result set. However, no individual row in + the result set can exceed 100 MiB, and no column value can + exceed 10 MiB. + + .. 
code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + async def sample_streaming_read(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.ReadRequest( + session="session_value", + table="table_value", + columns=['columns_value1', 'columns_value2'], + ) + + # Make the request + stream = await client.streaming_read(request=request) + + # Handle the response + async for response in stream: + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_v1.types.ReadRequest, dict]]): + The request object. The request for [Read][google.spanner.v1.Spanner.Read] + and + [StreamingRead][google.spanner.v1.Spanner.StreamingRead]. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + AsyncIterable[google.cloud.spanner_v1.types.PartialResultSet]: + Partial results from a streaming read + or SQL query. Streaming reads and SQL + queries better tolerate large result + sets, large rows, and large values, but + are a little trickier to consume. + + """ + # Create or coerce a protobuf request object. + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.ReadRequest): + request = spanner.ReadRequest(request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.streaming_read + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def begin_transaction( + self, + request: Optional[Union[spanner.BeginTransactionRequest, dict]] = None, + *, + session: Optional[str] = None, + options: Optional[transaction.TransactionOptions] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> transaction.Transaction: + r"""Begins a new transaction. This step can often be skipped: + [Read][google.spanner.v1.Spanner.Read], + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] and + [Commit][google.spanner.v1.Spanner.Commit] can begin a new + transaction as a side-effect. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. 
+ # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + async def sample_begin_transaction(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.BeginTransactionRequest( + session="session_value", + ) + + # Make the request + response = await client.begin_transaction(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_v1.types.BeginTransactionRequest, dict]]): + The request object. The request for + [BeginTransaction][google.spanner.v1.Spanner.BeginTransaction]. + session (:class:`str`): + Required. The session in which the + transaction runs. + + This corresponds to the ``session`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + options (:class:`google.cloud.spanner_v1.types.TransactionOptions`): + Required. Options for the new + transaction. + + This corresponds to the ``options`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.types.Transaction: + A transaction. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [session, options] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.BeginTransactionRequest): + request = spanner.BeginTransactionRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if session is not None: + request.session = session + if options is not None: + request.options = options + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.begin_transaction + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. 
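+        # (The returned ``Transaction`` message carries the new
+        # transaction's ``id``, which later reads, queries, and the
+        # eventual commit reference.)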
+ return response + + async def commit( + self, + request: Optional[Union[spanner.CommitRequest, dict]] = None, + *, + session: Optional[str] = None, + transaction_id: Optional[bytes] = None, + mutations: Optional[MutableSequence[mutation.Mutation]] = None, + single_use_transaction: Optional[transaction.TransactionOptions] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> commit_response.CommitResponse: + r"""Commits a transaction. The request includes the mutations to be + applied to rows in the database. + + ``Commit`` might return an ``ABORTED`` error. This can occur at + any time; commonly, the cause is conflicts with concurrent + transactions. However, it can also happen for a variety of other + reasons. If ``Commit`` returns ``ABORTED``, the caller should + retry the transaction from the beginning, reusing the same + session. + + On very rare occasions, ``Commit`` might return ``UNKNOWN``. + This can happen, for example, if the client job experiences a 1+ + hour networking failure. At that point, Cloud Spanner has lost + track of the transaction outcome and we recommend that you + perform another read from the database to see the state of + things as they are now. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + async def sample_commit(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.CommitRequest( + transaction_id=b'transaction_id_blob', + session="session_value", + ) + + # Make the request + response = await client.commit(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_v1.types.CommitRequest, dict]]): + The request object. The request for + [Commit][google.spanner.v1.Spanner.Commit]. + session (:class:`str`): + Required. The session in which the + transaction to be committed is running. + + This corresponds to the ``session`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + transaction_id (:class:`bytes`): + Commit a previously-started + transaction. + + This corresponds to the ``transaction_id`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + mutations (:class:`MutableSequence[google.cloud.spanner_v1.types.Mutation]`): + The mutations to be executed when + this transaction commits. All mutations + are applied atomically, in the order + they appear in this list. + + This corresponds to the ``mutations`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + single_use_transaction (:class:`google.cloud.spanner_v1.types.TransactionOptions`): + Execute mutations in a temporary transaction. Note that + unlike commit of a previously-started transaction, + commit with a temporary transaction is non-idempotent. 
+ That is, if the ``CommitRequest`` is sent to Cloud + Spanner more than once (for instance, due to retries in + the application, or in the transport library), it's + possible that the mutations are executed more than once. + If this is undesirable, use + [BeginTransaction][google.spanner.v1.Spanner.BeginTransaction] + and [Commit][google.spanner.v1.Spanner.Commit] instead. + + This corresponds to the ``single_use_transaction`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.types.CommitResponse: + The response for + [Commit][google.spanner.v1.Spanner.Commit]. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [session, transaction_id, mutations, single_use_transaction] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.CommitRequest): + request = spanner.CommitRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if session is not None: + request.session = session + if transaction_id is not None: + request.transaction_id = transaction_id + if single_use_transaction is not None: + request.single_use_transaction = single_use_transaction + if mutations: + request.mutations.extend(mutations) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[self._client._transport.commit] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def rollback( + self, + request: Optional[Union[spanner.RollbackRequest, dict]] = None, + *, + session: Optional[str] = None, + transaction_id: Optional[bytes] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Rolls back a transaction, releasing any locks it holds. It's a + good idea to call this for any transaction that includes one or + more [Read][google.spanner.v1.Spanner.Read] or + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] requests and + ultimately decides not to commit. 
+ + ``Rollback`` returns ``OK`` if it successfully aborts the + transaction, the transaction was already aborted, or the + transaction isn't found. ``Rollback`` never returns ``ABORTED``. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + async def sample_rollback(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.RollbackRequest( + session="session_value", + transaction_id=b'transaction_id_blob', + ) + + # Make the request + await client.rollback(request=request) + + Args: + request (Optional[Union[google.cloud.spanner_v1.types.RollbackRequest, dict]]): + The request object. The request for + [Rollback][google.spanner.v1.Spanner.Rollback]. + session (:class:`str`): + Required. The session in which the + transaction to roll back is running. + + This corresponds to the ``session`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + transaction_id (:class:`bytes`): + Required. The transaction to roll + back. + + This corresponds to the ``transaction_id`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [session, transaction_id] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.RollbackRequest): + request = spanner.RollbackRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if session is not None: + request.session = session + if transaction_id is not None: + request.transaction_id = transaction_id + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[self._client._transport.rollback] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. 
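+        # Rollback maps to ``google.protobuf.Empty`` on the wire, so there
+        # is no response to return to the caller.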
+ await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + async def partition_query( + self, + request: Optional[Union[spanner.PartitionQueryRequest, dict]] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner.PartitionResponse: + r"""Creates a set of partition tokens that can be used to execute a + query operation in parallel. Each of the returned partition + tokens can be used by + [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql] + to specify a subset of the query result to read. The same + session and read-only transaction must be used by the + ``PartitionQueryRequest`` used to create the partition tokens + and the ``ExecuteSqlRequests`` that use the partition tokens. + + Partition tokens become invalid when the session used to create + them is deleted, is idle for too long, begins a new transaction, + or becomes too old. When any of these happen, it isn't possible + to resume the query, and the whole operation must be restarted + from the beginning. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + async def sample_partition_query(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.PartitionQueryRequest( + session="session_value", + sql="sql_value", + ) + + # Make the request + response = await client.partition_query(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_v1.types.PartitionQueryRequest, dict]]): + The request object. The request for + [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.types.PartitionResponse: + The response for [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] + or + [PartitionRead][google.spanner.v1.Spanner.PartitionRead] + + """ + # Create or coerce a protobuf request object. + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.PartitionQueryRequest): + request = spanner.PartitionQueryRequest(request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.partition_query + ] + + # Certain fields should be provided within the metadata header; + # add these here. 
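+        # (PartitionQuery routes on the ``session`` name rather than the
+        # database, matching the other session-scoped RPCs.)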
+ metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def partition_read( + self, + request: Optional[Union[spanner.PartitionReadRequest, dict]] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner.PartitionResponse: + r"""Creates a set of partition tokens that can be used to execute a + read operation in parallel. Each of the returned partition + tokens can be used by + [StreamingRead][google.spanner.v1.Spanner.StreamingRead] to + specify a subset of the read result to read. The same session + and read-only transaction must be used by the + ``PartitionReadRequest`` used to create the partition tokens and + the ``ReadRequests`` that use the partition tokens. There are no + ordering guarantees on rows returned among the returned + partition tokens, or even within each individual + ``StreamingRead`` call issued with a ``partition_token``. + + Partition tokens become invalid when the session used to create + them is deleted, is idle for too long, begins a new transaction, + or becomes too old. When any of these happen, it isn't possible + to resume the read, and the whole operation must be restarted + from the beginning. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + async def sample_partition_read(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.PartitionReadRequest( + session="session_value", + table="table_value", + ) + + # Make the request + response = await client.partition_read(request=request) + + # Handle the response + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_v1.types.PartitionReadRequest, dict]]): + The request object. The request for + [PartitionRead][google.spanner.v1.Spanner.PartitionRead] + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.types.PartitionResponse: + The response for [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] + or + [PartitionRead][google.spanner.v1.Spanner.PartitionRead] + + """ + # Create or coerce a protobuf request object. + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. 
+ if not isinstance(request, spanner.PartitionReadRequest): + request = spanner.PartitionReadRequest(request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.partition_read + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = await rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def batch_write( + self, + request: Optional[Union[spanner.BatchWriteRequest, dict]] = None, + *, + session: Optional[str] = None, + mutation_groups: Optional[ + MutableSequence[spanner.BatchWriteRequest.MutationGroup] + ] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> Awaitable[AsyncIterable[spanner.BatchWriteResponse]]: + r"""Batches the supplied mutation groups in a collection of + efficient transactions. All mutations in a group are committed + atomically. However, mutations across groups can be committed + non-atomically in an unspecified order and thus, they must be + independent of each other. Partial failure is possible, that is, + some groups might have been committed successfully, while some + might have failed. The results of individual batches are + streamed into the response as the batches are applied. + + ``BatchWrite`` requests are not replay protected, meaning that + each mutation group can be applied more than once. Replays of + non-idempotent mutations can have undesirable effects. For + example, replays of an insert mutation can produce an already + exists error or if you use generated or commit timestamp-based + keys, it can result in additional rows being added to the + mutation's table. We recommend structuring your mutation groups + to be idempotent to avoid this issue. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + async def sample_batch_write(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + mutation_groups = spanner_v1.MutationGroup() + mutation_groups.mutations.insert.table = "table_value" + + request = spanner_v1.BatchWriteRequest( + session="session_value", + mutation_groups=mutation_groups, + ) + + # Make the request + stream = await client.batch_write(request=request) + + # Handle the response + async for response in stream: + print(response) + + Args: + request (Optional[Union[google.cloud.spanner_v1.types.BatchWriteRequest, dict]]): + The request object. The request for + [BatchWrite][google.spanner.v1.Spanner.BatchWrite]. + session (:class:`str`): + Required. The session in which the + batch request is to be run. 
+ + This corresponds to the ``session`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + mutation_groups (:class:`MutableSequence[google.cloud.spanner_v1.types.BatchWriteRequest.MutationGroup]`): + Required. The groups of mutations to + be applied. + + This corresponds to the ``mutation_groups`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry_async.AsyncRetry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + AsyncIterable[google.cloud.spanner_v1.types.BatchWriteResponse]: + The result of applying a batch of + mutations. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [session, mutation_groups] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.BatchWriteRequest): + request = spanner.BatchWriteRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if session is not None: + request.session = session + if mutation_groups: + request.mutation_groups.extend(mutation_groups) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._client._transport._wrapped_methods[ + self._client._transport.batch_write + ] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._client._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + async def __aenter__(self) -> "SpannerAsyncClient": + return self + + async def __aexit__(self, exc_type, exc, tb): + await self.transport.close() + + +DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=package_version.__version__ +) + +if hasattr(DEFAULT_CLIENT_INFO, "protobuf_runtime_version"): # pragma: NO COVER + DEFAULT_CLIENT_INFO.protobuf_runtime_version = google.protobuf.__version__ + + +__all__ = ("SpannerAsyncClient",) diff --git a/google/cloud/spanner_v1/services/spanner/client.py b/google/cloud/spanner_v1/services/spanner/client.py new file mode 100644 index 0000000000..82dbf8375e --- /dev/null +++ b/google/cloud/spanner_v1/services/spanner/client.py @@ -0,0 +1,2627 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +from collections import OrderedDict +from http import HTTPStatus +import json +import logging as std_logging +import os +import re +from typing import ( + Dict, + Callable, + Mapping, + MutableMapping, + MutableSequence, + Optional, + Iterable, + Sequence, + Tuple, + Type, + Union, + cast, +) +import warnings + +from google.cloud.spanner_v1 import gapic_version as package_version + +from google.api_core import client_options as client_options_lib +from google.api_core import exceptions as core_exceptions +from google.api_core import gapic_v1 +from google.api_core import retry as retries +from google.auth import credentials as ga_credentials # type: ignore +from google.auth.transport import mtls # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore +from google.auth.exceptions import MutualTLSChannelError # type: ignore +from google.oauth2 import service_account # type: ignore +import google.protobuf + +try: + OptionalRetry = Union[retries.Retry, gapic_v1.method._MethodDefault, None] +except AttributeError: # pragma: NO COVER + OptionalRetry = Union[retries.Retry, object, None] # type: ignore + +try: + from google.api_core import client_logging # type: ignore + + CLIENT_LOGGING_SUPPORTED = True # pragma: NO COVER +except ImportError: # pragma: NO COVER + CLIENT_LOGGING_SUPPORTED = False + +_LOGGER = std_logging.getLogger(__name__) + +from google.cloud.spanner_v1.services.spanner import pagers +from google.cloud.spanner_v1.types import commit_response +from google.cloud.spanner_v1.types import mutation +from google.cloud.spanner_v1.types import result_set +from google.cloud.spanner_v1.types import spanner +from google.cloud.spanner_v1.types import transaction +from google.cloud.spanner_v1.metrics.metrics_interceptor import MetricsInterceptor +from google.protobuf import struct_pb2 # type: ignore +from google.protobuf import timestamp_pb2 # type: ignore +from google.rpc import status_pb2 # type: ignore +from .transports.base import SpannerTransport, DEFAULT_CLIENT_INFO +from .transports.grpc import SpannerGrpcTransport +from .transports.grpc_asyncio import SpannerGrpcAsyncIOTransport +from .transports.rest import SpannerRestTransport + + +class SpannerClientMeta(type): + """Metaclass for the Spanner client. + + This provides class-level methods for building and retrieving + support objects (e.g. transport) without polluting the client instance + objects. + """ + + _transport_registry = OrderedDict() # type: Dict[str, Type[SpannerTransport]] + _transport_registry["grpc"] = SpannerGrpcTransport + _transport_registry["grpc_asyncio"] = SpannerGrpcAsyncIOTransport + _transport_registry["rest"] = SpannerRestTransport + + def get_transport_class( + cls, + label: Optional[str] = None, + ) -> Type[SpannerTransport]: + """Returns an appropriate transport class. + + Args: + label: The name of the desired transport. If none is + provided, then the first transport in the registry is used. + + Returns: + The transport class to use. + """ + # If a specific transport is requested, return that one. 
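+        # For example (illustrative): ``SpannerClient.get_transport_class("rest")``
+        # returns ``SpannerRestTransport``, while passing no label falls through
+        # to the first registered transport, "grpc".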
+        if label:
+            return cls._transport_registry[label]
+
+        # No transport is requested; return the default (that is, the first one
+        # in the dictionary).
+        return next(iter(cls._transport_registry.values()))
+
+
+class SpannerClient(metaclass=SpannerClientMeta):
+    """Cloud Spanner API
+
+    The Cloud Spanner API can be used to manage sessions and execute
+    transactions on data stored in Cloud Spanner databases.
+    """
+
+    @staticmethod
+    def _get_default_mtls_endpoint(api_endpoint):
+        """Converts api endpoint to mTLS endpoint.
+
+        Convert "*.sandbox.googleapis.com" and "*.googleapis.com" to
+        "*.mtls.sandbox.googleapis.com" and "*.mtls.googleapis.com" respectively.
+        Args:
+            api_endpoint (Optional[str]): the api endpoint to convert.
+        Returns:
+            str: converted mTLS api endpoint.
+        """
+        if not api_endpoint:
+            return api_endpoint
+
+        mtls_endpoint_re = re.compile(
+            r"(?P<name>[^.]+)(?P<mtls>\.mtls)?(?P<sandbox>\.sandbox)?(?P<googledomain>\.googleapis\.com)?"
+        )
+
+        m = mtls_endpoint_re.match(api_endpoint)
+        name, mtls, sandbox, googledomain = m.groups()
+        if mtls or not googledomain:
+            return api_endpoint
+
+        if sandbox:
+            return api_endpoint.replace(
+                "sandbox.googleapis.com", "mtls.sandbox.googleapis.com"
+            )
+
+        return api_endpoint.replace(".googleapis.com", ".mtls.googleapis.com")
+
+    # Note: DEFAULT_ENDPOINT is deprecated. Use _DEFAULT_ENDPOINT_TEMPLATE instead.
+    DEFAULT_ENDPOINT = "spanner.googleapis.com"
+    DEFAULT_MTLS_ENDPOINT = _get_default_mtls_endpoint.__func__(  # type: ignore
+        DEFAULT_ENDPOINT
+    )
+
+    _DEFAULT_ENDPOINT_TEMPLATE = "spanner.{UNIVERSE_DOMAIN}"
+    _DEFAULT_UNIVERSE = "googleapis.com"
+
+    @classmethod
+    def from_service_account_info(cls, info: dict, *args, **kwargs):
+        """Creates an instance of this client using the provided credentials
+            info.
+
+        Args:
+            info (dict): The service account private key info.
+            args: Additional arguments to pass to the constructor.
+            kwargs: Additional arguments to pass to the constructor.
+
+        Returns:
+            SpannerClient: The constructed client.
+        """
+        credentials = service_account.Credentials.from_service_account_info(info)
+        kwargs["credentials"] = credentials
+        return cls(*args, **kwargs)
+
+    @classmethod
+    def from_service_account_file(cls, filename: str, *args, **kwargs):
+        """Creates an instance of this client using the provided credentials
+            file.
+
+        Args:
+            filename (str): The path to the service account private key json
+                file.
+            args: Additional arguments to pass to the constructor.
+            kwargs: Additional arguments to pass to the constructor.
+
+        Returns:
+            SpannerClient: The constructed client.
+        """
+        credentials = service_account.Credentials.from_service_account_file(filename)
+        kwargs["credentials"] = credentials
+        return cls(*args, **kwargs)
+
+    from_service_account_json = from_service_account_file
+
+    @property
+    def transport(self) -> SpannerTransport:
+        """Returns the transport used by the client instance.
+
+        Returns:
+            SpannerTransport: The transport used by the client
+                instance.
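+
+        For example (an illustrative usage note): when the client is not used
+        as a context manager, the transport can be shut down directly with
+        ``client.transport.close()``.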
+        """
+        return self._transport
+
+    @staticmethod
+    def database_path(
+        project: str,
+        instance: str,
+        database: str,
+    ) -> str:
+        """Returns a fully-qualified database string."""
+        return "projects/{project}/instances/{instance}/databases/{database}".format(
+            project=project,
+            instance=instance,
+            database=database,
+        )
+
+    @staticmethod
+    def parse_database_path(path: str) -> Dict[str, str]:
+        """Parses a database path into its component segments."""
+        m = re.match(
+            r"^projects/(?P<project>.+?)/instances/(?P<instance>.+?)/databases/(?P<database>.+?)$",
+            path,
+        )
+        return m.groupdict() if m else {}
+
+    @staticmethod
+    def session_path(
+        project: str,
+        instance: str,
+        database: str,
+        session: str,
+    ) -> str:
+        """Returns a fully-qualified session string."""
+        return "projects/{project}/instances/{instance}/databases/{database}/sessions/{session}".format(
+            project=project,
+            instance=instance,
+            database=database,
+            session=session,
+        )
+
+    @staticmethod
+    def parse_session_path(path: str) -> Dict[str, str]:
+        """Parses a session path into its component segments."""
+        m = re.match(
+            r"^projects/(?P<project>.+?)/instances/(?P<instance>.+?)/databases/(?P<database>.+?)/sessions/(?P<session>.+?)$",
+            path,
+        )
+        return m.groupdict() if m else {}
+
+    @staticmethod
+    def common_billing_account_path(
+        billing_account: str,
+    ) -> str:
+        """Returns a fully-qualified billing_account string."""
+        return "billingAccounts/{billing_account}".format(
+            billing_account=billing_account,
+        )
+
+    @staticmethod
+    def parse_common_billing_account_path(path: str) -> Dict[str, str]:
+        """Parse a billing_account path into its component segments."""
+        m = re.match(r"^billingAccounts/(?P<billing_account>.+?)$", path)
+        return m.groupdict() if m else {}
+
+    @staticmethod
+    def common_folder_path(
+        folder: str,
+    ) -> str:
+        """Returns a fully-qualified folder string."""
+        return "folders/{folder}".format(
+            folder=folder,
+        )
+
+    @staticmethod
+    def parse_common_folder_path(path: str) -> Dict[str, str]:
+        """Parse a folder path into its component segments."""
+        m = re.match(r"^folders/(?P<folder>.+?)$", path)
+        return m.groupdict() if m else {}
+
+    @staticmethod
+    def common_organization_path(
+        organization: str,
+    ) -> str:
+        """Returns a fully-qualified organization string."""
+        return "organizations/{organization}".format(
+            organization=organization,
+        )
+
+    @staticmethod
+    def parse_common_organization_path(path: str) -> Dict[str, str]:
+        """Parse an organization path into its component segments."""
+        m = re.match(r"^organizations/(?P<organization>.+?)$", path)
+        return m.groupdict() if m else {}
+
+    @staticmethod
+    def common_project_path(
+        project: str,
+    ) -> str:
+        """Returns a fully-qualified project string."""
+        return "projects/{project}".format(
+            project=project,
+        )
+
+    @staticmethod
+    def parse_common_project_path(path: str) -> Dict[str, str]:
+        """Parse a project path into its component segments."""
+        m = re.match(r"^projects/(?P<project>.+?)$", path)
+        return m.groupdict() if m else {}
+
+    @staticmethod
+    def common_location_path(
+        project: str,
+        location: str,
+    ) -> str:
+        """Returns a fully-qualified location string."""
+        return "projects/{project}/locations/{location}".format(
+            project=project,
+            location=location,
+        )
+
+    @staticmethod
+    def parse_common_location_path(path: str) -> Dict[str, str]:
+        """Parse a location path into its component segments."""
+        m = re.match(r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)$", path)
+        return m.groupdict() if m else {}
+
+    @classmethod
+    def get_mtls_endpoint_and_cert_source(
+        cls, client_options: Optional[client_options_lib.ClientOptions] = None
+    ):
+        """Deprecated. Return the API endpoint and client cert source for mutual TLS.
+
+        The client cert source is determined in the following order:
+        (1) if the `GOOGLE_API_USE_CLIENT_CERTIFICATE` environment variable is not "true", the
+        client cert source is None.
+        (2) if `client_options.client_cert_source` is provided, use the provided one; if the
+        default client cert source exists, use the default one; otherwise the client cert
+        source is None.
+
+        The API endpoint is determined in the following order:
+        (1) if `client_options.api_endpoint` is provided, use the provided one.
+        (2) if the `GOOGLE_API_USE_MTLS_ENDPOINT` environment variable is "always", use the
+        default mTLS endpoint; if the environment variable is "never", use the default API
+        endpoint; otherwise, if a client cert source exists, use the default mTLS endpoint,
+        and use the default API endpoint if it does not.
+
+        More details can be found at https://google.aip.dev/auth/4114.
+
+        Args:
+            client_options (google.api_core.client_options.ClientOptions): Custom options for the
+                client. Only the `api_endpoint` and `client_cert_source` properties may be used
+                in this method.
+
+        Returns:
+            Tuple[str, Callable[[], Tuple[bytes, bytes]]]: returns the API endpoint and the
+                client cert source to use.
+
+        Raises:
+            google.auth.exceptions.MutualTLSChannelError: If any errors happen.
+        """
+
+        warnings.warn(
+            "get_mtls_endpoint_and_cert_source is deprecated. Use the api_endpoint property instead.",
+            DeprecationWarning,
+        )
+        if client_options is None:
+            client_options = client_options_lib.ClientOptions()
+        use_client_cert = os.getenv("GOOGLE_API_USE_CLIENT_CERTIFICATE", "false")
+        use_mtls_endpoint = os.getenv("GOOGLE_API_USE_MTLS_ENDPOINT", "auto")
+        if use_client_cert not in ("true", "false"):
+            raise ValueError(
+                "Environment variable `GOOGLE_API_USE_CLIENT_CERTIFICATE` must be either `true` or `false`"
+            )
+        if use_mtls_endpoint not in ("auto", "never", "always"):
+            raise MutualTLSChannelError(
+                "Environment variable `GOOGLE_API_USE_MTLS_ENDPOINT` must be `never`, `auto` or `always`"
+            )
+
+        # Figure out the client cert source to use.
+        client_cert_source = None
+        if use_client_cert == "true":
+            if client_options.client_cert_source:
+                client_cert_source = client_options.client_cert_source
+            elif mtls.has_default_client_cert_source():
+                client_cert_source = mtls.default_client_cert_source()
+
+        # Figure out which api endpoint to use.
+        if client_options.api_endpoint is not None:
+            api_endpoint = client_options.api_endpoint
+        elif use_mtls_endpoint == "always" or (
+            use_mtls_endpoint == "auto" and client_cert_source
+        ):
+            api_endpoint = cls.DEFAULT_MTLS_ENDPOINT
+        else:
+            api_endpoint = cls.DEFAULT_ENDPOINT
+
+        return api_endpoint, client_cert_source
+
+    @staticmethod
+    def _read_environment_variables():
+        """Returns the environment variables used by the client.
+
+        Returns:
+            Tuple[bool, str, str]: returns the GOOGLE_API_USE_CLIENT_CERTIFICATE,
+            GOOGLE_API_USE_MTLS_ENDPOINT, and GOOGLE_CLOUD_UNIVERSE_DOMAIN environment variables.
+
+        Raises:
+            ValueError: If GOOGLE_API_USE_CLIENT_CERTIFICATE is not
+                any of ["true", "false"].
+            google.auth.exceptions.MutualTLSChannelError: If GOOGLE_API_USE_MTLS_ENDPOINT
+                is not any of ["auto", "never", "always"].
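+
+        For example (illustrative): with ``GOOGLE_API_USE_CLIENT_CERTIFICATE``
+        set to "true" and the other two variables unset, this returns
+        ``(True, "auto", None)``.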
+ """ + use_client_cert = os.getenv( + "GOOGLE_API_USE_CLIENT_CERTIFICATE", "false" + ).lower() + use_mtls_endpoint = os.getenv("GOOGLE_API_USE_MTLS_ENDPOINT", "auto").lower() + universe_domain_env = os.getenv("GOOGLE_CLOUD_UNIVERSE_DOMAIN") + if use_client_cert not in ("true", "false"): + raise ValueError( + "Environment variable `GOOGLE_API_USE_CLIENT_CERTIFICATE` must be either `true` or `false`" + ) + if use_mtls_endpoint not in ("auto", "never", "always"): + raise MutualTLSChannelError( + "Environment variable `GOOGLE_API_USE_MTLS_ENDPOINT` must be `never`, `auto` or `always`" + ) + return use_client_cert == "true", use_mtls_endpoint, universe_domain_env + + @staticmethod + def _get_client_cert_source(provided_cert_source, use_cert_flag): + """Return the client cert source to be used by the client. + + Args: + provided_cert_source (bytes): The client certificate source provided. + use_cert_flag (bool): A flag indicating whether to use the client certificate. + + Returns: + bytes or None: The client cert source to be used by the client. + """ + client_cert_source = None + if use_cert_flag: + if provided_cert_source: + client_cert_source = provided_cert_source + elif mtls.has_default_client_cert_source(): + client_cert_source = mtls.default_client_cert_source() + return client_cert_source + + @staticmethod + def _get_api_endpoint( + api_override, client_cert_source, universe_domain, use_mtls_endpoint + ): + """Return the API endpoint used by the client. + + Args: + api_override (str): The API endpoint override. If specified, this is always + the return value of this function and the other arguments are not used. + client_cert_source (bytes): The client certificate source used by the client. + universe_domain (str): The universe domain used by the client. + use_mtls_endpoint (str): How to use the mTLS endpoint, which depends also on the other parameters. + Possible values are "always", "auto", or "never". + + Returns: + str: The API endpoint to be used by the client. + """ + if api_override is not None: + api_endpoint = api_override + elif use_mtls_endpoint == "always" or ( + use_mtls_endpoint == "auto" and client_cert_source + ): + _default_universe = SpannerClient._DEFAULT_UNIVERSE + if universe_domain != _default_universe: + raise MutualTLSChannelError( + f"mTLS is not supported in any universe other than {_default_universe}." + ) + api_endpoint = SpannerClient.DEFAULT_MTLS_ENDPOINT + else: + api_endpoint = SpannerClient._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=universe_domain + ) + return api_endpoint + + @staticmethod + def _get_universe_domain( + client_universe_domain: Optional[str], universe_domain_env: Optional[str] + ) -> str: + """Return the universe domain used by the client. + + Args: + client_universe_domain (Optional[str]): The universe domain configured via the client options. + universe_domain_env (Optional[str]): The universe domain configured via the "GOOGLE_CLOUD_UNIVERSE_DOMAIN" environment variable. + + Returns: + str: The universe domain to be used by the client. + + Raises: + ValueError: If the universe domain is an empty string. 
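+
+        For example (illustrative): with both arguments unset, the default
+        "googleapis.com" is returned; an explicit ``client_universe_domain``
+        always takes precedence over the environment variable.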
+ """ + universe_domain = SpannerClient._DEFAULT_UNIVERSE + if client_universe_domain is not None: + universe_domain = client_universe_domain + elif universe_domain_env is not None: + universe_domain = universe_domain_env + if len(universe_domain.strip()) == 0: + raise ValueError("Universe Domain cannot be an empty string.") + return universe_domain + + def _validate_universe_domain(self): + """Validates client's and credentials' universe domains are consistent. + + Returns: + bool: True iff the configured universe domain is valid. + + Raises: + ValueError: If the configured universe domain is not valid. + """ + + # NOTE (b/349488459): universe validation is disabled until further notice. + return True + + def _add_cred_info_for_auth_errors( + self, error: core_exceptions.GoogleAPICallError + ) -> None: + """Adds credential info string to error details for 401/403/404 errors. + + Args: + error (google.api_core.exceptions.GoogleAPICallError): The error to add the cred info. + """ + if error.code not in [ + HTTPStatus.UNAUTHORIZED, + HTTPStatus.FORBIDDEN, + HTTPStatus.NOT_FOUND, + ]: + return + + cred = self._transport._credentials + + # get_cred_info is only available in google-auth>=2.35.0 + if not hasattr(cred, "get_cred_info"): + return + + # ignore the type check since pypy test fails when get_cred_info + # is not available + cred_info = cred.get_cred_info() # type: ignore + if cred_info and hasattr(error._details, "append"): + error._details.append(json.dumps(cred_info)) + + @property + def api_endpoint(self): + """Return the API endpoint used by the client instance. + + Returns: + str: The API endpoint used by the client instance. + """ + return self._api_endpoint + + @property + def universe_domain(self) -> str: + """Return the universe domain used by the client instance. + + Returns: + str: The universe domain used by the client instance. + """ + return self._universe_domain + + def __init__( + self, + *, + credentials: Optional[ga_credentials.Credentials] = None, + transport: Optional[ + Union[str, SpannerTransport, Callable[..., SpannerTransport]] + ] = None, + client_options: Optional[Union[client_options_lib.ClientOptions, dict]] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiates the spanner client. + + Args: + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + transport (Optional[Union[str,SpannerTransport,Callable[..., SpannerTransport]]]): + The transport to use, or a Callable that constructs and returns a new transport. + If a Callable is given, it will be called with the same set of initialization + arguments as used in the SpannerTransport constructor. + If set to None, a transport is chosen automatically. + client_options (Optional[Union[google.api_core.client_options.ClientOptions, dict]]): + Custom options for the client. + + 1. The ``api_endpoint`` property can be used to override the + default endpoint provided by the client when ``transport`` is + not explicitly provided. 
Only if this property is not set and
+                ``transport`` was not explicitly provided, the endpoint is
+                determined by the GOOGLE_API_USE_MTLS_ENDPOINT environment
+                variable, which has one of the following values:
+                "always" (always use the default mTLS endpoint), "never" (always
+                use the default regular endpoint) and "auto" (auto-switch to the
+                default mTLS endpoint if client certificate is present; this is
+                the default value).
+
+                2. If the GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable
+                is "true", then the ``client_cert_source`` property can be used
+                to provide a client certificate for mTLS transport. If
+                not provided, the default SSL client certificate will be used if
+                present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not
+                set, no client certificate will be used.
+
+                3. The ``universe_domain`` property can be used to override the
+                default "googleapis.com" universe. Note that the ``api_endpoint``
+                property still takes precedence; and ``universe_domain`` is
+                currently not supported for mTLS.
+
+            client_info (google.api_core.gapic_v1.client_info.ClientInfo):
+                The client info used to send a user-agent string along with
+                API requests. If ``None``, then default info will be used.
+                Generally, you only need to set this if you're developing
+                your own client library.
+
+        Raises:
+            google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport
+                creation failed for any reason.
+        """
+        self._client_options = client_options
+        if isinstance(self._client_options, dict):
+            self._client_options = client_options_lib.from_dict(self._client_options)
+        if self._client_options is None:
+            self._client_options = client_options_lib.ClientOptions()
+        self._client_options = cast(
+            client_options_lib.ClientOptions, self._client_options
+        )
+
+        universe_domain_opt = getattr(self._client_options, "universe_domain", None)
+
+        (
+            self._use_client_cert,
+            self._use_mtls_endpoint,
+            self._universe_domain_env,
+        ) = SpannerClient._read_environment_variables()
+        self._client_cert_source = SpannerClient._get_client_cert_source(
+            self._client_options.client_cert_source, self._use_client_cert
+        )
+        self._universe_domain = SpannerClient._get_universe_domain(
+            universe_domain_opt, self._universe_domain_env
+        )
+        self._api_endpoint = None  # updated below, depending on `transport`
+
+        # Initialize the universe domain validation.
+        self._is_universe_domain_valid = False
+
+        if CLIENT_LOGGING_SUPPORTED:  # pragma: NO COVER
+            # Setup logging.
+            client_logging.initialize_logging()
+
+        api_key_value = getattr(self._client_options, "api_key", None)
+        if api_key_value and credentials:
+            raise ValueError(
+                "client_options.api_key and credentials are mutually exclusive"
+            )
+
+        # Save or instantiate the transport.
+        # Ordinarily, we provide the transport, but allowing a custom transport
+        # instance provides an extensibility point for unusual situations.
+        transport_provided = isinstance(transport, SpannerTransport)
+        if transport_provided:
+            # transport is a SpannerTransport instance.
+            if credentials or self._client_options.credentials_file or api_key_value:
+                raise ValueError(
+                    "When providing a transport instance, "
+                    "provide its credentials directly."
+                )
+            if self._client_options.scopes:
+                raise ValueError(
+                    "When providing a transport instance, provide its scopes "
+                    "directly."
+ ) + self._transport = cast(SpannerTransport, transport) + self._api_endpoint = self._transport.host + + self._api_endpoint = self._api_endpoint or SpannerClient._get_api_endpoint( + self._client_options.api_endpoint, + self._client_cert_source, + self._universe_domain, + self._use_mtls_endpoint, + ) + + if not transport_provided: + import google.auth._default # type: ignore + + if api_key_value and hasattr( + google.auth._default, "get_api_key_credentials" + ): + credentials = google.auth._default.get_api_key_credentials( + api_key_value + ) + + transport_init: Union[ + Type[SpannerTransport], Callable[..., SpannerTransport] + ] = ( + SpannerClient.get_transport_class(transport) + if isinstance(transport, str) or transport is None + else cast(Callable[..., SpannerTransport], transport) + ) + # initialize with the provided callable or the passed in class + self._transport = transport_init( + credentials=credentials, + credentials_file=self._client_options.credentials_file, + host=self._api_endpoint, + scopes=self._client_options.scopes, + client_cert_source_for_mtls=self._client_cert_source, + quota_project_id=self._client_options.quota_project_id, + client_info=client_info, + always_use_jwt_access=True, + api_audience=self._client_options.api_audience, + metrics_interceptor=MetricsInterceptor(), + ) + + if "async" not in str(self._transport): + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + std_logging.DEBUG + ): # pragma: NO COVER + _LOGGER.debug( + "Created client `google.spanner_v1.SpannerClient`.", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "universeDomain": getattr( + self._transport._credentials, "universe_domain", "" + ), + "credentialsType": f"{type(self._transport._credentials).__module__}.{type(self._transport._credentials).__qualname__}", + "credentialsInfo": getattr( + self.transport._credentials, "get_cred_info", lambda: None + )(), + } + if hasattr(self._transport, "_credentials") + else { + "serviceName": "google.spanner.v1.Spanner", + "credentialsType": None, + }, + ) + + def create_session( + self, + request: Optional[Union[spanner.CreateSessionRequest, dict]] = None, + *, + database: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner.Session: + r"""Creates a new session. A session can be used to perform + transactions that read and/or modify data in a Cloud Spanner + database. Sessions are meant to be reused for many consecutive + transactions. + + Sessions can only execute one transaction at a time. To execute + multiple concurrent read-write/write-only transactions, create + multiple sessions. Note that standalone reads and queries use a + transaction internally, and count toward the one transaction + limit. + + Active sessions use additional server resources, so it's a good + idea to delete idle and unneeded sessions. Aside from explicit + deletes, Cloud Spanner can delete sessions when no operations + are sent for more than an hour. If a session is deleted, + requests to it return ``NOT_FOUND``. + + Idle sessions can be kept alive by sending a trivial SQL query + periodically, for example, ``"SELECT 1"``. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + def sample_create_session(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.CreateSessionRequest( + database="database_value", + ) + + # Make the request + response = client.create_session(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_v1.types.CreateSessionRequest, dict]): + The request object. The request for + [CreateSession][google.spanner.v1.Spanner.CreateSession]. + database (str): + Required. The database in which the + new session is created. + + This corresponds to the ``database`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.types.Session: + A session in the Cloud Spanner API. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [database] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.CreateSessionRequest): + request = spanner.CreateSessionRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if database is not None: + request.database = database + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.create_session] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("database", request.database),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def batch_create_sessions( + self, + request: Optional[Union[spanner.BatchCreateSessionsRequest, dict]] = None, + *, + database: Optional[str] = None, + session_count: Optional[int] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner.BatchCreateSessionsResponse: + r"""Creates multiple new sessions. + + This API can be used to initialize a session cache on + the clients. See https://goo.gl/TgSFN2 for best + practices on session cache management. + + .. 
code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + def sample_batch_create_sessions(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.BatchCreateSessionsRequest( + database="database_value", + session_count=1420, + ) + + # Make the request + response = client.batch_create_sessions(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_v1.types.BatchCreateSessionsRequest, dict]): + The request object. The request for + [BatchCreateSessions][google.spanner.v1.Spanner.BatchCreateSessions]. + database (str): + Required. The database in which the + new sessions are created. + + This corresponds to the ``database`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + session_count (int): + Required. The number of sessions to be created in this + batch call. The API can return fewer than the requested + number of sessions. If a specific number of sessions are + desired, the client can make additional calls to + ``BatchCreateSessions`` (adjusting + [session_count][google.spanner.v1.BatchCreateSessionsRequest.session_count] + as necessary). + + This corresponds to the ``session_count`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.types.BatchCreateSessionsResponse: + The response for + [BatchCreateSessions][google.spanner.v1.Spanner.BatchCreateSessions]. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [database, session_count] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.BatchCreateSessionsRequest): + request = spanner.BatchCreateSessionsRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if database is not None: + request.database = database + if session_count is not None: + request.session_count = session_count + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. 
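+        # (Illustrative note, not generated code: the per-method default retry
+        # and timeout applied by this wrapping are configured in the transport's
+        # ``_prep_wrapped_messages`` in ``transports/base.py``.)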
+ rpc = self._transport._wrapped_methods[self._transport.batch_create_sessions] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("database", request.database),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def get_session( + self, + request: Optional[Union[spanner.GetSessionRequest, dict]] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner.Session: + r"""Gets a session. Returns ``NOT_FOUND`` if the session doesn't + exist. This is mainly useful for determining whether a session + is still alive. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + def sample_get_session(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.GetSessionRequest( + name="name_value", + ) + + # Make the request + response = client.get_session(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_v1.types.GetSessionRequest, dict]): + The request object. The request for + [GetSession][google.spanner.v1.Spanner.GetSession]. + name (str): + Required. The name of the session to + retrieve. + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.types.Session: + A session in the Cloud Spanner API. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [name] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.GetSessionRequest): + request = spanner.GetSessionRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. 
+ if name is not None: + request.name = name + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.get_session] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def list_sessions( + self, + request: Optional[Union[spanner.ListSessionsRequest, dict]] = None, + *, + database: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> pagers.ListSessionsPager: + r"""Lists all sessions in a given database. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + def sample_list_sessions(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.ListSessionsRequest( + database="database_value", + ) + + # Make the request + page_result = client.list_sessions(request=request) + + # Handle the response + for response in page_result: + print(response) + + Args: + request (Union[google.cloud.spanner_v1.types.ListSessionsRequest, dict]): + The request object. The request for + [ListSessions][google.spanner.v1.Spanner.ListSessions]. + database (str): + Required. The database in which to + list sessions. + + This corresponds to the ``database`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.services.spanner.pagers.ListSessionsPager: + The response for + [ListSessions][google.spanner.v1.Spanner.ListSessions]. + + Iterating over this object will yield results and + resolve additional pages automatically. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [database] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. 
+ if not isinstance(request, spanner.ListSessionsRequest): + request = spanner.ListSessionsRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if database is not None: + request.database = database + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.list_sessions] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("database", request.database),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # This method is paged; wrap the response in a pager, which provides + # an `__iter__` convenience method. + response = pagers.ListSessionsPager( + method=rpc, + request=request, + response=response, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def delete_session( + self, + request: Optional[Union[spanner.DeleteSessionRequest, dict]] = None, + *, + name: Optional[str] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Ends a session, releasing server resources associated + with it. This asynchronously triggers the cancellation + of any operations that are running with this session. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + def sample_delete_session(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.DeleteSessionRequest( + name="name_value", + ) + + # Make the request + client.delete_session(request=request) + + Args: + request (Union[google.cloud.spanner_v1.types.DeleteSessionRequest, dict]): + The request object. The request for + [DeleteSession][google.spanner.v1.Spanner.DeleteSession]. + name (str): + Required. The name of the session to + delete. + + This corresponds to the ``name`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. 
+        flattened_params = [name]
+        has_flattened_params = (
+            len([param for param in flattened_params if param is not None]) > 0
+        )
+        if request is not None and has_flattened_params:
+            raise ValueError(
+                "If the `request` argument is set, then none of "
+                "the individual field arguments should be set."
+            )
+
+        # - Use the request object if provided (there's no risk of modifying the input as
+        #   there are no flattened fields), or create one.
+        if not isinstance(request, spanner.DeleteSessionRequest):
+            request = spanner.DeleteSessionRequest(request)
+        # If we have keyword arguments corresponding to fields on the
+        # request, apply these.
+        if name is not None:
+            request.name = name
+
+        # Wrap the RPC method; this adds retry and timeout information,
+        # and friendly error handling.
+        rpc = self._transport._wrapped_methods[self._transport.delete_session]
+
+        # Certain fields should be provided within the metadata header;
+        # add these here.
+        metadata = tuple(metadata) + (
+            gapic_v1.routing_header.to_grpc_metadata((("name", request.name),)),
+        )
+
+        # Validate the universe domain.
+        self._validate_universe_domain()
+
+        # Send the request.
+        rpc(
+            request,
+            retry=retry,
+            timeout=timeout,
+            metadata=metadata,
+        )
+
+    def execute_sql(
+        self,
+        request: Optional[Union[spanner.ExecuteSqlRequest, dict]] = None,
+        *,
+        retry: OptionalRetry = gapic_v1.method.DEFAULT,
+        timeout: Union[float, object] = gapic_v1.method.DEFAULT,
+        metadata: Sequence[Tuple[str, Union[str, bytes]]] = (),
+    ) -> result_set.ResultSet:
+        r"""Executes an SQL statement, returning all results in a single
+        reply. This method can't be used to return a result set larger
+        than 10 MiB; if the query yields more data than that, the query
+        fails with a ``FAILED_PRECONDITION`` error.
+
+        Operations inside read-write transactions might return
+        ``ABORTED``. If this occurs, the application should restart the
+        transaction from the beginning. See
+        [Transaction][google.spanner.v1.Transaction] for more details.
+
+        Larger result sets can be fetched in streaming fashion by
+        calling
+        [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql]
+        instead.
+
+        The query string can be SQL or `Graph Query Language
+        (GQL) <https://cloud.google.com/spanner/docs/reference/standard-sql/graph-intro>`__.
+
+        .. code-block:: python
+
+            # This snippet has been automatically generated and should be regarded as a
+            # code template only.
+            # It will require modifications to work:
+            # - It may require correct/in-range values for request initialization.
+            # - It may require specifying regional endpoints when creating the service
+            #   client as shown in:
+            #   https://googleapis.dev/python/google-api-core/latest/client_options.html
+            from google.cloud import spanner_v1
+
+            def sample_execute_sql():
+                # Create a client
+                client = spanner_v1.SpannerClient()
+
+                # Initialize request argument(s)
+                request = spanner_v1.ExecuteSqlRequest(
+                    session="session_value",
+                    sql="sql_value",
+                )
+
+                # Make the request
+                response = client.execute_sql(request=request)
+
+                # Handle the response
+                print(response)
+
+        Args:
+            request (Union[google.cloud.spanner_v1.types.ExecuteSqlRequest, dict]):
+                The request object. The request for
+                [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] and
+                [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql].
+            retry (google.api_core.retry.Retry): Designation of what errors, if any,
+                should be retried.
+            timeout (float): The timeout for this request.
+            metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be
+                sent along with the request as metadata.
Normally, each value must be of type `str`,
+                but for metadata keys ending with the suffix `-bin`, the corresponding values must
+                be of type `bytes`.
+
+        Returns:
+            google.cloud.spanner_v1.types.ResultSet:
+                Results from [Read][google.spanner.v1.Spanner.Read] or
+                [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql].
+
+        """
+        # Create or coerce a protobuf request object.
+        # - Use the request object if provided (there's no risk of modifying the input as
+        #   there are no flattened fields), or create one.
+        if not isinstance(request, spanner.ExecuteSqlRequest):
+            request = spanner.ExecuteSqlRequest(request)
+
+        # Wrap the RPC method; this adds retry and timeout information,
+        # and friendly error handling.
+        rpc = self._transport._wrapped_methods[self._transport.execute_sql]
+
+        # Certain fields should be provided within the metadata header;
+        # add these here.
+        metadata = tuple(metadata) + (
+            gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)),
+        )
+
+        # Validate the universe domain.
+        self._validate_universe_domain()
+
+        # Send the request.
+        response = rpc(
+            request,
+            retry=retry,
+            timeout=timeout,
+            metadata=metadata,
+        )
+
+        # Done; return the response.
+        return response
+
+    def execute_streaming_sql(
+        self,
+        request: Optional[Union[spanner.ExecuteSqlRequest, dict]] = None,
+        *,
+        retry: OptionalRetry = gapic_v1.method.DEFAULT,
+        timeout: Union[float, object] = gapic_v1.method.DEFAULT,
+        metadata: Sequence[Tuple[str, Union[str, bytes]]] = (),
+    ) -> Iterable[result_set.PartialResultSet]:
+        r"""Like [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql], except
+        returns the result set as a stream. Unlike
+        [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql], there is no
+        limit on the size of the returned result set. However, no
+        individual row in the result set can exceed 100 MiB, and no
+        column value can exceed 10 MiB.
+
+        The query string can be SQL or `Graph Query Language
+        (GQL) <https://cloud.google.com/spanner/docs/reference/standard-sql/graph-intro>`__.
+
+        .. code-block:: python
+
+            # This snippet has been automatically generated and should be regarded as a
+            # code template only.
+            # It will require modifications to work:
+            # - It may require correct/in-range values for request initialization.
+            # - It may require specifying regional endpoints when creating the service
+            #   client as shown in:
+            #   https://googleapis.dev/python/google-api-core/latest/client_options.html
+            from google.cloud import spanner_v1
+
+            def sample_execute_streaming_sql():
+                # Create a client
+                client = spanner_v1.SpannerClient()
+
+                # Initialize request argument(s)
+                request = spanner_v1.ExecuteSqlRequest(
+                    session="session_value",
+                    sql="sql_value",
+                )
+
+                # Make the request
+                stream = client.execute_streaming_sql(request=request)
+
+                # Handle the response
+                for response in stream:
+                    print(response)
+
+        Args:
+            request (Union[google.cloud.spanner_v1.types.ExecuteSqlRequest, dict]):
+                The request object. The request for
+                [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] and
+                [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql].
+            retry (google.api_core.retry.Retry): Designation of what errors, if any,
+                should be retried.
+            timeout (float): The timeout for this request.
+            metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be
+                sent along with the request as metadata. Normally, each value must be of type `str`,
+                but for metadata keys ending with the suffix `-bin`, the corresponding values must
+                be of type `bytes`.
+ + Returns: + Iterable[google.cloud.spanner_v1.types.PartialResultSet]: + Partial results from a streaming read + or SQL query. Streaming reads and SQL + queries better tolerate large result + sets, large rows, and large values, but + are a little trickier to consume. + + """ + # Create or coerce a protobuf request object. + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.ExecuteSqlRequest): + request = spanner.ExecuteSqlRequest(request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.execute_streaming_sql] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def execute_batch_dml( + self, + request: Optional[Union[spanner.ExecuteBatchDmlRequest, dict]] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner.ExecuteBatchDmlResponse: + r"""Executes a batch of SQL DML statements. This method allows many + statements to be run with lower latency than submitting them + sequentially with + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. + + Statements are executed in sequential order. A request can + succeed even if a statement fails. The + [ExecuteBatchDmlResponse.status][google.spanner.v1.ExecuteBatchDmlResponse.status] + field in the response provides information about the statement + that failed. Clients must inspect this field to determine + whether an error occurred. + + Execution stops after the first failed statement; the remaining + statements are not executed. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + def sample_execute_batch_dml(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + statements = spanner_v1.Statement() + statements.sql = "sql_value" + + request = spanner_v1.ExecuteBatchDmlRequest( + session="session_value", + statements=statements, + seqno=550, + ) + + # Make the request + response = client.execute_batch_dml(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_v1.types.ExecuteBatchDmlRequest, dict]): + The request object. The request for + [ExecuteBatchDml][google.spanner.v1.Spanner.ExecuteBatchDml]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. 
Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.types.ExecuteBatchDmlResponse: + The response for + [ExecuteBatchDml][google.spanner.v1.Spanner.ExecuteBatchDml]. + Contains a list of + [ResultSet][google.spanner.v1.ResultSet] messages, + one for each DML statement that has successfully + executed, in the same order as the statements in the + request. If a statement fails, the status in the + response body identifies the cause of the failure. + + To check for DML statements that failed, use the + following approach: + + 1. Check the status in the response message. The + [google.rpc.Code][google.rpc.Code] enum value OK + indicates that all statements were executed + successfully. 2. If the status was not OK, check the + number of result sets in the response. If the + response contains N + [ResultSet][google.spanner.v1.ResultSet] messages, + then statement N+1 in the request failed. + + Example 1: + + - Request: 5 DML statements, all executed + successfully. + + \* Response: 5 + [ResultSet][google.spanner.v1.ResultSet] messages, + with the status OK. + + Example 2: + + - Request: 5 DML statements. The third statement has + a syntax error. + + \* Response: 2 + [ResultSet][google.spanner.v1.ResultSet] messages, + and a syntax error (INVALID_ARGUMENT) status. The + number of [ResultSet][google.spanner.v1.ResultSet] + messages indicates that the third statement failed, + and the fourth and fifth statements were not + executed. + + """ + # Create or coerce a protobuf request object. + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.ExecuteBatchDmlRequest): + request = spanner.ExecuteBatchDmlRequest(request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.execute_batch_dml] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def read( + self, + request: Optional[Union[spanner.ReadRequest, dict]] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> result_set.ResultSet: + r"""Reads rows from the database using key lookups and scans, as a + simple key/value style alternative to + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. This method + can't be used to return a result set larger than 10 MiB; if the + read matches more data than that, the read fails with a + ``FAILED_PRECONDITION`` error. + + Reads inside read-write transactions might return ``ABORTED``. + If this occurs, the application should restart the transaction + from the beginning. See + [Transaction][google.spanner.v1.Transaction] for more details. + + Larger result sets can be yielded in streaming fashion by + calling [StreamingRead][google.spanner.v1.Spanner.StreamingRead] + instead. + + .. 
code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + def sample_read(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.ReadRequest( + session="session_value", + table="table_value", + columns=['columns_value1', 'columns_value2'], + ) + + # Make the request + response = client.read(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_v1.types.ReadRequest, dict]): + The request object. The request for [Read][google.spanner.v1.Spanner.Read] + and + [StreamingRead][google.spanner.v1.Spanner.StreamingRead]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.types.ResultSet: + Results from [Read][google.spanner.v1.Spanner.Read] or + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. + + """ + # Create or coerce a protobuf request object. + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.ReadRequest): + request = spanner.ReadRequest(request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.read] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def streaming_read( + self, + request: Optional[Union[spanner.ReadRequest, dict]] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> Iterable[result_set.PartialResultSet]: + r"""Like [Read][google.spanner.v1.Spanner.Read], except returns the + result set as a stream. Unlike + [Read][google.spanner.v1.Spanner.Read], there is no limit on the + size of the returned result set. However, no individual row in + the result set can exceed 100 MiB, and no column value can + exceed 10 MiB. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + def sample_streaming_read(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.ReadRequest( + session="session_value", + table="table_value", + columns=['columns_value1', 'columns_value2'], + ) + + # Make the request + stream = client.streaming_read(request=request) + + # Handle the response + for response in stream: + print(response) + + Args: + request (Union[google.cloud.spanner_v1.types.ReadRequest, dict]): + The request object. The request for [Read][google.spanner.v1.Spanner.Read] + and + [StreamingRead][google.spanner.v1.Spanner.StreamingRead]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + Iterable[google.cloud.spanner_v1.types.PartialResultSet]: + Partial results from a streaming read + or SQL query. Streaming reads and SQL + queries better tolerate large result + sets, large rows, and large values, but + are a little trickier to consume. + + """ + # Create or coerce a protobuf request object. + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.ReadRequest): + request = spanner.ReadRequest(request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.streaming_read] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def begin_transaction( + self, + request: Optional[Union[spanner.BeginTransactionRequest, dict]] = None, + *, + session: Optional[str] = None, + options: Optional[transaction.TransactionOptions] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> transaction.Transaction: + r"""Begins a new transaction. This step can often be skipped: + [Read][google.spanner.v1.Spanner.Read], + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] and + [Commit][google.spanner.v1.Spanner.Commit] can begin a new + transaction as a side-effect. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. 
+ # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + def sample_begin_transaction(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.BeginTransactionRequest( + session="session_value", + ) + + # Make the request + response = client.begin_transaction(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_v1.types.BeginTransactionRequest, dict]): + The request object. The request for + [BeginTransaction][google.spanner.v1.Spanner.BeginTransaction]. + session (str): + Required. The session in which the + transaction runs. + + This corresponds to the ``session`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + options (google.cloud.spanner_v1.types.TransactionOptions): + Required. Options for the new + transaction. + + This corresponds to the ``options`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.types.Transaction: + A transaction. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [session, options] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.BeginTransactionRequest): + request = spanner.BeginTransactionRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if session is not None: + request.session = session + if options is not None: + request.options = options + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.begin_transaction] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. 
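+ # The returned Transaction carries the server-assigned transaction id,
+ # which later Read/ExecuteSql/Commit requests reference through their
+ # transaction selector.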
+ return response + + def commit( + self, + request: Optional[Union[spanner.CommitRequest, dict]] = None, + *, + session: Optional[str] = None, + transaction_id: Optional[bytes] = None, + mutations: Optional[MutableSequence[mutation.Mutation]] = None, + single_use_transaction: Optional[transaction.TransactionOptions] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> commit_response.CommitResponse: + r"""Commits a transaction. The request includes the mutations to be + applied to rows in the database. + + ``Commit`` might return an ``ABORTED`` error. This can occur at + any time; commonly, the cause is conflicts with concurrent + transactions. However, it can also happen for a variety of other + reasons. If ``Commit`` returns ``ABORTED``, the caller should + retry the transaction from the beginning, reusing the same + session. + + On very rare occasions, ``Commit`` might return ``UNKNOWN``. + This can happen, for example, if the client job experiences a 1+ + hour networking failure. At that point, Cloud Spanner has lost + track of the transaction outcome and we recommend that you + perform another read from the database to see the state of + things as they are now. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + def sample_commit(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.CommitRequest( + transaction_id=b'transaction_id_blob', + session="session_value", + ) + + # Make the request + response = client.commit(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_v1.types.CommitRequest, dict]): + The request object. The request for + [Commit][google.spanner.v1.Spanner.Commit]. + session (str): + Required. The session in which the + transaction to be committed is running. + + This corresponds to the ``session`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + transaction_id (bytes): + Commit a previously-started + transaction. + + This corresponds to the ``transaction_id`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + mutations (MutableSequence[google.cloud.spanner_v1.types.Mutation]): + The mutations to be executed when + this transaction commits. All mutations + are applied atomically, in the order + they appear in this list. + + This corresponds to the ``mutations`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + single_use_transaction (google.cloud.spanner_v1.types.TransactionOptions): + Execute mutations in a temporary transaction. Note that + unlike commit of a previously-started transaction, + commit with a temporary transaction is non-idempotent. + That is, if the ``CommitRequest`` is sent to Cloud + Spanner more than once (for instance, due to retries in + the application, or in the transport library), it's + possible that the mutations are executed more than once. 
+ If this is undesirable, use + [BeginTransaction][google.spanner.v1.Spanner.BeginTransaction] + and [Commit][google.spanner.v1.Spanner.Commit] instead. + + This corresponds to the ``single_use_transaction`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.types.CommitResponse: + The response for + [Commit][google.spanner.v1.Spanner.Commit]. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [session, transaction_id, mutations, single_use_transaction] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.CommitRequest): + request = spanner.CommitRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if session is not None: + request.session = session + if transaction_id is not None: + request.transaction_id = transaction_id + if mutations is not None: + request.mutations = mutations + if single_use_transaction is not None: + request.single_use_transaction = single_use_transaction + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.commit] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def rollback( + self, + request: Optional[Union[spanner.RollbackRequest, dict]] = None, + *, + session: Optional[str] = None, + transaction_id: Optional[bytes] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> None: + r"""Rolls back a transaction, releasing any locks it holds. It's a + good idea to call this for any transaction that includes one or + more [Read][google.spanner.v1.Spanner.Read] or + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] requests and + ultimately decides not to commit. + + ``Rollback`` returns ``OK`` if it successfully aborts the + transaction, the transaction was already aborted, or the + transaction isn't found. ``Rollback`` never returns ``ABORTED``. + + .. 
code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + def sample_rollback(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.RollbackRequest( + session="session_value", + transaction_id=b'transaction_id_blob', + ) + + # Make the request + client.rollback(request=request) + + Args: + request (Union[google.cloud.spanner_v1.types.RollbackRequest, dict]): + The request object. The request for + [Rollback][google.spanner.v1.Spanner.Rollback]. + session (str): + Required. The session in which the + transaction to roll back is running. + + This corresponds to the ``session`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + transaction_id (bytes): + Required. The transaction to roll + back. + + This corresponds to the ``transaction_id`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [session, transaction_id] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.RollbackRequest): + request = spanner.RollbackRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if session is not None: + request.session = session + if transaction_id is not None: + request.transaction_id = transaction_id + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.rollback] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. 
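+ # Rollback maps to google.protobuf.Empty, so the call's result is
+ # deliberately not captured and this method returns None.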
+ rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + def partition_query( + self, + request: Optional[Union[spanner.PartitionQueryRequest, dict]] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner.PartitionResponse: + r"""Creates a set of partition tokens that can be used to execute a + query operation in parallel. Each of the returned partition + tokens can be used by + [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql] + to specify a subset of the query result to read. The same + session and read-only transaction must be used by the + ``PartitionQueryRequest`` used to create the partition tokens + and the ``ExecuteSqlRequests`` that use the partition tokens. + + Partition tokens become invalid when the session used to create + them is deleted, is idle for too long, begins a new transaction, + or becomes too old. When any of these happen, it isn't possible + to resume the query, and the whole operation must be restarted + from the beginning. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + def sample_partition_query(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.PartitionQueryRequest( + session="session_value", + sql="sql_value", + ) + + # Make the request + response = client.partition_query(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_v1.types.PartitionQueryRequest, dict]): + The request object. The request for + [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.types.PartitionResponse: + The response for [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] + or + [PartitionRead][google.spanner.v1.Spanner.PartitionRead] + + """ + # Create or coerce a protobuf request object. + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.PartitionQueryRequest): + request = spanner.PartitionQueryRequest(request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.partition_query] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. 
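+ # The PartitionResponse carries opaque partition tokens; each token is
+ # later supplied as ExecuteSqlRequest.partition_token to
+ # ExecuteStreamingSql so the partitions can be read in parallel.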
+ response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def partition_read( + self, + request: Optional[Union[spanner.PartitionReadRequest, dict]] = None, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner.PartitionResponse: + r"""Creates a set of partition tokens that can be used to execute a + read operation in parallel. Each of the returned partition + tokens can be used by + [StreamingRead][google.spanner.v1.Spanner.StreamingRead] to + specify a subset of the read result to read. The same session + and read-only transaction must be used by the + ``PartitionReadRequest`` used to create the partition tokens and + the ``ReadRequests`` that use the partition tokens. There are no + ordering guarantees on rows returned among the returned + partition tokens, or even within each individual + ``StreamingRead`` call issued with a ``partition_token``. + + Partition tokens become invalid when the session used to create + them is deleted, is idle for too long, begins a new transaction, + or becomes too old. When any of these happen, it isn't possible + to resume the read, and the whole operation must be restarted + from the beginning. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + def sample_partition_read(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.PartitionReadRequest( + session="session_value", + table="table_value", + ) + + # Make the request + response = client.partition_read(request=request) + + # Handle the response + print(response) + + Args: + request (Union[google.cloud.spanner_v1.types.PartitionReadRequest, dict]): + The request object. The request for + [PartitionRead][google.spanner.v1.Spanner.PartitionRead] + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + google.cloud.spanner_v1.types.PartitionResponse: + The response for [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] + or + [PartitionRead][google.spanner.v1.Spanner.PartitionRead] + + """ + # Create or coerce a protobuf request object. + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.PartitionReadRequest): + request = spanner.PartitionReadRequest(request) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.partition_read] + + # Certain fields should be provided within the metadata header; + # add these here. 
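+ # to_grpc_metadata renders this as the x-goog-request-params header,
+ # which lets Cloud Spanner route the call to the server that owns the
+ # session.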
+ metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def batch_write( + self, + request: Optional[Union[spanner.BatchWriteRequest, dict]] = None, + *, + session: Optional[str] = None, + mutation_groups: Optional[ + MutableSequence[spanner.BatchWriteRequest.MutationGroup] + ] = None, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> Iterable[spanner.BatchWriteResponse]: + r"""Batches the supplied mutation groups in a collection of + efficient transactions. All mutations in a group are committed + atomically. However, mutations across groups can be committed + non-atomically in an unspecified order and thus, they must be + independent of each other. Partial failure is possible, that is, + some groups might have been committed successfully, while some + might have failed. The results of individual batches are + streamed into the response as the batches are applied. + + ``BatchWrite`` requests are not replay protected, meaning that + each mutation group can be applied more than once. Replays of + non-idempotent mutations can have undesirable effects. For + example, replays of an insert mutation can produce an already + exists error or if you use generated or commit timestamp-based + keys, it can result in additional rows being added to the + mutation's table. We recommend structuring your mutation groups + to be idempotent to avoid this issue. + + .. code-block:: python + + # This snippet has been automatically generated and should be regarded as a + # code template only. + # It will require modifications to work: + # - It may require correct/in-range values for request initialization. + # - It may require specifying regional endpoints when creating the service + # client as shown in: + # https://googleapis.dev/python/google-api-core/latest/client_options.html + from google.cloud import spanner_v1 + + def sample_batch_write(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + mutation_groups = spanner_v1.MutationGroup() + mutation_groups.mutations.insert.table = "table_value" + + request = spanner_v1.BatchWriteRequest( + session="session_value", + mutation_groups=mutation_groups, + ) + + # Make the request + stream = client.batch_write(request=request) + + # Handle the response + for response in stream: + print(response) + + Args: + request (Union[google.cloud.spanner_v1.types.BatchWriteRequest, dict]): + The request object. The request for + [BatchWrite][google.spanner.v1.Spanner.BatchWrite]. + session (str): + Required. The session in which the + batch request is to be run. + + This corresponds to the ``session`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + mutation_groups (MutableSequence[google.cloud.spanner_v1.types.BatchWriteRequest.MutationGroup]): + Required. The groups of mutations to + be applied. + + This corresponds to the ``mutation_groups`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. 
+ metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + Iterable[google.cloud.spanner_v1.types.BatchWriteResponse]: + The result of applying a batch of + mutations. + + """ + # Create or coerce a protobuf request object. + # - Quick check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + flattened_params = [session, mutation_groups] + has_flattened_params = ( + len([param for param in flattened_params if param is not None]) > 0 + ) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # - Use the request object if provided (there's no risk of modifying the input as + # there are no flattened fields), or create one. + if not isinstance(request, spanner.BatchWriteRequest): + request = spanner.BatchWriteRequest(request) + # If we have keyword arguments corresponding to fields on the + # request, apply these. + if session is not None: + request.session = session + if mutation_groups is not None: + request.mutation_groups = mutation_groups + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.batch_write] + + # Certain fields should be provided within the metadata header; + # add these here. + metadata = tuple(metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("session", request.session),)), + ) + + # Validate the universe domain. + self._validate_universe_domain() + + # Send the request. + response = rpc( + request, + retry=retry, + timeout=timeout, + metadata=metadata, + ) + + # Done; return the response. + return response + + def __enter__(self) -> "SpannerClient": + return self + + def __exit__(self, type, value, traceback): + """Releases underlying transport's resources. + + .. warning:: + ONLY use as a context manager if the transport is NOT shared + with other clients! Exiting the with block will CLOSE the transport + and may cause errors in other clients! + """ + self.transport.close() + + +DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=package_version.__version__ +) + +if hasattr(DEFAULT_CLIENT_INFO, "protobuf_runtime_version"): # pragma: NO COVER + DEFAULT_CLIENT_INFO.protobuf_runtime_version = google.protobuf.__version__ + +__all__ = ("SpannerClient",) diff --git a/google/cloud/spanner_v1/services/spanner/pagers.py b/google/cloud/spanner_v1/services/spanner/pagers.py new file mode 100644 index 0000000000..90927b54ee --- /dev/null +++ b/google/cloud/spanner_v1/services/spanner/pagers.py @@ -0,0 +1,196 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
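+#
+# A minimal usage sketch for the pager defined below (an illustration only;
+# it assumes a configured `SpannerClient` named `client`, and the database
+# resource name is a placeholder):
+#
+#     request = spanner.ListSessionsRequest(
+#         database="projects/my-project/instances/my-instance/databases/my-db",
+#     )
+#     for session in client.list_sessions(request=request):
+#         print(session.name)
+#
+# Iterating the returned ListSessionsPager transparently issues follow-up
+# ListSessions requests while `next_page_token` is set on the latest response.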
+# +from google.api_core import gapic_v1 +from google.api_core import retry as retries +from google.api_core import retry_async as retries_async +from typing import ( + Any, + AsyncIterator, + Awaitable, + Callable, + Sequence, + Tuple, + Optional, + Iterator, + Union, +) + +try: + OptionalRetry = Union[retries.Retry, gapic_v1.method._MethodDefault, None] + OptionalAsyncRetry = Union[ + retries_async.AsyncRetry, gapic_v1.method._MethodDefault, None + ] +except AttributeError: # pragma: NO COVER + OptionalRetry = Union[retries.Retry, object, None] # type: ignore + OptionalAsyncRetry = Union[retries_async.AsyncRetry, object, None] # type: ignore + +from google.cloud.spanner_v1.types import spanner + + +class ListSessionsPager: + """A pager for iterating through ``list_sessions`` requests. + + This class thinly wraps an initial + :class:`google.cloud.spanner_v1.types.ListSessionsResponse` object, and + provides an ``__iter__`` method to iterate through its + ``sessions`` field. + + If there are more pages, the ``__iter__`` method will make additional + ``ListSessions`` requests and continue to iterate + through the ``sessions`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_v1.types.ListSessionsResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[..., spanner.ListSessionsResponse], + request: spanner.ListSessionsRequest, + response: spanner.ListSessionsResponse, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiate the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_v1.types.ListSessionsRequest): + The initial request object. + response (google.cloud.spanner_v1.types.ListSessionsResponse): + The initial response object. + retry (google.api_core.retry.Retry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + self._method = method + self._request = spanner.ListSessionsRequest(request) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + def pages(self) -> Iterator[spanner.ListSessionsResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __iter__(self) -> Iterator[spanner.Session]: + for page in self.pages: + yield from page.sessions + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) + + +class ListSessionsAsyncPager: + """A pager for iterating through ``list_sessions`` requests. 
+ + This class thinly wraps an initial + :class:`google.cloud.spanner_v1.types.ListSessionsResponse` object, and + provides an ``__aiter__`` method to iterate through its + ``sessions`` field. + + If there are more pages, the ``__aiter__`` method will make additional + ``ListSessions`` requests and continue to iterate + through the ``sessions`` field on the + corresponding responses. + + All the usual :class:`google.cloud.spanner_v1.types.ListSessionsResponse` + attributes are available on the pager. If multiple requests are made, only + the most recent response is retained, and thus used for attribute lookup. + """ + + def __init__( + self, + method: Callable[..., Awaitable[spanner.ListSessionsResponse]], + request: spanner.ListSessionsRequest, + response: spanner.ListSessionsResponse, + *, + retry: OptionalAsyncRetry = gapic_v1.method.DEFAULT, + timeout: Union[float, object] = gapic_v1.method.DEFAULT, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = () + ): + """Instantiates the pager. + + Args: + method (Callable): The method that was originally called, and + which instantiated this pager. + request (google.cloud.spanner_v1.types.ListSessionsRequest): + The initial request object. + response (google.cloud.spanner_v1.types.ListSessionsResponse): + The initial response object. + retry (google.api_core.retry.AsyncRetry): Designation of what errors, + if any, should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + self._method = method + self._request = spanner.ListSessionsRequest(request) + self._response = response + self._retry = retry + self._timeout = timeout + self._metadata = metadata + + def __getattr__(self, name: str) -> Any: + return getattr(self._response, name) + + @property + async def pages(self) -> AsyncIterator[spanner.ListSessionsResponse]: + yield self._response + while self._response.next_page_token: + self._request.page_token = self._response.next_page_token + self._response = await self._method( + self._request, + retry=self._retry, + timeout=self._timeout, + metadata=self._metadata, + ) + yield self._response + + def __aiter__(self) -> AsyncIterator[spanner.Session]: + async def async_generator(): + async for page in self.pages: + for response in page.sessions: + yield response + + return async_generator() + + def __repr__(self) -> str: + return "{0}<{1!r}>".format(self.__class__.__name__, self._response) diff --git a/google/cloud/spanner_v1/services/spanner/transports/README.rst b/google/cloud/spanner_v1/services/spanner/transports/README.rst new file mode 100644 index 0000000000..99997401d5 --- /dev/null +++ b/google/cloud/spanner_v1/services/spanner/transports/README.rst @@ -0,0 +1,9 @@ + +transport inheritance structure +_______________________________ + +`SpannerTransport` is the ABC for all transports. +- public child `SpannerGrpcTransport` for sync gRPC transport (defined in `grpc.py`). +- public child `SpannerGrpcAsyncIOTransport` for async gRPC transport (defined in `grpc_asyncio.py`). +- private child `_BaseSpannerRestTransport` for base REST transport with inner classes `_BaseMETHOD` (defined in `rest_base.py`). 
+- public child `SpannerRestTransport` for sync REST transport with inner classes `METHOD` derived from the parent's corresponding `_BaseMETHOD` classes (defined in `rest.py`). diff --git a/google/cloud/spanner_v1/services/spanner/transports/__init__.py b/google/cloud/spanner_v1/services/spanner/transports/__init__.py new file mode 100644 index 0000000000..4442420c7f --- /dev/null +++ b/google/cloud/spanner_v1/services/spanner/transports/__init__.py @@ -0,0 +1,38 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +from collections import OrderedDict +from typing import Dict, Type + +from .base import SpannerTransport +from .grpc import SpannerGrpcTransport +from .grpc_asyncio import SpannerGrpcAsyncIOTransport +from .rest import SpannerRestTransport +from .rest import SpannerRestInterceptor + + +# Compile a registry of transports. +_transport_registry = OrderedDict() # type: Dict[str, Type[SpannerTransport]] +_transport_registry["grpc"] = SpannerGrpcTransport +_transport_registry["grpc_asyncio"] = SpannerGrpcAsyncIOTransport +_transport_registry["rest"] = SpannerRestTransport + +__all__ = ( + "SpannerTransport", + "SpannerGrpcTransport", + "SpannerGrpcAsyncIOTransport", + "SpannerRestTransport", + "SpannerRestInterceptor", +) diff --git a/google/cloud/spanner_v1/services/spanner/transports/base.py b/google/cloud/spanner_v1/services/spanner/transports/base.py new file mode 100644 index 0000000000..d1dfe07291 --- /dev/null +++ b/google/cloud/spanner_v1/services/spanner/transports/base.py @@ -0,0 +1,520 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
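+#
+# A note on the retry defaults configured in `_prep_wrapped_methods` below:
+# a policy such as retries.Retry(initial=0.25, maximum=32.0, multiplier=1.3)
+# sleeps for delays that grow geometrically from 0.25s by a factor of 1.3
+# (with jitter), capped at 32s, until the deadline expires; per its
+# predicate, only ResourceExhausted and ServiceUnavailable errors are
+# retried.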
+# +import abc +from typing import Awaitable, Callable, Dict, Optional, Sequence, Union + +from google.cloud.spanner_v1 import gapic_version as package_version + +import google.auth # type: ignore +import google.api_core +from google.api_core import exceptions as core_exceptions +from google.api_core import gapic_v1 +from google.api_core import retry as retries +from google.auth import credentials as ga_credentials # type: ignore +from google.oauth2 import service_account # type: ignore +import google.protobuf + +from google.cloud.spanner_v1.types import commit_response +from google.cloud.spanner_v1.types import result_set +from google.cloud.spanner_v1.types import spanner +from google.cloud.spanner_v1.types import transaction +from google.cloud.spanner_v1.metrics.metrics_interceptor import MetricsInterceptor +from google.protobuf import empty_pb2 # type: ignore + +DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=package_version.__version__ +) + +if hasattr(DEFAULT_CLIENT_INFO, "protobuf_runtime_version"): # pragma: NO COVER + DEFAULT_CLIENT_INFO.protobuf_runtime_version = google.protobuf.__version__ + + +class SpannerTransport(abc.ABC): + """Abstract transport class for Spanner.""" + + AUTH_SCOPES = ( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.data", + ) + + DEFAULT_HOST: str = "spanner.googleapis.com" + + def __init__( + self, + *, + host: str = DEFAULT_HOST, + credentials: Optional[ga_credentials.Credentials] = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + quota_project_id: Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + always_use_jwt_access: Optional[bool] = False, + api_audience: Optional[str] = None, + metrics_interceptor: Optional[MetricsInterceptor] = None, + **kwargs, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): + The hostname to connect to (default: 'spanner.googleapis.com'). + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is mutually exclusive with credentials. + scopes (Optional[Sequence[str]]): A list of scopes. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + always_use_jwt_access (Optional[bool]): Whether self signed JWT should + be used for service account credentials. + """ + + scopes_kwargs = {"scopes": scopes, "default_scopes": self.AUTH_SCOPES} + + # Save the scopes. + self._scopes = scopes + if not hasattr(self, "_ignore_credentials"): + self._ignore_credentials: bool = False + + # If no credentials are provided, then determine the appropriate + # defaults. 
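+ # Resolution order: passing both `credentials` and `credentials_file`
+ # is an error; a `credentials_file` is loaded when given; an explicit
+ # `credentials` object is used as-is; otherwise Application Default
+ # Credentials are discovered via google.auth.default().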
+ if credentials and credentials_file: + raise core_exceptions.DuplicateCredentialArgs( + "'credentials_file' and 'credentials' are mutually exclusive" + ) + + if credentials_file is not None: + credentials, _ = google.auth.load_credentials_from_file( + credentials_file, **scopes_kwargs, quota_project_id=quota_project_id + ) + elif credentials is None and not self._ignore_credentials: + credentials, _ = google.auth.default( + **scopes_kwargs, quota_project_id=quota_project_id + ) + # Don't apply audience if the credentials file passed from user. + if hasattr(credentials, "with_gdch_audience"): + credentials = credentials.with_gdch_audience( + api_audience if api_audience else host + ) + + # If the credentials are service account credentials, then always try to use self signed JWT. + if ( + always_use_jwt_access + and isinstance(credentials, service_account.Credentials) + and hasattr(service_account.Credentials, "with_always_use_jwt_access") + ): + credentials = credentials.with_always_use_jwt_access(True) + + # Save the credentials. + self._credentials = credentials + + # Save the hostname. Default to port 443 (HTTPS) if none is specified. + if ":" not in host: + host += ":443" + self._host = host + + @property + def host(self): + return self._host + + def _prep_wrapped_messages(self, client_info): + # Precompute the wrapped methods. + self._wrapped_methods = { + self.create_session: gapic_v1.method.wrap_method( + self.create_session, + default_retry=retries.Retry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.batch_create_sessions: gapic_v1.method.wrap_method( + self.batch_create_sessions, + default_retry=retries.Retry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=60.0, + ), + default_timeout=60.0, + client_info=client_info, + ), + self.get_session: gapic_v1.method.wrap_method( + self.get_session, + default_retry=retries.Retry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.list_sessions: gapic_v1.method.wrap_method( + self.list_sessions, + default_retry=retries.Retry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.delete_session: gapic_v1.method.wrap_method( + self.delete_session, + default_retry=retries.Retry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.execute_sql: gapic_v1.method.wrap_method( + self.execute_sql, + default_retry=retries.Retry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.execute_streaming_sql: 
gapic_v1.method.wrap_method( + self.execute_streaming_sql, + default_timeout=3600.0, + client_info=client_info, + ), + self.execute_batch_dml: gapic_v1.method.wrap_method( + self.execute_batch_dml, + default_retry=retries.Retry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.read: gapic_v1.method.wrap_method( + self.read, + default_retry=retries.Retry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.streaming_read: gapic_v1.method.wrap_method( + self.streaming_read, + default_timeout=3600.0, + client_info=client_info, + ), + self.begin_transaction: gapic_v1.method.wrap_method( + self.begin_transaction, + default_retry=retries.Retry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.commit: gapic_v1.method.wrap_method( + self.commit, + default_retry=retries.Retry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.rollback: gapic_v1.method.wrap_method( + self.rollback, + default_retry=retries.Retry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.partition_query: gapic_v1.method.wrap_method( + self.partition_query, + default_retry=retries.Retry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.partition_read: gapic_v1.method.wrap_method( + self.partition_read, + default_retry=retries.Retry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.batch_write: gapic_v1.method.wrap_method( + self.batch_write, + default_timeout=3600.0, + client_info=client_info, + ), + } + + def close(self): + """Closes resources associated with the transport. + + .. warning:: + Only call this method if the transport is NOT shared + with other clients - this may cause errors in other clients! 
+ """ + raise NotImplementedError() + + @property + def create_session( + self, + ) -> Callable[ + [spanner.CreateSessionRequest], + Union[spanner.Session, Awaitable[spanner.Session]], + ]: + raise NotImplementedError() + + @property + def batch_create_sessions( + self, + ) -> Callable[ + [spanner.BatchCreateSessionsRequest], + Union[ + spanner.BatchCreateSessionsResponse, + Awaitable[spanner.BatchCreateSessionsResponse], + ], + ]: + raise NotImplementedError() + + @property + def get_session( + self, + ) -> Callable[ + [spanner.GetSessionRequest], Union[spanner.Session, Awaitable[spanner.Session]] + ]: + raise NotImplementedError() + + @property + def list_sessions( + self, + ) -> Callable[ + [spanner.ListSessionsRequest], + Union[spanner.ListSessionsResponse, Awaitable[spanner.ListSessionsResponse]], + ]: + raise NotImplementedError() + + @property + def delete_session( + self, + ) -> Callable[ + [spanner.DeleteSessionRequest], + Union[empty_pb2.Empty, Awaitable[empty_pb2.Empty]], + ]: + raise NotImplementedError() + + @property + def execute_sql( + self, + ) -> Callable[ + [spanner.ExecuteSqlRequest], + Union[result_set.ResultSet, Awaitable[result_set.ResultSet]], + ]: + raise NotImplementedError() + + @property + def execute_streaming_sql( + self, + ) -> Callable[ + [spanner.ExecuteSqlRequest], + Union[result_set.PartialResultSet, Awaitable[result_set.PartialResultSet]], + ]: + raise NotImplementedError() + + @property + def execute_batch_dml( + self, + ) -> Callable[ + [spanner.ExecuteBatchDmlRequest], + Union[ + spanner.ExecuteBatchDmlResponse, Awaitable[spanner.ExecuteBatchDmlResponse] + ], + ]: + raise NotImplementedError() + + @property + def read( + self, + ) -> Callable[ + [spanner.ReadRequest], + Union[result_set.ResultSet, Awaitable[result_set.ResultSet]], + ]: + raise NotImplementedError() + + @property + def streaming_read( + self, + ) -> Callable[ + [spanner.ReadRequest], + Union[result_set.PartialResultSet, Awaitable[result_set.PartialResultSet]], + ]: + raise NotImplementedError() + + @property + def begin_transaction( + self, + ) -> Callable[ + [spanner.BeginTransactionRequest], + Union[transaction.Transaction, Awaitable[transaction.Transaction]], + ]: + raise NotImplementedError() + + @property + def commit( + self, + ) -> Callable[ + [spanner.CommitRequest], + Union[ + commit_response.CommitResponse, Awaitable[commit_response.CommitResponse] + ], + ]: + raise NotImplementedError() + + @property + def rollback( + self, + ) -> Callable[ + [spanner.RollbackRequest], Union[empty_pb2.Empty, Awaitable[empty_pb2.Empty]] + ]: + raise NotImplementedError() + + @property + def partition_query( + self, + ) -> Callable[ + [spanner.PartitionQueryRequest], + Union[spanner.PartitionResponse, Awaitable[spanner.PartitionResponse]], + ]: + raise NotImplementedError() + + @property + def partition_read( + self, + ) -> Callable[ + [spanner.PartitionReadRequest], + Union[spanner.PartitionResponse, Awaitable[spanner.PartitionResponse]], + ]: + raise NotImplementedError() + + @property + def batch_write( + self, + ) -> Callable[ + [spanner.BatchWriteRequest], + Union[spanner.BatchWriteResponse, Awaitable[spanner.BatchWriteResponse]], + ]: + raise NotImplementedError() + + @property + def kind(self) -> str: + raise NotImplementedError() + + +__all__ = ("SpannerTransport",) diff --git a/google/cloud/spanner_v1/services/spanner/transports/grpc.py b/google/cloud/spanner_v1/services/spanner/transports/grpc.py new file mode 100644 index 0000000000..8b377d7725 --- /dev/null +++ 
b/google/cloud/spanner_v1/services/spanner/transports/grpc.py @@ -0,0 +1,914 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +import json +import logging as std_logging +import pickle +import warnings +from typing import Callable, Dict, Optional, Sequence, Tuple, Union + +from google.api_core import grpc_helpers +from google.api_core import gapic_v1 +import google.auth # type: ignore +from google.auth import credentials as ga_credentials # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore +from google.protobuf.json_format import MessageToJson +import google.protobuf.message + +import grpc # type: ignore +import proto # type: ignore + +from google.cloud.spanner_v1.types import commit_response +from google.cloud.spanner_v1.types import result_set +from google.cloud.spanner_v1.types import spanner +from google.cloud.spanner_v1.types import transaction +from google.cloud.spanner_v1.metrics.metrics_interceptor import MetricsInterceptor +from google.protobuf import empty_pb2 # type: ignore +from .base import SpannerTransport, DEFAULT_CLIENT_INFO + +try: + from google.api_core import client_logging # type: ignore + + CLIENT_LOGGING_SUPPORTED = True # pragma: NO COVER +except ImportError: # pragma: NO COVER + CLIENT_LOGGING_SUPPORTED = False + +_LOGGER = std_logging.getLogger(__name__) + + +class _LoggingClientInterceptor(grpc.UnaryUnaryClientInterceptor): # pragma: NO COVER + def intercept_unary_unary(self, continuation, client_call_details, request): + logging_enabled = CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + std_logging.DEBUG + ) + if logging_enabled: # pragma: NO COVER + request_metadata = client_call_details.metadata + if isinstance(request, proto.Message): + request_payload = type(request).to_json(request) + elif isinstance(request, google.protobuf.message.Message): + request_payload = MessageToJson(request) + else: + request_payload = f"{type(request).__name__}: {pickle.dumps(request)}" + + request_metadata = { + key: value.decode("utf-8") if isinstance(value, bytes) else value + for key, value in request_metadata + } + grpc_request = { + "payload": request_payload, + "requestMethod": "grpc", + "metadata": dict(request_metadata), + } + _LOGGER.debug( + f"Sending request for {client_call_details.method}", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": str(client_call_details.method), + "request": grpc_request, + "metadata": grpc_request["metadata"], + }, + ) + response = continuation(client_call_details, request) + if logging_enabled: # pragma: NO COVER + response_metadata = response.trailing_metadata() + # Convert gRPC metadata `` to list of tuples + metadata = ( + dict([(k, str(v)) for k, v in response_metadata]) + if response_metadata + else None + ) + result = response.result() + if isinstance(result, proto.Message): + response_payload = type(result).to_json(result) + elif isinstance(result, google.protobuf.message.Message): + response_payload = 
MessageToJson(result) + else: + response_payload = f"{type(result).__name__}: {pickle.dumps(result)}" + grpc_response = { + "payload": response_payload, + "metadata": metadata, + "status": "OK", + } + _LOGGER.debug( + f"Received response for {client_call_details.method}.", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": client_call_details.method, + "response": grpc_response, + "metadata": grpc_response["metadata"], + }, + ) + return response + + +class SpannerGrpcTransport(SpannerTransport): + """gRPC backend transport for Spanner. + + Cloud Spanner API + + The Cloud Spanner API can be used to manage sessions and execute + transactions on data stored in Cloud Spanner databases. + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. + + It sends protocol buffers over the wire using gRPC (which is built on + top of HTTP/2); the ``grpcio`` package must be installed. + """ + + _stubs: Dict[str, Callable] + + def __init__( + self, + *, + host: str = "spanner.googleapis.com", + credentials: Optional[ga_credentials.Credentials] = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + channel: Optional[Union[grpc.Channel, Callable[..., grpc.Channel]]] = None, + api_mtls_endpoint: Optional[str] = None, + client_cert_source: Optional[Callable[[], Tuple[bytes, bytes]]] = None, + ssl_channel_credentials: Optional[grpc.ChannelCredentials] = None, + client_cert_source_for_mtls: Optional[Callable[[], Tuple[bytes, bytes]]] = None, + quota_project_id: Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + always_use_jwt_access: Optional[bool] = False, + api_audience: Optional[str] = None, + metrics_interceptor: Optional[MetricsInterceptor] = None, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): + The hostname to connect to (default: 'spanner.googleapis.com'). + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + This argument is ignored if a ``channel`` instance is provided. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is ignored if a ``channel`` instance is provided. + scopes (Optional(Sequence[str])): A list of scopes. This argument is + ignored if a ``channel`` instance is provided. + channel (Optional[Union[grpc.Channel, Callable[..., grpc.Channel]]]): + A ``Channel`` instance through which to make calls, or a Callable + that constructs and returns one. If set to None, ``self.create_channel`` + is used to create the channel. If a Callable is given, it will be called + with the same arguments as used in ``self.create_channel``. + api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint. + If provided, it overrides the ``host`` argument and tries to create + a mutual TLS channel with client SSL credentials from + ``client_cert_source`` or application default SSL credentials. + client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]): + Deprecated. A callback to provide client SSL certificate bytes and + private key bytes, both in PEM format. It is ignored if + ``api_mtls_endpoint`` is None. 
+ ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials + for the grpc channel. It is ignored if a ``channel`` instance is provided. + client_cert_source_for_mtls (Optional[Callable[[], Tuple[bytes, bytes]]]): + A callback to provide client certificate bytes and private key bytes, + both in PEM format. It is used to configure a mutual TLS channel. It is + ignored if a ``channel`` instance or ``ssl_channel_credentials`` is provided. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + always_use_jwt_access (Optional[bool]): Whether self signed JWT should + be used for service account credentials. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport + creation failed for any reason. + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + self._grpc_channel = None + self._ssl_channel_credentials = ssl_channel_credentials + self._stubs: Dict[str, Callable] = {} + self._metrics_interceptor = None + + if api_mtls_endpoint: + warnings.warn("api_mtls_endpoint is deprecated", DeprecationWarning) + if client_cert_source: + warnings.warn("client_cert_source is deprecated", DeprecationWarning) + + if isinstance(channel, grpc.Channel): + # Ignore credentials if a channel was passed. + credentials = None + self._ignore_credentials = True + # If a channel was explicitly provided, set it. + self._grpc_channel = channel + self._ssl_channel_credentials = None + + else: + if api_mtls_endpoint: + host = api_mtls_endpoint + + # Create SSL credentials with client_cert_source or application + # default SSL credentials. + if client_cert_source: + cert, key = client_cert_source() + self._ssl_channel_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + else: + self._ssl_channel_credentials = SslCredentials().ssl_credentials + + else: + if client_cert_source_for_mtls and not ssl_channel_credentials: + cert, key = client_cert_source_for_mtls() + self._ssl_channel_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + + # The base transport sets the host, credentials and scopes + super().__init__( + host=host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes, + quota_project_id=quota_project_id, + client_info=client_info, + always_use_jwt_access=always_use_jwt_access, + api_audience=api_audience, + ) + + if not self._grpc_channel: + # initialize with the provided callable or the default channel + channel_init = channel or type(self).create_channel + self._grpc_channel = channel_init( + self._host, + # use the credentials which are saved + credentials=self._credentials, + # Set ``credentials_file`` to ``None`` here as + # the credentials that we saved earlier should be used. 
+ credentials_file=None, + scopes=self._scopes, + ssl_credentials=self._ssl_channel_credentials, + quota_project_id=quota_project_id, + options=[ + ("grpc.max_send_message_length", -1), + ("grpc.max_receive_message_length", -1), + ], + ) + + # Wrap the gRPC channel with the metric interceptor + if metrics_interceptor is not None: + self._metrics_interceptor = metrics_interceptor + self._grpc_channel = grpc.intercept_channel( + self._grpc_channel, metrics_interceptor + ) + + self._interceptor = _LoggingClientInterceptor() + self._logged_channel = grpc.intercept_channel( + self._grpc_channel, self._interceptor + ) + + # Wrap messages. This must be done after self._logged_channel exists + self._prep_wrapped_messages(client_info) + + @classmethod + def create_channel( + cls, + host: str = "spanner.googleapis.com", + credentials: Optional[ga_credentials.Credentials] = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + quota_project_id: Optional[str] = None, + **kwargs, + ) -> grpc.Channel: + """Create and return a gRPC channel object. + Args: + host (Optional[str]): The host for the channel to use. + credentials (Optional[~.Credentials]): The + authorization credentials to attach to requests. These + credentials identify this application to the service. If + none are specified, the client will attempt to ascertain + the credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is mutually exclusive with credentials. + scopes (Optional[Sequence[str]]): A optional list of scopes needed for this + service. These are only used when credentials are not specified and + are passed to :func:`google.auth.default`. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + kwargs (Optional[dict]): Keyword arguments, which are passed to the + channel creation. + Returns: + grpc.Channel: A gRPC channel object. + + Raises: + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + + return grpc_helpers.create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + quota_project_id=quota_project_id, + default_scopes=cls.AUTH_SCOPES, + scopes=scopes, + default_host=cls.DEFAULT_HOST, + **kwargs, + ) + + @property + def grpc_channel(self) -> grpc.Channel: + """Return the channel designed to connect to this service.""" + return self._grpc_channel + + @property + def create_session( + self, + ) -> Callable[[spanner.CreateSessionRequest], spanner.Session]: + r"""Return a callable for the create session method over gRPC. + + Creates a new session. A session can be used to perform + transactions that read and/or modify data in a Cloud Spanner + database. Sessions are meant to be reused for many consecutive + transactions. + + Sessions can only execute one transaction at a time. To execute + multiple concurrent read-write/write-only transactions, create + multiple sessions. Note that standalone reads and queries use a + transaction internally, and count toward the one transaction + limit. + + Active sessions use additional server resources, so it's a good + idea to delete idle and unneeded sessions. Aside from explicit + deletes, Cloud Spanner can delete sessions when no operations + are sent for more than an hour. If a session is deleted, + requests to it return ``NOT_FOUND``. 
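Two details in the constructor above are easy to miss: an explicitly supplied ``channel`` wins and the credentials arguments are then deliberately ignored, and the default channel is created with unlimited send/receive message sizes. A short sketch of supplying your own channel, assuming application default credentials are available in the environment:

.. code-block:: python

    from google.cloud.spanner_v1.services.spanner.transports.grpc import (
        SpannerGrpcTransport,
    )

    # The helper applies the default scopes and host for the service.
    channel = SpannerGrpcTransport.create_channel("spanner.googleapis.com")

    # Any ``credentials=...`` passed alongside an explicit channel is ignored.
    transport = SpannerGrpcTransport(channel=channel)
    assert transport.kind == "grpc"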
+ + Idle sessions can be kept alive by sending a trivial SQL query + periodically, for example, ``"SELECT 1"``. + + Returns: + Callable[[~.CreateSessionRequest], + ~.Session]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "create_session" not in self._stubs: + self._stubs["create_session"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/CreateSession", + request_serializer=spanner.CreateSessionRequest.serialize, + response_deserializer=spanner.Session.deserialize, + ) + return self._stubs["create_session"] + + @property + def batch_create_sessions( + self, + ) -> Callable[ + [spanner.BatchCreateSessionsRequest], spanner.BatchCreateSessionsResponse + ]: + r"""Return a callable for the batch create sessions method over gRPC. + + Creates multiple new sessions. + + This API can be used to initialize a session cache on + the clients. See https://goo.gl/TgSFN2 for best + practices on session cache management. + + Returns: + Callable[[~.BatchCreateSessionsRequest], + ~.BatchCreateSessionsResponse]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "batch_create_sessions" not in self._stubs: + self._stubs["batch_create_sessions"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/BatchCreateSessions", + request_serializer=spanner.BatchCreateSessionsRequest.serialize, + response_deserializer=spanner.BatchCreateSessionsResponse.deserialize, + ) + return self._stubs["batch_create_sessions"] + + @property + def get_session(self) -> Callable[[spanner.GetSessionRequest], spanner.Session]: + r"""Return a callable for the get session method over gRPC. + + Gets a session. Returns ``NOT_FOUND`` if the session doesn't + exist. This is mainly useful for determining whether a session + is still alive. + + Returns: + Callable[[~.GetSessionRequest], + ~.Session]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "get_session" not in self._stubs: + self._stubs["get_session"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/GetSession", + request_serializer=spanner.GetSessionRequest.serialize, + response_deserializer=spanner.Session.deserialize, + ) + return self._stubs["get_session"] + + @property + def list_sessions( + self, + ) -> Callable[[spanner.ListSessionsRequest], spanner.ListSessionsResponse]: + r"""Return a callable for the list sessions method over gRPC. + + Lists all sessions in a given database. + + Returns: + Callable[[~.ListSessionsRequest], + ~.ListSessionsResponse]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
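The keep-alive advice above (a periodic trivial ``SELECT 1``) is what the library's ``PingingPool`` automates; a hand-rolled sketch against the public database API, with placeholder instance and database IDs:

.. code-block:: python

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")  # placeholders

    # A trivial query exercises a pooled session and keeps it alive.
    with database.snapshot() as snapshot:
        assert list(snapshot.execute_sql("SELECT 1")) == [[1]]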
+ if "list_sessions" not in self._stubs: + self._stubs["list_sessions"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/ListSessions", + request_serializer=spanner.ListSessionsRequest.serialize, + response_deserializer=spanner.ListSessionsResponse.deserialize, + ) + return self._stubs["list_sessions"] + + @property + def delete_session( + self, + ) -> Callable[[spanner.DeleteSessionRequest], empty_pb2.Empty]: + r"""Return a callable for the delete session method over gRPC. + + Ends a session, releasing server resources associated + with it. This asynchronously triggers the cancellation + of any operations that are running with this session. + + Returns: + Callable[[~.DeleteSessionRequest], + ~.Empty]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "delete_session" not in self._stubs: + self._stubs["delete_session"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/DeleteSession", + request_serializer=spanner.DeleteSessionRequest.serialize, + response_deserializer=empty_pb2.Empty.FromString, + ) + return self._stubs["delete_session"] + + @property + def execute_sql( + self, + ) -> Callable[[spanner.ExecuteSqlRequest], result_set.ResultSet]: + r"""Return a callable for the execute sql method over gRPC. + + Executes an SQL statement, returning all results in a single + reply. This method can't be used to return a result set larger + than 10 MiB; if the query yields more data than that, the query + fails with a ``FAILED_PRECONDITION`` error. + + Operations inside read-write transactions might return + ``ABORTED``. If this occurs, the application should restart the + transaction from the beginning. See + [Transaction][google.spanner.v1.Transaction] for more details. + + Larger result sets can be fetched in streaming fashion by + calling + [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql] + instead. + + The query string can be SQL or `Graph Query Language + (GQL) `__. + + Returns: + Callable[[~.ExecuteSqlRequest], + ~.ResultSet]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "execute_sql" not in self._stubs: + self._stubs["execute_sql"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/ExecuteSql", + request_serializer=spanner.ExecuteSqlRequest.serialize, + response_deserializer=result_set.ResultSet.deserialize, + ) + return self._stubs["execute_sql"] + + @property + def execute_streaming_sql( + self, + ) -> Callable[[spanner.ExecuteSqlRequest], result_set.PartialResultSet]: + r"""Return a callable for the execute streaming sql method over gRPC. + + Like [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql], except + returns the result set as a stream. Unlike + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql], there is no + limit on the size of the returned result set. However, no + individual row in the result set can exceed 100 MiB, and no + column value can exceed 10 MiB. + + The query string can be SQL or `Graph Query Language + (GQL) `__. 
+ + Returns: + Callable[[~.ExecuteSqlRequest], + ~.PartialResultSet]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "execute_streaming_sql" not in self._stubs: + self._stubs["execute_streaming_sql"] = self._logged_channel.unary_stream( + "/google.spanner.v1.Spanner/ExecuteStreamingSql", + request_serializer=spanner.ExecuteSqlRequest.serialize, + response_deserializer=result_set.PartialResultSet.deserialize, + ) + return self._stubs["execute_streaming_sql"] + + @property + def execute_batch_dml( + self, + ) -> Callable[[spanner.ExecuteBatchDmlRequest], spanner.ExecuteBatchDmlResponse]: + r"""Return a callable for the execute batch dml method over gRPC. + + Executes a batch of SQL DML statements. This method allows many + statements to be run with lower latency than submitting them + sequentially with + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. + + Statements are executed in sequential order. A request can + succeed even if a statement fails. The + [ExecuteBatchDmlResponse.status][google.spanner.v1.ExecuteBatchDmlResponse.status] + field in the response provides information about the statement + that failed. Clients must inspect this field to determine + whether an error occurred. + + Execution stops after the first failed statement; the remaining + statements are not executed. + + Returns: + Callable[[~.ExecuteBatchDmlRequest], + ~.ExecuteBatchDmlResponse]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "execute_batch_dml" not in self._stubs: + self._stubs["execute_batch_dml"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/ExecuteBatchDml", + request_serializer=spanner.ExecuteBatchDmlRequest.serialize, + response_deserializer=spanner.ExecuteBatchDmlResponse.deserialize, + ) + return self._stubs["execute_batch_dml"] + + @property + def read(self) -> Callable[[spanner.ReadRequest], result_set.ResultSet]: + r"""Return a callable for the read method over gRPC. + + Reads rows from the database using key lookups and scans, as a + simple key/value style alternative to + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. This method + can't be used to return a result set larger than 10 MiB; if the + read matches more data than that, the read fails with a + ``FAILED_PRECONDITION`` error. + + Reads inside read-write transactions might return ``ABORTED``. + If this occurs, the application should restart the transaction + from the beginning. See + [Transaction][google.spanner.v1.Transaction] for more details. + + Larger result sets can be yielded in streaming fashion by + calling [StreamingRead][google.spanner.v1.Spanner.StreamingRead] + instead. + + Returns: + Callable[[~.ReadRequest], + ~.ResultSet]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
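The batch-DML semantics above (execution stops at the first failure, so the status must be checked) surface in the hand-written client as ``Transaction.batch_update``. A sketch with illustrative SQL and placeholder IDs:

.. code-block:: python

    from google.cloud import spanner

    database = spanner.Client().instance("my-instance").database("my-database")

    def do_updates(transaction):
        status, row_counts = transaction.batch_update(
            [
                "UPDATE Singers SET SingerName = 'A' WHERE SingerId = 1",
                "UPDATE Singers SET SingerName = 'B' WHERE SingerId = 2",
            ]
        )
        # A non-OK status means one statement failed and later ones did not run.
        if status.code != 0:
            raise RuntimeError(status.message)

    database.run_in_transaction(do_updates)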
+ if "read" not in self._stubs: + self._stubs["read"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/Read", + request_serializer=spanner.ReadRequest.serialize, + response_deserializer=result_set.ResultSet.deserialize, + ) + return self._stubs["read"] + + @property + def streaming_read( + self, + ) -> Callable[[spanner.ReadRequest], result_set.PartialResultSet]: + r"""Return a callable for the streaming read method over gRPC. + + Like [Read][google.spanner.v1.Spanner.Read], except returns the + result set as a stream. Unlike + [Read][google.spanner.v1.Spanner.Read], there is no limit on the + size of the returned result set. However, no individual row in + the result set can exceed 100 MiB, and no column value can + exceed 10 MiB. + + Returns: + Callable[[~.ReadRequest], + ~.PartialResultSet]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "streaming_read" not in self._stubs: + self._stubs["streaming_read"] = self._logged_channel.unary_stream( + "/google.spanner.v1.Spanner/StreamingRead", + request_serializer=spanner.ReadRequest.serialize, + response_deserializer=result_set.PartialResultSet.deserialize, + ) + return self._stubs["streaming_read"] + + @property + def begin_transaction( + self, + ) -> Callable[[spanner.BeginTransactionRequest], transaction.Transaction]: + r"""Return a callable for the begin transaction method over gRPC. + + Begins a new transaction. This step can often be skipped: + [Read][google.spanner.v1.Spanner.Read], + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] and + [Commit][google.spanner.v1.Spanner.Commit] can begin a new + transaction as a side-effect. + + Returns: + Callable[[~.BeginTransactionRequest], + ~.Transaction]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "begin_transaction" not in self._stubs: + self._stubs["begin_transaction"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/BeginTransaction", + request_serializer=spanner.BeginTransactionRequest.serialize, + response_deserializer=transaction.Transaction.deserialize, + ) + return self._stubs["begin_transaction"] + + @property + def commit( + self, + ) -> Callable[[spanner.CommitRequest], commit_response.CommitResponse]: + r"""Return a callable for the commit method over gRPC. + + Commits a transaction. The request includes the mutations to be + applied to rows in the database. + + ``Commit`` might return an ``ABORTED`` error. This can occur at + any time; commonly, the cause is conflicts with concurrent + transactions. However, it can also happen for a variety of other + reasons. If ``Commit`` returns ``ABORTED``, the caller should + retry the transaction from the beginning, reusing the same + session. + + On very rare occasions, ``Commit`` might return ``UNKNOWN``. + This can happen, for example, if the client job experiences a 1+ + hour networking failure. At that point, Cloud Spanner has lost + track of the transaction outcome and we recommend that you + perform another read from the database to see the state of + things as they are now. 
+ + Returns: + Callable[[~.CommitRequest], + ~.CommitResponse]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "commit" not in self._stubs: + self._stubs["commit"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/Commit", + request_serializer=spanner.CommitRequest.serialize, + response_deserializer=commit_response.CommitResponse.deserialize, + ) + return self._stubs["commit"] + + @property + def rollback(self) -> Callable[[spanner.RollbackRequest], empty_pb2.Empty]: + r"""Return a callable for the rollback method over gRPC. + + Rolls back a transaction, releasing any locks it holds. It's a + good idea to call this for any transaction that includes one or + more [Read][google.spanner.v1.Spanner.Read] or + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] requests and + ultimately decides not to commit. + + ``Rollback`` returns ``OK`` if it successfully aborts the + transaction, the transaction was already aborted, or the + transaction isn't found. ``Rollback`` never returns ``ABORTED``. + + Returns: + Callable[[~.RollbackRequest], + ~.Empty]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "rollback" not in self._stubs: + self._stubs["rollback"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/Rollback", + request_serializer=spanner.RollbackRequest.serialize, + response_deserializer=empty_pb2.Empty.FromString, + ) + return self._stubs["rollback"] + + @property + def partition_query( + self, + ) -> Callable[[spanner.PartitionQueryRequest], spanner.PartitionResponse]: + r"""Return a callable for the partition query method over gRPC. + + Creates a set of partition tokens that can be used to execute a + query operation in parallel. Each of the returned partition + tokens can be used by + [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql] + to specify a subset of the query result to read. The same + session and read-only transaction must be used by the + ``PartitionQueryRequest`` used to create the partition tokens + and the ``ExecuteSqlRequests`` that use the partition tokens. + + Partition tokens become invalid when the session used to create + them is deleted, is idle for too long, begins a new transaction, + or becomes too old. When any of these happen, it isn't possible + to resume the query, and the whole operation must be restarted + from the beginning. + + Returns: + Callable[[~.PartitionQueryRequest], + ~.PartitionResponse]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
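The partition-token workflow described above is wrapped by ``Database.batch_snapshot`` in the hand-written client: one coordinator generates batches, and any worker can process a batch against the same read-only snapshot. A sketch with a hypothetical query:

.. code-block:: python

    from google.cloud import spanner

    database = spanner.Client().instance("my-instance").database("my-database")

    batch_snapshot = database.batch_snapshot()
    batches = list(
        batch_snapshot.generate_query_batches("SELECT SingerId FROM Singers")
    )
    for batch in batches:  # each batch could be handed to a separate worker
        for row in batch_snapshot.process_query_batch(batch):
            print(row)
    batch_snapshot.close()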
+ if "partition_query" not in self._stubs: + self._stubs["partition_query"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/PartitionQuery", + request_serializer=spanner.PartitionQueryRequest.serialize, + response_deserializer=spanner.PartitionResponse.deserialize, + ) + return self._stubs["partition_query"] + + @property + def partition_read( + self, + ) -> Callable[[spanner.PartitionReadRequest], spanner.PartitionResponse]: + r"""Return a callable for the partition read method over gRPC. + + Creates a set of partition tokens that can be used to execute a + read operation in parallel. Each of the returned partition + tokens can be used by + [StreamingRead][google.spanner.v1.Spanner.StreamingRead] to + specify a subset of the read result to read. The same session + and read-only transaction must be used by the + ``PartitionReadRequest`` used to create the partition tokens and + the ``ReadRequests`` that use the partition tokens. There are no + ordering guarantees on rows returned among the returned + partition tokens, or even within each individual + ``StreamingRead`` call issued with a ``partition_token``. + + Partition tokens become invalid when the session used to create + them is deleted, is idle for too long, begins a new transaction, + or becomes too old. When any of these happen, it isn't possible + to resume the read, and the whole operation must be restarted + from the beginning. + + Returns: + Callable[[~.PartitionReadRequest], + ~.PartitionResponse]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "partition_read" not in self._stubs: + self._stubs["partition_read"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/PartitionRead", + request_serializer=spanner.PartitionReadRequest.serialize, + response_deserializer=spanner.PartitionResponse.deserialize, + ) + return self._stubs["partition_read"] + + @property + def batch_write( + self, + ) -> Callable[[spanner.BatchWriteRequest], spanner.BatchWriteResponse]: + r"""Return a callable for the batch write method over gRPC. + + Batches the supplied mutation groups in a collection of + efficient transactions. All mutations in a group are committed + atomically. However, mutations across groups can be committed + non-atomically in an unspecified order and thus, they must be + independent of each other. Partial failure is possible, that is, + some groups might have been committed successfully, while some + might have failed. The results of individual batches are + streamed into the response as the batches are applied. + + ``BatchWrite`` requests are not replay protected, meaning that + each mutation group can be applied more than once. Replays of + non-idempotent mutations can have undesirable effects. For + example, replays of an insert mutation can produce an already + exists error or if you use generated or commit timestamp-based + keys, it can result in additional rows being added to the + mutation's table. We recommend structuring your mutation groups + to be idempotent to avoid this issue. + + Returns: + Callable[[~.BatchWriteRequest], + ~.BatchWriteResponse]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. 
+ # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "batch_write" not in self._stubs: + self._stubs["batch_write"] = self._logged_channel.unary_stream( + "/google.spanner.v1.Spanner/BatchWrite", + request_serializer=spanner.BatchWriteRequest.serialize, + response_deserializer=spanner.BatchWriteResponse.deserialize, + ) + return self._stubs["batch_write"] + + def close(self): + self._logged_channel.close() + + @property + def kind(self) -> str: + return "grpc" + + +__all__ = ("SpannerGrpcTransport",) diff --git a/google/cloud/spanner_v1/services/spanner/transports/grpc_asyncio.py b/google/cloud/spanner_v1/services/spanner/transports/grpc_asyncio.py new file mode 100644 index 0000000000..2c6cec52a9 --- /dev/null +++ b/google/cloud/spanner_v1/services/spanner/transports/grpc_asyncio.py @@ -0,0 +1,1147 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +import inspect +import json +import pickle +import logging as std_logging +import warnings +from typing import Awaitable, Callable, Dict, Optional, Sequence, Tuple, Union + +from google.api_core import gapic_v1 +from google.api_core import grpc_helpers_async +from google.api_core import exceptions as core_exceptions +from google.api_core import retry_async as retries +from google.auth import credentials as ga_credentials # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore +from google.protobuf.json_format import MessageToJson +import google.protobuf.message + +import grpc # type: ignore +import proto # type: ignore +from grpc.experimental import aio # type: ignore + +from google.cloud.spanner_v1.types import commit_response +from google.cloud.spanner_v1.types import result_set +from google.cloud.spanner_v1.types import spanner +from google.cloud.spanner_v1.types import transaction +from google.cloud.spanner_v1.metrics.metrics_interceptor import MetricsInterceptor +from google.protobuf import empty_pb2 # type: ignore +from .base import SpannerTransport, DEFAULT_CLIENT_INFO +from .grpc import SpannerGrpcTransport + +try: + from google.api_core import client_logging # type: ignore + + CLIENT_LOGGING_SUPPORTED = True # pragma: NO COVER +except ImportError: # pragma: NO COVER + CLIENT_LOGGING_SUPPORTED = False + +_LOGGER = std_logging.getLogger(__name__) + + +class _LoggingClientAIOInterceptor( + grpc.aio.UnaryUnaryClientInterceptor +): # pragma: NO COVER + async def intercept_unary_unary(self, continuation, client_call_details, request): + logging_enabled = CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + std_logging.DEBUG + ) + if logging_enabled: # pragma: NO COVER + request_metadata = client_call_details.metadata + if isinstance(request, proto.Message): + request_payload = type(request).to_json(request) + elif isinstance(request, google.protobuf.message.Message): + request_payload = MessageToJson(request) + else: + request_payload = f"{type(request).__name__}: {pickle.dumps(request)}" + + 
request_metadata = { + key: value.decode("utf-8") if isinstance(value, bytes) else value + for key, value in request_metadata + } + grpc_request = { + "payload": request_payload, + "requestMethod": "grpc", + "metadata": dict(request_metadata), + } + _LOGGER.debug( + f"Sending request for {client_call_details.method}", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": str(client_call_details.method), + "request": grpc_request, + "metadata": grpc_request["metadata"], + }, + ) + response = await continuation(client_call_details, request) + if logging_enabled: # pragma: NO COVER + response_metadata = await response.trailing_metadata() + # Convert gRPC metadata `` to list of tuples + metadata = ( + dict([(k, str(v)) for k, v in response_metadata]) + if response_metadata + else None + ) + result = await response + if isinstance(result, proto.Message): + response_payload = type(result).to_json(result) + elif isinstance(result, google.protobuf.message.Message): + response_payload = MessageToJson(result) + else: + response_payload = f"{type(result).__name__}: {pickle.dumps(result)}" + grpc_response = { + "payload": response_payload, + "metadata": metadata, + "status": "OK", + } + _LOGGER.debug( + f"Received response to rpc {client_call_details.method}.", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": str(client_call_details.method), + "response": grpc_response, + "metadata": grpc_response["metadata"], + }, + ) + return response + + +class SpannerGrpcAsyncIOTransport(SpannerTransport): + """gRPC AsyncIO backend transport for Spanner. + + Cloud Spanner API + + The Cloud Spanner API can be used to manage sessions and execute + transactions on data stored in Cloud Spanner databases. + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. + + It sends protocol buffers over the wire using gRPC (which is built on + top of HTTP/2); the ``grpcio`` package must be installed. + """ + + _grpc_channel: aio.Channel + _stubs: Dict[str, Callable] = {} + + @classmethod + def create_channel( + cls, + host: str = "spanner.googleapis.com", + credentials: Optional[ga_credentials.Credentials] = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + quota_project_id: Optional[str] = None, + **kwargs, + ) -> aio.Channel: + """Create and return a gRPC AsyncIO channel object. + Args: + host (Optional[str]): The host for the channel to use. + credentials (Optional[~.Credentials]): The + authorization credentials to attach to requests. These + credentials identify this application to the service. If + none are specified, the client will attempt to ascertain + the credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + scopes (Optional[Sequence[str]]): A optional list of scopes needed for this + service. These are only used when credentials are not specified and + are passed to :func:`google.auth.default`. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + kwargs (Optional[dict]): Keyword arguments, which are passed to the + channel creation. + Returns: + aio.Channel: A gRPC AsyncIO channel object. 
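A sketch of standing up the AsyncIO transport on such a channel; the host shown is the public default, and application default credentials are assumed:

.. code-block:: python

    import asyncio

    from google.cloud.spanner_v1.services.spanner.transports.grpc_asyncio import (
        SpannerGrpcAsyncIOTransport,
    )

    async def main():
        channel = SpannerGrpcAsyncIOTransport.create_channel("spanner.googleapis.com")
        transport = SpannerGrpcAsyncIOTransport(channel=channel)
        # ... issue RPCs through a client bound to this transport ...
        await transport.grpc_channel.close()

    asyncio.run(main())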
+ """ + + return grpc_helpers_async.create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + quota_project_id=quota_project_id, + default_scopes=cls.AUTH_SCOPES, + scopes=scopes, + default_host=cls.DEFAULT_HOST, + **kwargs, + ) + + def __init__( + self, + *, + host: str = "spanner.googleapis.com", + credentials: Optional[ga_credentials.Credentials] = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + channel: Optional[Union[aio.Channel, Callable[..., aio.Channel]]] = None, + api_mtls_endpoint: Optional[str] = None, + client_cert_source: Optional[Callable[[], Tuple[bytes, bytes]]] = None, + ssl_channel_credentials: Optional[grpc.ChannelCredentials] = None, + client_cert_source_for_mtls: Optional[Callable[[], Tuple[bytes, bytes]]] = None, + quota_project_id: Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + always_use_jwt_access: Optional[bool] = False, + api_audience: Optional[str] = None, + metrics_interceptor: Optional[MetricsInterceptor] = None, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): + The hostname to connect to (default: 'spanner.googleapis.com'). + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + This argument is ignored if a ``channel`` instance is provided. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is ignored if a ``channel`` instance is provided. + scopes (Optional[Sequence[str]]): A optional list of scopes needed for this + service. These are only used when credentials are not specified and + are passed to :func:`google.auth.default`. + channel (Optional[Union[aio.Channel, Callable[..., aio.Channel]]]): + A ``Channel`` instance through which to make calls, or a Callable + that constructs and returns one. If set to None, ``self.create_channel`` + is used to create the channel. If a Callable is given, it will be called + with the same arguments as used in ``self.create_channel``. + api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint. + If provided, it overrides the ``host`` argument and tries to create + a mutual TLS channel with client SSL credentials from + ``client_cert_source`` or application default SSL credentials. + client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]): + Deprecated. A callback to provide client SSL certificate bytes and + private key bytes, both in PEM format. It is ignored if + ``api_mtls_endpoint`` is None. + ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials + for the grpc channel. It is ignored if a ``channel`` instance is provided. + client_cert_source_for_mtls (Optional[Callable[[], Tuple[bytes, bytes]]]): + A callback to provide client certificate bytes and private key bytes, + both in PEM format. It is used to configure a mutual TLS channel. It is + ignored if a ``channel`` instance or ``ssl_channel_credentials`` is provided. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. 
+ Generally, you only need to set this if you're developing + your own client library. + always_use_jwt_access (Optional[bool]): Whether self signed JWT should + be used for service account credentials. + + Raises: + google.auth.exceptions.MutualTlsChannelError: If mutual TLS transport + creation failed for any reason. + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + self._grpc_channel = None + self._ssl_channel_credentials = ssl_channel_credentials + self._stubs: Dict[str, Callable] = {} + + if api_mtls_endpoint: + warnings.warn("api_mtls_endpoint is deprecated", DeprecationWarning) + if client_cert_source: + warnings.warn("client_cert_source is deprecated", DeprecationWarning) + + if isinstance(channel, aio.Channel): + # Ignore credentials if a channel was passed. + credentials = None + self._ignore_credentials = True + # If a channel was explicitly provided, set it. + self._grpc_channel = channel + self._ssl_channel_credentials = None + else: + if api_mtls_endpoint: + host = api_mtls_endpoint + + # Create SSL credentials with client_cert_source or application + # default SSL credentials. + if client_cert_source: + cert, key = client_cert_source() + self._ssl_channel_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + else: + self._ssl_channel_credentials = SslCredentials().ssl_credentials + + else: + if client_cert_source_for_mtls and not ssl_channel_credentials: + cert, key = client_cert_source_for_mtls() + self._ssl_channel_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + + # The base transport sets the host, credentials and scopes + super().__init__( + host=host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes, + quota_project_id=quota_project_id, + client_info=client_info, + always_use_jwt_access=always_use_jwt_access, + api_audience=api_audience, + ) + + if not self._grpc_channel: + # initialize with the provided callable or the default channel + channel_init = channel or type(self).create_channel + self._grpc_channel = channel_init( + self._host, + # use the credentials which are saved + credentials=self._credentials, + # Set ``credentials_file`` to ``None`` here as + # the credentials that we saved earlier should be used. + credentials_file=None, + scopes=self._scopes, + ssl_credentials=self._ssl_channel_credentials, + quota_project_id=quota_project_id, + options=[ + ("grpc.max_send_message_length", -1), + ("grpc.max_receive_message_length", -1), + ], + ) + + self._interceptor = _LoggingClientAIOInterceptor() + self._grpc_channel._unary_unary_interceptors.append(self._interceptor) + self._logged_channel = self._grpc_channel + self._wrap_with_kind = ( + "kind" in inspect.signature(gapic_v1.method_async.wrap_method).parameters + ) + # Wrap messages. This must be done after self._logged_channel exists + self._prep_wrapped_messages(client_info) + + @property + def grpc_channel(self) -> aio.Channel: + """Create the channel designed to connect to this service. + + This property caches on the instance; repeated calls return + the same channel. + """ + # Return the channel from cache. + return self._grpc_channel + + @property + def create_session( + self, + ) -> Callable[[spanner.CreateSessionRequest], Awaitable[spanner.Session]]: + r"""Return a callable for the create session method over gRPC. + + Creates a new session. 
A session can be used to perform + transactions that read and/or modify data in a Cloud Spanner + database. Sessions are meant to be reused for many consecutive + transactions. + + Sessions can only execute one transaction at a time. To execute + multiple concurrent read-write/write-only transactions, create + multiple sessions. Note that standalone reads and queries use a + transaction internally, and count toward the one transaction + limit. + + Active sessions use additional server resources, so it's a good + idea to delete idle and unneeded sessions. Aside from explicit + deletes, Cloud Spanner can delete sessions when no operations + are sent for more than an hour. If a session is deleted, + requests to it return ``NOT_FOUND``. + + Idle sessions can be kept alive by sending a trivial SQL query + periodically, for example, ``"SELECT 1"``. + + Returns: + Callable[[~.CreateSessionRequest], + Awaitable[~.Session]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "create_session" not in self._stubs: + self._stubs["create_session"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/CreateSession", + request_serializer=spanner.CreateSessionRequest.serialize, + response_deserializer=spanner.Session.deserialize, + ) + return self._stubs["create_session"] + + @property + def batch_create_sessions( + self, + ) -> Callable[ + [spanner.BatchCreateSessionsRequest], + Awaitable[spanner.BatchCreateSessionsResponse], + ]: + r"""Return a callable for the batch create sessions method over gRPC. + + Creates multiple new sessions. + + This API can be used to initialize a session cache on + the clients. See https://goo.gl/TgSFN2 for best + practices on session cache management. + + Returns: + Callable[[~.BatchCreateSessionsRequest], + Awaitable[~.BatchCreateSessionsResponse]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "batch_create_sessions" not in self._stubs: + self._stubs["batch_create_sessions"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/BatchCreateSessions", + request_serializer=spanner.BatchCreateSessionsRequest.serialize, + response_deserializer=spanner.BatchCreateSessionsResponse.deserialize, + ) + return self._stubs["batch_create_sessions"] + + @property + def get_session( + self, + ) -> Callable[[spanner.GetSessionRequest], Awaitable[spanner.Session]]: + r"""Return a callable for the get session method over gRPC. + + Gets a session. Returns ``NOT_FOUND`` if the session doesn't + exist. This is mainly useful for determining whether a session + is still alive. + + Returns: + Callable[[~.GetSessionRequest], + Awaitable[~.Session]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "get_session" not in self._stubs: + self._stubs["get_session"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/GetSession", + request_serializer=spanner.GetSessionRequest.serialize, + response_deserializer=spanner.Session.deserialize, + ) + return self._stubs["get_session"] + + @property + def list_sessions( + self, + ) -> Callable[ + [spanner.ListSessionsRequest], Awaitable[spanner.ListSessionsResponse] + ]: + r"""Return a callable for the list sessions method over gRPC. + + Lists all sessions in a given database. + + Returns: + Callable[[~.ListSessionsRequest], + Awaitable[~.ListSessionsResponse]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "list_sessions" not in self._stubs: + self._stubs["list_sessions"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/ListSessions", + request_serializer=spanner.ListSessionsRequest.serialize, + response_deserializer=spanner.ListSessionsResponse.deserialize, + ) + return self._stubs["list_sessions"] + + @property + def delete_session( + self, + ) -> Callable[[spanner.DeleteSessionRequest], Awaitable[empty_pb2.Empty]]: + r"""Return a callable for the delete session method over gRPC. + + Ends a session, releasing server resources associated + with it. This asynchronously triggers the cancellation + of any operations that are running with this session. + + Returns: + Callable[[~.DeleteSessionRequest], + Awaitable[~.Empty]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "delete_session" not in self._stubs: + self._stubs["delete_session"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/DeleteSession", + request_serializer=spanner.DeleteSessionRequest.serialize, + response_deserializer=empty_pb2.Empty.FromString, + ) + return self._stubs["delete_session"] + + @property + def execute_sql( + self, + ) -> Callable[[spanner.ExecuteSqlRequest], Awaitable[result_set.ResultSet]]: + r"""Return a callable for the execute sql method over gRPC. + + Executes an SQL statement, returning all results in a single + reply. This method can't be used to return a result set larger + than 10 MiB; if the query yields more data than that, the query + fails with a ``FAILED_PRECONDITION`` error. + + Operations inside read-write transactions might return + ``ABORTED``. If this occurs, the application should restart the + transaction from the beginning. See + [Transaction][google.spanner.v1.Transaction] for more details. + + Larger result sets can be fetched in streaming fashion by + calling + [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql] + instead. + + The query string can be SQL or `Graph Query Language + (GQL) `__. + + Returns: + Callable[[~.ExecuteSqlRequest], + Awaitable[~.ResultSet]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "execute_sql" not in self._stubs: + self._stubs["execute_sql"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/ExecuteSql", + request_serializer=spanner.ExecuteSqlRequest.serialize, + response_deserializer=result_set.ResultSet.deserialize, + ) + return self._stubs["execute_sql"] + + @property + def execute_streaming_sql( + self, + ) -> Callable[[spanner.ExecuteSqlRequest], Awaitable[result_set.PartialResultSet]]: + r"""Return a callable for the execute streaming sql method over gRPC. + + Like [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql], except + returns the result set as a stream. Unlike + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql], there is no + limit on the size of the returned result set. However, no + individual row in the result set can exceed 100 MiB, and no + column value can exceed 10 MiB. + + The query string can be SQL or `Graph Query Language + (GQL) `__. + + Returns: + Callable[[~.ExecuteSqlRequest], + Awaitable[~.PartialResultSet]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "execute_streaming_sql" not in self._stubs: + self._stubs["execute_streaming_sql"] = self._logged_channel.unary_stream( + "/google.spanner.v1.Spanner/ExecuteStreamingSql", + request_serializer=spanner.ExecuteSqlRequest.serialize, + response_deserializer=result_set.PartialResultSet.deserialize, + ) + return self._stubs["execute_streaming_sql"] + + @property + def execute_batch_dml( + self, + ) -> Callable[ + [spanner.ExecuteBatchDmlRequest], Awaitable[spanner.ExecuteBatchDmlResponse] + ]: + r"""Return a callable for the execute batch dml method over gRPC. + + Executes a batch of SQL DML statements. This method allows many + statements to be run with lower latency than submitting them + sequentially with + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. + + Statements are executed in sequential order. A request can + succeed even if a statement fails. The + [ExecuteBatchDmlResponse.status][google.spanner.v1.ExecuteBatchDmlResponse.status] + field in the response provides information about the statement + that failed. Clients must inspect this field to determine + whether an error occurred. + + Execution stops after the first failed statement; the remaining + statements are not executed. + + Returns: + Callable[[~.ExecuteBatchDmlRequest], + Awaitable[~.ExecuteBatchDmlResponse]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "execute_batch_dml" not in self._stubs: + self._stubs["execute_batch_dml"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/ExecuteBatchDml", + request_serializer=spanner.ExecuteBatchDmlRequest.serialize, + response_deserializer=spanner.ExecuteBatchDmlResponse.deserialize, + ) + return self._stubs["execute_batch_dml"] + + @property + def read(self) -> Callable[[spanner.ReadRequest], Awaitable[result_set.ResultSet]]: + r"""Return a callable for the read method over gRPC. + + Reads rows from the database using key lookups and scans, as a + simple key/value style alternative to + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. 
This method + can't be used to return a result set larger than 10 MiB; if the + read matches more data than that, the read fails with a + ``FAILED_PRECONDITION`` error. + + Reads inside read-write transactions might return ``ABORTED``. + If this occurs, the application should restart the transaction + from the beginning. See + [Transaction][google.spanner.v1.Transaction] for more details. + + Larger result sets can be yielded in streaming fashion by + calling [StreamingRead][google.spanner.v1.Spanner.StreamingRead] + instead. + + Returns: + Callable[[~.ReadRequest], + Awaitable[~.ResultSet]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "read" not in self._stubs: + self._stubs["read"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/Read", + request_serializer=spanner.ReadRequest.serialize, + response_deserializer=result_set.ResultSet.deserialize, + ) + return self._stubs["read"] + + @property + def streaming_read( + self, + ) -> Callable[[spanner.ReadRequest], Awaitable[result_set.PartialResultSet]]: + r"""Return a callable for the streaming read method over gRPC. + + Like [Read][google.spanner.v1.Spanner.Read], except returns the + result set as a stream. Unlike + [Read][google.spanner.v1.Spanner.Read], there is no limit on the + size of the returned result set. However, no individual row in + the result set can exceed 100 MiB, and no column value can + exceed 10 MiB. + + Returns: + Callable[[~.ReadRequest], + Awaitable[~.PartialResultSet]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "streaming_read" not in self._stubs: + self._stubs["streaming_read"] = self._logged_channel.unary_stream( + "/google.spanner.v1.Spanner/StreamingRead", + request_serializer=spanner.ReadRequest.serialize, + response_deserializer=result_set.PartialResultSet.deserialize, + ) + return self._stubs["streaming_read"] + + @property + def begin_transaction( + self, + ) -> Callable[ + [spanner.BeginTransactionRequest], Awaitable[transaction.Transaction] + ]: + r"""Return a callable for the begin transaction method over gRPC. + + Begins a new transaction. This step can often be skipped: + [Read][google.spanner.v1.Spanner.Read], + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] and + [Commit][google.spanner.v1.Spanner.Commit] can begin a new + transaction as a side-effect. + + Returns: + Callable[[~.BeginTransactionRequest], + Awaitable[~.Transaction]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
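To illustrate the ``read`` / ``streaming_read`` split described above, a small sketch of consuming partial result sets for a large scan (resource names assumed; in practice the handwritten ``StreamedResultSet`` helper merges chunked values for you):

.. code-block:: python

    from google.cloud.spanner_v1.services.spanner import SpannerClient
    from google.cloud.spanner_v1.types import ReadRequest

    client = SpannerClient()
    request = ReadRequest(
        session="projects/p/instances/i/databases/d/sessions/s",
        table="albums",
        columns=["id", "title"],
        key_set={"all": True},
    )
    # Unlike read(), streaming_read() has no 10 MiB result-set ceiling.
    for partial in client.streaming_read(request):
        print(list(partial.values))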
+ if "begin_transaction" not in self._stubs: + self._stubs["begin_transaction"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/BeginTransaction", + request_serializer=spanner.BeginTransactionRequest.serialize, + response_deserializer=transaction.Transaction.deserialize, + ) + return self._stubs["begin_transaction"] + + @property + def commit( + self, + ) -> Callable[[spanner.CommitRequest], Awaitable[commit_response.CommitResponse]]: + r"""Return a callable for the commit method over gRPC. + + Commits a transaction. The request includes the mutations to be + applied to rows in the database. + + ``Commit`` might return an ``ABORTED`` error. This can occur at + any time; commonly, the cause is conflicts with concurrent + transactions. However, it can also happen for a variety of other + reasons. If ``Commit`` returns ``ABORTED``, the caller should + retry the transaction from the beginning, reusing the same + session. + + On very rare occasions, ``Commit`` might return ``UNKNOWN``. + This can happen, for example, if the client job experiences a 1+ + hour networking failure. At that point, Cloud Spanner has lost + track of the transaction outcome and we recommend that you + perform another read from the database to see the state of + things as they are now. + + Returns: + Callable[[~.CommitRequest], + Awaitable[~.CommitResponse]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "commit" not in self._stubs: + self._stubs["commit"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/Commit", + request_serializer=spanner.CommitRequest.serialize, + response_deserializer=commit_response.CommitResponse.deserialize, + ) + return self._stubs["commit"] + + @property + def rollback( + self, + ) -> Callable[[spanner.RollbackRequest], Awaitable[empty_pb2.Empty]]: + r"""Return a callable for the rollback method over gRPC. + + Rolls back a transaction, releasing any locks it holds. It's a + good idea to call this for any transaction that includes one or + more [Read][google.spanner.v1.Spanner.Read] or + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] requests and + ultimately decides not to commit. + + ``Rollback`` returns ``OK`` if it successfully aborts the + transaction, the transaction was already aborted, or the + transaction isn't found. ``Rollback`` never returns ``ABORTED``. + + Returns: + Callable[[~.RollbackRequest], + Awaitable[~.Empty]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "rollback" not in self._stubs: + self._stubs["rollback"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/Rollback", + request_serializer=spanner.RollbackRequest.serialize, + response_deserializer=empty_pb2.Empty.FromString, + ) + return self._stubs["rollback"] + + @property + def partition_query( + self, + ) -> Callable[ + [spanner.PartitionQueryRequest], Awaitable[spanner.PartitionResponse] + ]: + r"""Return a callable for the partition query method over gRPC. + + Creates a set of partition tokens that can be used to execute a + query operation in parallel. 
Each of the returned partition + tokens can be used by + [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql] + to specify a subset of the query result to read. The same + session and read-only transaction must be used by the + ``PartitionQueryRequest`` used to create the partition tokens + and the ``ExecuteSqlRequests`` that use the partition tokens. + + Partition tokens become invalid when the session used to create + them is deleted, is idle for too long, begins a new transaction, + or becomes too old. When any of these happen, it isn't possible + to resume the query, and the whole operation must be restarted + from the beginning. + + Returns: + Callable[[~.PartitionQueryRequest], + Awaitable[~.PartitionResponse]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "partition_query" not in self._stubs: + self._stubs["partition_query"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/PartitionQuery", + request_serializer=spanner.PartitionQueryRequest.serialize, + response_deserializer=spanner.PartitionResponse.deserialize, + ) + return self._stubs["partition_query"] + + @property + def partition_read( + self, + ) -> Callable[[spanner.PartitionReadRequest], Awaitable[spanner.PartitionResponse]]: + r"""Return a callable for the partition read method over gRPC. + + Creates a set of partition tokens that can be used to execute a + read operation in parallel. Each of the returned partition + tokens can be used by + [StreamingRead][google.spanner.v1.Spanner.StreamingRead] to + specify a subset of the read result to read. The same session + and read-only transaction must be used by the + ``PartitionReadRequest`` used to create the partition tokens and + the ``ReadRequests`` that use the partition tokens. There are no + ordering guarantees on rows returned among the returned + partition tokens, or even within each individual + ``StreamingRead`` call issued with a ``partition_token``. + + Partition tokens become invalid when the session used to create + them is deleted, is idle for too long, begins a new transaction, + or becomes too old. When any of these happen, it isn't possible + to resume the read, and the whole operation must be restarted + from the beginning. + + Returns: + Callable[[~.PartitionReadRequest], + Awaitable[~.PartitionResponse]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "partition_read" not in self._stubs: + self._stubs["partition_read"] = self._logged_channel.unary_unary( + "/google.spanner.v1.Spanner/PartitionRead", + request_serializer=spanner.PartitionReadRequest.serialize, + response_deserializer=spanner.PartitionResponse.deserialize, + ) + return self._stubs["partition_read"] + + @property + def batch_write( + self, + ) -> Callable[[spanner.BatchWriteRequest], Awaitable[spanner.BatchWriteResponse]]: + r"""Return a callable for the batch write method over gRPC. + + Batches the supplied mutation groups in a collection of + efficient transactions. All mutations in a group are committed + atomically. 
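The partition-token workflow described above is wrapped by the handwritten ``BatchSnapshot`` helper; a usage sketch with assumed instance, database, and table names:

.. code-block:: python

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-db")

    # One read-only transaction produces partition tokens that independent
    # workers could consume; here a single process drains every batch.
    snapshot = database.batch_snapshot()
    for batch in snapshot.generate_query_batches("SELECT id, title FROM albums"):
        for row in snapshot.process_query_batch(batch):
            print(row)
    snapshot.close()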
However, mutations across groups can be committed + non-atomically in an unspecified order and thus, they must be + independent of each other. Partial failure is possible, that is, + some groups might have been committed successfully, while some + might have failed. The results of individual batches are + streamed into the response as the batches are applied. + + ``BatchWrite`` requests are not replay protected, meaning that + each mutation group can be applied more than once. Replays of + non-idempotent mutations can have undesirable effects. For + example, replays of an insert mutation can produce an already + exists error or if you use generated or commit timestamp-based + keys, it can result in additional rows being added to the + mutation's table. We recommend structuring your mutation groups + to be idempotent to avoid this issue. + + Returns: + Callable[[~.BatchWriteRequest], + Awaitable[~.BatchWriteResponse]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "batch_write" not in self._stubs: + self._stubs["batch_write"] = self._logged_channel.unary_stream( + "/google.spanner.v1.Spanner/BatchWrite", + request_serializer=spanner.BatchWriteRequest.serialize, + response_deserializer=spanner.BatchWriteResponse.deserialize, + ) + return self._stubs["batch_write"] + + def _prep_wrapped_messages(self, client_info): + """Precompute the wrapped methods, overriding the base class method to use async wrappers.""" + self._wrapped_methods = { + self.create_session: self._wrap_method( + self.create_session, + default_retry=retries.AsyncRetry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.batch_create_sessions: self._wrap_method( + self.batch_create_sessions, + default_retry=retries.AsyncRetry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=60.0, + ), + default_timeout=60.0, + client_info=client_info, + ), + self.get_session: self._wrap_method( + self.get_session, + default_retry=retries.AsyncRetry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.list_sessions: self._wrap_method( + self.list_sessions, + default_retry=retries.AsyncRetry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.delete_session: self._wrap_method( + self.delete_session, + default_retry=retries.AsyncRetry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.execute_sql: self._wrap_method( + self.execute_sql, + default_retry=retries.AsyncRetry( + initial=0.25, + 
maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.execute_streaming_sql: self._wrap_method( + self.execute_streaming_sql, + default_timeout=3600.0, + client_info=client_info, + ), + self.execute_batch_dml: self._wrap_method( + self.execute_batch_dml, + default_retry=retries.AsyncRetry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.read: self._wrap_method( + self.read, + default_retry=retries.AsyncRetry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.streaming_read: self._wrap_method( + self.streaming_read, + default_timeout=3600.0, + client_info=client_info, + ), + self.begin_transaction: self._wrap_method( + self.begin_transaction, + default_retry=retries.AsyncRetry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.commit: self._wrap_method( + self.commit, + default_retry=retries.AsyncRetry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=3600.0, + ), + default_timeout=3600.0, + client_info=client_info, + ), + self.rollback: self._wrap_method( + self.rollback, + default_retry=retries.AsyncRetry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.partition_query: self._wrap_method( + self.partition_query, + default_retry=retries.AsyncRetry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.partition_read: self._wrap_method( + self.partition_read, + default_retry=retries.AsyncRetry( + initial=0.25, + maximum=32.0, + multiplier=1.3, + predicate=retries.if_exception_type( + core_exceptions.ResourceExhausted, + core_exceptions.ServiceUnavailable, + ), + deadline=30.0, + ), + default_timeout=30.0, + client_info=client_info, + ), + self.batch_write: self._wrap_method( + self.batch_write, + default_timeout=3600.0, + client_info=client_info, + ), + } + + def _wrap_method(self, func, *args, **kwargs): + if self._wrap_with_kind: # pragma: NO COVER + kwargs["kind"] = self.kind + return gapic_v1.method_async.wrap_method(func, *args, **kwargs) + + def close(self): + return self._logged_channel.close() + + @property + def kind(self) -> str: + return "grpc_asyncio" + + +__all__ = ("SpannerGrpcAsyncIOTransport",) diff --git a/google/cloud/spanner_v1/services/spanner/transports/rest.py b/google/cloud/spanner_v1/services/spanner/transports/rest.py new file mode 100644 index 
0000000000..7b49a0d76a --- /dev/null +++ b/google/cloud/spanner_v1/services/spanner/transports/rest.py @@ -0,0 +1,3516 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +import logging +import json # type: ignore + +from google.auth.transport.requests import AuthorizedSession # type: ignore +from google.auth import credentials as ga_credentials # type: ignore +from google.api_core import exceptions as core_exceptions +from google.api_core import retry as retries +from google.api_core import rest_helpers +from google.api_core import rest_streaming +from google.api_core import gapic_v1 +import google.protobuf + +from google.protobuf import json_format + +from requests import __version__ as requests_version +import dataclasses +from typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Union +import warnings + + +from google.cloud.spanner_v1.types import commit_response +from google.cloud.spanner_v1.types import result_set +from google.cloud.spanner_v1.types import spanner +from google.cloud.spanner_v1.types import transaction +from google.cloud.spanner_v1.metrics.metrics_interceptor import MetricsInterceptor +from google.protobuf import empty_pb2 # type: ignore + + +from .rest_base import _BaseSpannerRestTransport +from .base import DEFAULT_CLIENT_INFO as BASE_DEFAULT_CLIENT_INFO + +try: + OptionalRetry = Union[retries.Retry, gapic_v1.method._MethodDefault, None] +except AttributeError: # pragma: NO COVER + OptionalRetry = Union[retries.Retry, object, None] # type: ignore + +try: + from google.api_core import client_logging # type: ignore + + CLIENT_LOGGING_SUPPORTED = True # pragma: NO COVER +except ImportError: # pragma: NO COVER + CLIENT_LOGGING_SUPPORTED = False + +_LOGGER = logging.getLogger(__name__) + +DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=BASE_DEFAULT_CLIENT_INFO.gapic_version, + grpc_version=None, + rest_version=f"requests@{requests_version}", +) + +if hasattr(DEFAULT_CLIENT_INFO, "protobuf_runtime_version"): # pragma: NO COVER + DEFAULT_CLIENT_INFO.protobuf_runtime_version = google.protobuf.__version__ + + +class SpannerRestInterceptor: + """Interceptor for Spanner. + + Interceptors are used to manipulate requests, request metadata, and responses + in arbitrary ways. + Example use cases include: + * Logging + * Verifying requests according to service or custom semantics + * Stripping extraneous information from responses + + These use cases and more can be enabled by injecting an + instance of a custom subclass when constructing the SpannerRestTransport. + + .. 
code-block:: python + class MyCustomSpannerInterceptor(SpannerRestInterceptor): + def pre_batch_create_sessions(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_batch_create_sessions(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_batch_write(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_batch_write(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_begin_transaction(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_begin_transaction(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_commit(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_commit(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_create_session(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_create_session(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_delete_session(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def pre_execute_batch_dml(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_execute_batch_dml(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_execute_sql(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_execute_sql(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_execute_streaming_sql(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_execute_streaming_sql(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_get_session(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_get_session(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_list_sessions(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_list_sessions(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_partition_query(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_partition_query(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_partition_read(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_partition_read(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_read(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_read(self, response): + logging.log(f"Received response: {response}") + return response + + def pre_rollback(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def pre_streaming_read(self, request, metadata): + logging.log(f"Received request: {request}") + return request, metadata + + def post_streaming_read(self, 
response): + logging.log(f"Received response: {response}") + return response + + transport = SpannerRestTransport(interceptor=MyCustomSpannerInterceptor()) + client = SpannerClient(transport=transport) + + + """ + + def pre_batch_create_sessions( + self, + request: spanner.BatchCreateSessionsRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner.BatchCreateSessionsRequest, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Pre-rpc interceptor for batch_create_sessions + + Override in a subclass to manipulate the request or metadata + before they are sent to the Spanner server. + """ + return request, metadata + + def post_batch_create_sessions( + self, response: spanner.BatchCreateSessionsResponse + ) -> spanner.BatchCreateSessionsResponse: + """Post-rpc interceptor for batch_create_sessions + + DEPRECATED. Please use the `post_batch_create_sessions_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the Spanner server but before + it is returned to user code. This `post_batch_create_sessions` interceptor runs + before the `post_batch_create_sessions_with_metadata` interceptor. + """ + return response + + def post_batch_create_sessions_with_metadata( + self, + response: spanner.BatchCreateSessionsResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner.BatchCreateSessionsResponse, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Post-rpc interceptor for batch_create_sessions + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the Spanner server but before it is returned to user code. + + We recommend only using this `post_batch_create_sessions_with_metadata` + interceptor in new development instead of the `post_batch_create_sessions` interceptor. + When both interceptors are used, this `post_batch_create_sessions_with_metadata` interceptor runs after the + `post_batch_create_sessions` interceptor. The (possibly modified) response returned by + `post_batch_create_sessions` will be passed to + `post_batch_create_sessions_with_metadata`. + """ + return response, metadata + + def pre_batch_write( + self, + request: spanner.BatchWriteRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[spanner.BatchWriteRequest, Sequence[Tuple[str, Union[str, bytes]]]]: + """Pre-rpc interceptor for batch_write + + Override in a subclass to manipulate the request or metadata + before they are sent to the Spanner server. + """ + return request, metadata + + def post_batch_write( + self, response: rest_streaming.ResponseIterator + ) -> rest_streaming.ResponseIterator: + """Post-rpc interceptor for batch_write + + DEPRECATED. Please use the `post_batch_write_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the Spanner server but before + it is returned to user code. This `post_batch_write` interceptor runs + before the `post_batch_write_with_metadata` interceptor. + """ + return response + + def post_batch_write_with_metadata( + self, + response: rest_streaming.ResponseIterator, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + rest_streaming.ResponseIterator, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Post-rpc interceptor for batch_write + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the Spanner server but before it is returned to user code. 
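Complementing the pattern in the class docstring above, a minimal sketch of wiring in a custom interceptor; the header name and value are illustrative only:

.. code-block:: python

    from google.cloud.spanner_v1.services.spanner import SpannerClient
    from google.cloud.spanner_v1.services.spanner.transports.rest import (
        SpannerRestInterceptor,
        SpannerRestTransport,
    )

    class TaggingInterceptor(SpannerRestInterceptor):
        def pre_execute_sql(self, request, metadata):
            # Attach an illustrative custom header to every ExecuteSql call.
            return request, list(metadata) + [("x-request-tag", "reporting-job")]

    transport = SpannerRestTransport(interceptor=TaggingInterceptor())
    client = SpannerClient(transport=transport)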
+ + We recommend only using this `post_batch_write_with_metadata` + interceptor in new development instead of the `post_batch_write` interceptor. + When both interceptors are used, this `post_batch_write_with_metadata` interceptor runs after the + `post_batch_write` interceptor. The (possibly modified) response returned by + `post_batch_write` will be passed to + `post_batch_write_with_metadata`. + """ + return response, metadata + + def pre_begin_transaction( + self, + request: spanner.BeginTransactionRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner.BeginTransactionRequest, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Pre-rpc interceptor for begin_transaction + + Override in a subclass to manipulate the request or metadata + before they are sent to the Spanner server. + """ + return request, metadata + + def post_begin_transaction( + self, response: transaction.Transaction + ) -> transaction.Transaction: + """Post-rpc interceptor for begin_transaction + + DEPRECATED. Please use the `post_begin_transaction_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the Spanner server but before + it is returned to user code. This `post_begin_transaction` interceptor runs + before the `post_begin_transaction_with_metadata` interceptor. + """ + return response + + def post_begin_transaction_with_metadata( + self, + response: transaction.Transaction, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[transaction.Transaction, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for begin_transaction + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the Spanner server but before it is returned to user code. + + We recommend only using this `post_begin_transaction_with_metadata` + interceptor in new development instead of the `post_begin_transaction` interceptor. + When both interceptors are used, this `post_begin_transaction_with_metadata` interceptor runs after the + `post_begin_transaction` interceptor. The (possibly modified) response returned by + `post_begin_transaction` will be passed to + `post_begin_transaction_with_metadata`. + """ + return response, metadata + + def pre_commit( + self, + request: spanner.CommitRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[spanner.CommitRequest, Sequence[Tuple[str, Union[str, bytes]]]]: + """Pre-rpc interceptor for commit + + Override in a subclass to manipulate the request or metadata + before they are sent to the Spanner server. + """ + return request, metadata + + def post_commit( + self, response: commit_response.CommitResponse + ) -> commit_response.CommitResponse: + """Post-rpc interceptor for commit + + DEPRECATED. Please use the `post_commit_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the Spanner server but before + it is returned to user code. This `post_commit` interceptor runs + before the `post_commit_with_metadata` interceptor. 
+ """ + return response + + def post_commit_with_metadata( + self, + response: commit_response.CommitResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[commit_response.CommitResponse, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for commit + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the Spanner server but before it is returned to user code. + + We recommend only using this `post_commit_with_metadata` + interceptor in new development instead of the `post_commit` interceptor. + When both interceptors are used, this `post_commit_with_metadata` interceptor runs after the + `post_commit` interceptor. The (possibly modified) response returned by + `post_commit` will be passed to + `post_commit_with_metadata`. + """ + return response, metadata + + def pre_create_session( + self, + request: spanner.CreateSessionRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[spanner.CreateSessionRequest, Sequence[Tuple[str, Union[str, bytes]]]]: + """Pre-rpc interceptor for create_session + + Override in a subclass to manipulate the request or metadata + before they are sent to the Spanner server. + """ + return request, metadata + + def post_create_session(self, response: spanner.Session) -> spanner.Session: + """Post-rpc interceptor for create_session + + DEPRECATED. Please use the `post_create_session_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the Spanner server but before + it is returned to user code. This `post_create_session` interceptor runs + before the `post_create_session_with_metadata` interceptor. + """ + return response + + def post_create_session_with_metadata( + self, + response: spanner.Session, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[spanner.Session, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for create_session + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the Spanner server but before it is returned to user code. + + We recommend only using this `post_create_session_with_metadata` + interceptor in new development instead of the `post_create_session` interceptor. + When both interceptors are used, this `post_create_session_with_metadata` interceptor runs after the + `post_create_session` interceptor. The (possibly modified) response returned by + `post_create_session` will be passed to + `post_create_session_with_metadata`. + """ + return response, metadata + + def pre_delete_session( + self, + request: spanner.DeleteSessionRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[spanner.DeleteSessionRequest, Sequence[Tuple[str, Union[str, bytes]]]]: + """Pre-rpc interceptor for delete_session + + Override in a subclass to manipulate the request or metadata + before they are sent to the Spanner server. + """ + return request, metadata + + def pre_execute_batch_dml( + self, + request: spanner.ExecuteBatchDmlRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[spanner.ExecuteBatchDmlRequest, Sequence[Tuple[str, Union[str, bytes]]]]: + """Pre-rpc interceptor for execute_batch_dml + + Override in a subclass to manipulate the request or metadata + before they are sent to the Spanner server. 
+ """ + return request, metadata + + def post_execute_batch_dml( + self, response: spanner.ExecuteBatchDmlResponse + ) -> spanner.ExecuteBatchDmlResponse: + """Post-rpc interceptor for execute_batch_dml + + DEPRECATED. Please use the `post_execute_batch_dml_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the Spanner server but before + it is returned to user code. This `post_execute_batch_dml` interceptor runs + before the `post_execute_batch_dml_with_metadata` interceptor. + """ + return response + + def post_execute_batch_dml_with_metadata( + self, + response: spanner.ExecuteBatchDmlResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + spanner.ExecuteBatchDmlResponse, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Post-rpc interceptor for execute_batch_dml + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the Spanner server but before it is returned to user code. + + We recommend only using this `post_execute_batch_dml_with_metadata` + interceptor in new development instead of the `post_execute_batch_dml` interceptor. + When both interceptors are used, this `post_execute_batch_dml_with_metadata` interceptor runs after the + `post_execute_batch_dml` interceptor. The (possibly modified) response returned by + `post_execute_batch_dml` will be passed to + `post_execute_batch_dml_with_metadata`. + """ + return response, metadata + + def pre_execute_sql( + self, + request: spanner.ExecuteSqlRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[spanner.ExecuteSqlRequest, Sequence[Tuple[str, Union[str, bytes]]]]: + """Pre-rpc interceptor for execute_sql + + Override in a subclass to manipulate the request or metadata + before they are sent to the Spanner server. + """ + return request, metadata + + def post_execute_sql(self, response: result_set.ResultSet) -> result_set.ResultSet: + """Post-rpc interceptor for execute_sql + + DEPRECATED. Please use the `post_execute_sql_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the Spanner server but before + it is returned to user code. This `post_execute_sql` interceptor runs + before the `post_execute_sql_with_metadata` interceptor. + """ + return response + + def post_execute_sql_with_metadata( + self, + response: result_set.ResultSet, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[result_set.ResultSet, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for execute_sql + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the Spanner server but before it is returned to user code. + + We recommend only using this `post_execute_sql_with_metadata` + interceptor in new development instead of the `post_execute_sql` interceptor. + When both interceptors are used, this `post_execute_sql_with_metadata` interceptor runs after the + `post_execute_sql` interceptor. The (possibly modified) response returned by + `post_execute_sql` will be passed to + `post_execute_sql_with_metadata`. 
+ """ + return response, metadata + + def pre_execute_streaming_sql( + self, + request: spanner.ExecuteSqlRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[spanner.ExecuteSqlRequest, Sequence[Tuple[str, Union[str, bytes]]]]: + """Pre-rpc interceptor for execute_streaming_sql + + Override in a subclass to manipulate the request or metadata + before they are sent to the Spanner server. + """ + return request, metadata + + def post_execute_streaming_sql( + self, response: rest_streaming.ResponseIterator + ) -> rest_streaming.ResponseIterator: + """Post-rpc interceptor for execute_streaming_sql + + DEPRECATED. Please use the `post_execute_streaming_sql_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the Spanner server but before + it is returned to user code. This `post_execute_streaming_sql` interceptor runs + before the `post_execute_streaming_sql_with_metadata` interceptor. + """ + return response + + def post_execute_streaming_sql_with_metadata( + self, + response: rest_streaming.ResponseIterator, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + rest_streaming.ResponseIterator, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Post-rpc interceptor for execute_streaming_sql + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the Spanner server but before it is returned to user code. + + We recommend only using this `post_execute_streaming_sql_with_metadata` + interceptor in new development instead of the `post_execute_streaming_sql` interceptor. + When both interceptors are used, this `post_execute_streaming_sql_with_metadata` interceptor runs after the + `post_execute_streaming_sql` interceptor. The (possibly modified) response returned by + `post_execute_streaming_sql` will be passed to + `post_execute_streaming_sql_with_metadata`. + """ + return response, metadata + + def pre_get_session( + self, + request: spanner.GetSessionRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[spanner.GetSessionRequest, Sequence[Tuple[str, Union[str, bytes]]]]: + """Pre-rpc interceptor for get_session + + Override in a subclass to manipulate the request or metadata + before they are sent to the Spanner server. + """ + return request, metadata + + def post_get_session(self, response: spanner.Session) -> spanner.Session: + """Post-rpc interceptor for get_session + + DEPRECATED. Please use the `post_get_session_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the Spanner server but before + it is returned to user code. This `post_get_session` interceptor runs + before the `post_get_session_with_metadata` interceptor. + """ + return response + + def post_get_session_with_metadata( + self, + response: spanner.Session, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[spanner.Session, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for get_session + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the Spanner server but before it is returned to user code. + + We recommend only using this `post_get_session_with_metadata` + interceptor in new development instead of the `post_get_session` interceptor. + When both interceptors are used, this `post_get_session_with_metadata` interceptor runs after the + `post_get_session` interceptor. 
The (possibly modified) response returned by + `post_get_session` will be passed to + `post_get_session_with_metadata`. + """ + return response, metadata + + def pre_list_sessions( + self, + request: spanner.ListSessionsRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[spanner.ListSessionsRequest, Sequence[Tuple[str, Union[str, bytes]]]]: + """Pre-rpc interceptor for list_sessions + + Override in a subclass to manipulate the request or metadata + before they are sent to the Spanner server. + """ + return request, metadata + + def post_list_sessions( + self, response: spanner.ListSessionsResponse + ) -> spanner.ListSessionsResponse: + """Post-rpc interceptor for list_sessions + + DEPRECATED. Please use the `post_list_sessions_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the Spanner server but before + it is returned to user code. This `post_list_sessions` interceptor runs + before the `post_list_sessions_with_metadata` interceptor. + """ + return response + + def post_list_sessions_with_metadata( + self, + response: spanner.ListSessionsResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[spanner.ListSessionsResponse, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for list_sessions + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the Spanner server but before it is returned to user code. + + We recommend only using this `post_list_sessions_with_metadata` + interceptor in new development instead of the `post_list_sessions` interceptor. + When both interceptors are used, this `post_list_sessions_with_metadata` interceptor runs after the + `post_list_sessions` interceptor. The (possibly modified) response returned by + `post_list_sessions` will be passed to + `post_list_sessions_with_metadata`. + """ + return response, metadata + + def pre_partition_query( + self, + request: spanner.PartitionQueryRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[spanner.PartitionQueryRequest, Sequence[Tuple[str, Union[str, bytes]]]]: + """Pre-rpc interceptor for partition_query + + Override in a subclass to manipulate the request or metadata + before they are sent to the Spanner server. + """ + return request, metadata + + def post_partition_query( + self, response: spanner.PartitionResponse + ) -> spanner.PartitionResponse: + """Post-rpc interceptor for partition_query + + DEPRECATED. Please use the `post_partition_query_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the Spanner server but before + it is returned to user code. This `post_partition_query` interceptor runs + before the `post_partition_query_with_metadata` interceptor. + """ + return response + + def post_partition_query_with_metadata( + self, + response: spanner.PartitionResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[spanner.PartitionResponse, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for partition_query + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the Spanner server but before it is returned to user code. + + We recommend only using this `post_partition_query_with_metadata` + interceptor in new development instead of the `post_partition_query` interceptor. 
+ When both interceptors are used, this `post_partition_query_with_metadata` interceptor runs after the + `post_partition_query` interceptor. The (possibly modified) response returned by + `post_partition_query` will be passed to + `post_partition_query_with_metadata`. + """ + return response, metadata + + def pre_partition_read( + self, + request: spanner.PartitionReadRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[spanner.PartitionReadRequest, Sequence[Tuple[str, Union[str, bytes]]]]: + """Pre-rpc interceptor for partition_read + + Override in a subclass to manipulate the request or metadata + before they are sent to the Spanner server. + """ + return request, metadata + + def post_partition_read( + self, response: spanner.PartitionResponse + ) -> spanner.PartitionResponse: + """Post-rpc interceptor for partition_read + + DEPRECATED. Please use the `post_partition_read_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the Spanner server but before + it is returned to user code. This `post_partition_read` interceptor runs + before the `post_partition_read_with_metadata` interceptor. + """ + return response + + def post_partition_read_with_metadata( + self, + response: spanner.PartitionResponse, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[spanner.PartitionResponse, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for partition_read + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the Spanner server but before it is returned to user code. + + We recommend only using this `post_partition_read_with_metadata` + interceptor in new development instead of the `post_partition_read` interceptor. + When both interceptors are used, this `post_partition_read_with_metadata` interceptor runs after the + `post_partition_read` interceptor. The (possibly modified) response returned by + `post_partition_read` will be passed to + `post_partition_read_with_metadata`. + """ + return response, metadata + + def pre_read( + self, + request: spanner.ReadRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[spanner.ReadRequest, Sequence[Tuple[str, Union[str, bytes]]]]: + """Pre-rpc interceptor for read + + Override in a subclass to manipulate the request or metadata + before they are sent to the Spanner server. + """ + return request, metadata + + def post_read(self, response: result_set.ResultSet) -> result_set.ResultSet: + """Post-rpc interceptor for read + + DEPRECATED. Please use the `post_read_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the Spanner server but before + it is returned to user code. This `post_read` interceptor runs + before the `post_read_with_metadata` interceptor. + """ + return response + + def post_read_with_metadata( + self, + response: result_set.ResultSet, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[result_set.ResultSet, Sequence[Tuple[str, Union[str, bytes]]]]: + """Post-rpc interceptor for read + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the Spanner server but before it is returned to user code. + + We recommend only using this `post_read_with_metadata` + interceptor in new development instead of the `post_read` interceptor. 
+ When both interceptors are used, this `post_read_with_metadata` interceptor runs after the + `post_read` interceptor. The (possibly modified) response returned by + `post_read` will be passed to + `post_read_with_metadata`. + """ + return response, metadata + + def pre_rollback( + self, + request: spanner.RollbackRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[spanner.RollbackRequest, Sequence[Tuple[str, Union[str, bytes]]]]: + """Pre-rpc interceptor for rollback + + Override in a subclass to manipulate the request or metadata + before they are sent to the Spanner server. + """ + return request, metadata + + def pre_streaming_read( + self, + request: spanner.ReadRequest, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[spanner.ReadRequest, Sequence[Tuple[str, Union[str, bytes]]]]: + """Pre-rpc interceptor for streaming_read + + Override in a subclass to manipulate the request or metadata + before they are sent to the Spanner server. + """ + return request, metadata + + def post_streaming_read( + self, response: rest_streaming.ResponseIterator + ) -> rest_streaming.ResponseIterator: + """Post-rpc interceptor for streaming_read + + DEPRECATED. Please use the `post_streaming_read_with_metadata` + interceptor instead. + + Override in a subclass to read or manipulate the response + after it is returned by the Spanner server but before + it is returned to user code. This `post_streaming_read` interceptor runs + before the `post_streaming_read_with_metadata` interceptor. + """ + return response + + def post_streaming_read_with_metadata( + self, + response: rest_streaming.ResponseIterator, + metadata: Sequence[Tuple[str, Union[str, bytes]]], + ) -> Tuple[ + rest_streaming.ResponseIterator, Sequence[Tuple[str, Union[str, bytes]]] + ]: + """Post-rpc interceptor for streaming_read + + Override in a subclass to read or manipulate the response or metadata after it + is returned by the Spanner server but before it is returned to user code. + + We recommend only using this `post_streaming_read_with_metadata` + interceptor in new development instead of the `post_streaming_read` interceptor. + When both interceptors are used, this `post_streaming_read_with_metadata` interceptor runs after the + `post_streaming_read` interceptor. The (possibly modified) response returned by + `post_streaming_read` will be passed to + `post_streaming_read_with_metadata`. + """ + return response, metadata + + +@dataclasses.dataclass +class SpannerRestStub: + _session: AuthorizedSession + _host: str + _interceptor: SpannerRestInterceptor + + +class SpannerRestTransport(_BaseSpannerRestTransport): + """REST backend synchronous transport for Spanner. + + Cloud Spanner API + + The Cloud Spanner API can be used to manage sessions and execute + transactions on data stored in Cloud Spanner databases. + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. 
+ + It sends JSON representations of protocol buffers over HTTP/1.1 + """ + + def __init__( + self, + *, + host: str = "spanner.googleapis.com", + credentials: Optional[ga_credentials.Credentials] = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + client_cert_source_for_mtls: Optional[Callable[[], Tuple[bytes, bytes]]] = None, + quota_project_id: Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + always_use_jwt_access: Optional[bool] = False, + url_scheme: str = "https", + interceptor: Optional[SpannerRestInterceptor] = None, + api_audience: Optional[str] = None, + metrics_interceptor: Optional[MetricsInterceptor] = None, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): + The hostname to connect to (default: 'spanner.googleapis.com'). + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is ignored if ``channel`` is provided. + scopes (Optional(Sequence[str])): A list of scopes. This argument is + ignored if ``channel`` is provided. + client_cert_source_for_mtls (Callable[[], Tuple[bytes, bytes]]): Client + certificate to configure mutual TLS HTTP channel. It is ignored + if ``channel`` is provided. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you are developing + your own client library. + always_use_jwt_access (Optional[bool]): Whether self signed JWT should + be used for service account credentials. + url_scheme: the protocol scheme for the API endpoint. Normally + "https", but for testing or local servers, + "http" can be specified. + """ + # Run the base constructor + # TODO(yon-mg): resolve other ctor params i.e. scopes, quota, etc. 
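As a usage note for the ``url_scheme`` parameter documented above, a sketch of pointing the REST transport at a plain-HTTP test server; the endpoint is illustrative, and anonymous credentials are assumed to be acceptable to that server:

.. code-block:: python

    from google.auth.credentials import AnonymousCredentials
    from google.cloud.spanner_v1.services.spanner import SpannerClient
    from google.cloud.spanner_v1.services.spanner.transports.rest import (
        SpannerRestTransport,
    )

    transport = SpannerRestTransport(
        host="localhost:9020",
        credentials=AnonymousCredentials(),
        url_scheme="http",
    )
    client = SpannerClient(transport=transport)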
+ # TODO: When custom host (api_endpoint) is set, `scopes` must *also* be set on the + # credentials object + super().__init__( + host=host, + credentials=credentials, + client_info=client_info, + always_use_jwt_access=always_use_jwt_access, + url_scheme=url_scheme, + api_audience=api_audience, + ) + self._session = AuthorizedSession( + self._credentials, default_host=self.DEFAULT_HOST + ) + if client_cert_source_for_mtls: + self._session.configure_mtls_channel(client_cert_source_for_mtls) + self._interceptor = interceptor or SpannerRestInterceptor() + self._prep_wrapped_messages(client_info) + + class _BatchCreateSessions( + _BaseSpannerRestTransport._BaseBatchCreateSessions, SpannerRestStub + ): + def __hash__(self): + return hash("SpannerRestTransport.BatchCreateSessions") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: spanner.BatchCreateSessionsRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner.BatchCreateSessionsResponse: + r"""Call the batch create sessions method over HTTP. + + Args: + request (~.spanner.BatchCreateSessionsRequest): + The request object. The request for + [BatchCreateSessions][google.spanner.v1.Spanner.BatchCreateSessions]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner.BatchCreateSessionsResponse: + The response for + [BatchCreateSessions][google.spanner.v1.Spanner.BatchCreateSessions]. 
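The stub defined here is normally reached through the public client surface; a short usage sketch with an assumed database name:

.. code-block:: python

    from google.cloud.spanner_v1.services.spanner import SpannerClient
    from google.cloud.spanner_v1.types import BatchCreateSessionsRequest

    client = SpannerClient()
    response = client.batch_create_sessions(BatchCreateSessionsRequest(
        database="projects/p/instances/i/databases/d",
        session_count=25,
    ))
    # The server may return fewer sessions than requested.
    print(len(response.session))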
+ + """ + + http_options = ( + _BaseSpannerRestTransport._BaseBatchCreateSessions._get_http_options() + ) + + request, metadata = self._interceptor.pre_batch_create_sessions( + request, metadata + ) + transcoded_request = _BaseSpannerRestTransport._BaseBatchCreateSessions._get_transcoded_request( + http_options, request + ) + + body = _BaseSpannerRestTransport._BaseBatchCreateSessions._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseSpannerRestTransport._BaseBatchCreateSessions._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner_v1.SpannerClient.BatchCreateSessions", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "BatchCreateSessions", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = SpannerRestTransport._BatchCreateSessions._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner.BatchCreateSessionsResponse() + pb_resp = spanner.BatchCreateSessionsResponse.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_batch_create_sessions(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_batch_create_sessions_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = spanner.BatchCreateSessionsResponse.to_json( + response + ) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner_v1.SpannerClient.batch_create_sessions", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "BatchCreateSessions", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _BatchWrite(_BaseSpannerRestTransport._BaseBatchWrite, SpannerRestStub): + def __hash__(self): + return hash("SpannerRestTransport.BatchWrite") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + stream=True, + ) + return response + + def __call__( + self, + request: spanner.BatchWriteRequest, + *, + retry: OptionalRetry = 
gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> rest_streaming.ResponseIterator: + r"""Call the batch write method over HTTP. + + Args: + request (~.spanner.BatchWriteRequest): + The request object. The request for + [BatchWrite][google.spanner.v1.Spanner.BatchWrite]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner.BatchWriteResponse: + The result of applying a batch of + mutations. + + """ + + http_options = _BaseSpannerRestTransport._BaseBatchWrite._get_http_options() + + request, metadata = self._interceptor.pre_batch_write(request, metadata) + transcoded_request = ( + _BaseSpannerRestTransport._BaseBatchWrite._get_transcoded_request( + http_options, request + ) + ) + + body = _BaseSpannerRestTransport._BaseBatchWrite._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = ( + _BaseSpannerRestTransport._BaseBatchWrite._get_query_params_json( + transcoded_request + ) + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner_v1.SpannerClient.BatchWrite", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "BatchWrite", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = SpannerRestTransport._BatchWrite._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
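The status check that follows is the uniform error path for every RPC in this transport: any response with status 400 or above is converted by ``core_exceptions.from_http_response`` into a typed ``GoogleAPICallError`` subclass. From the calling side that looks roughly like this, assuming ``request`` is a populated ``BatchWriteRequest``:

.. code-block:: python

    from google.api_core import exceptions as core_exceptions

    try:
        # batch_write is server-streaming: iterate BatchWriteResponse items.
        for response in client.batch_write(request=request):
            print(response.indexes, response.status)
    except core_exceptions.NotFound:
        ...  # e.g. an HTTP 404 for a missing session surfaces as NotFound
    except core_exceptions.GoogleAPICallError as exc:
        print(exc.code, exc.message)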
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = rest_streaming.ResponseIterator(response, spanner.BatchWriteResponse) + + resp = self._interceptor.post_batch_write(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_batch_write_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + http_response = { + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner_v1.SpannerClient.batch_write", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "BatchWrite", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _BeginTransaction( + _BaseSpannerRestTransport._BaseBeginTransaction, SpannerRestStub + ): + def __hash__(self): + return hash("SpannerRestTransport.BeginTransaction") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: spanner.BeginTransactionRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> transaction.Transaction: + r"""Call the begin transaction method over HTTP. + + Args: + request (~.spanner.BeginTransactionRequest): + The request object. The request for + [BeginTransaction][google.spanner.v1.Spanner.BeginTransaction]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.transaction.Transaction: + A transaction. 
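As the ``BeginTransaction`` wrapper below illustrates, every call is bracketed by ``pre_*`` and ``post_*`` hooks on the configured interceptor. A minimal custom interceptor might look like the following sketch; the header name is invented for illustration:

.. code-block:: python

    from google.cloud.spanner_v1.services.spanner.transports.rest import (
        SpannerRestInterceptor,
    )

    class HeaderInterceptor(SpannerRestInterceptor):
        def pre_begin_transaction(self, request, metadata):
            # Runs before transcoding; may rewrite the request or metadata.
            return request, list(metadata) + [("x-example-trace", "demo")]

        def post_begin_transaction(self, response):
            # Runs after the Transaction proto has been parsed.
            return response

An instance would be passed as the ``interceptor`` argument of the constructor shown earlier.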
+ """ + + http_options = ( + _BaseSpannerRestTransport._BaseBeginTransaction._get_http_options() + ) + + request, metadata = self._interceptor.pre_begin_transaction( + request, metadata + ) + transcoded_request = ( + _BaseSpannerRestTransport._BaseBeginTransaction._get_transcoded_request( + http_options, request + ) + ) + + body = ( + _BaseSpannerRestTransport._BaseBeginTransaction._get_request_body_json( + transcoded_request + ) + ) + + # Jsonify the query params + query_params = ( + _BaseSpannerRestTransport._BaseBeginTransaction._get_query_params_json( + transcoded_request + ) + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner_v1.SpannerClient.BeginTransaction", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "BeginTransaction", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = SpannerRestTransport._BeginTransaction._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = transaction.Transaction() + pb_resp = transaction.Transaction.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_begin_transaction(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_begin_transaction_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = transaction.Transaction.to_json(response) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner_v1.SpannerClient.begin_transaction", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "BeginTransaction", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _Commit(_BaseSpannerRestTransport._BaseCommit, SpannerRestStub): + def __hash__(self): + return hash("SpannerRestTransport.Commit") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: spanner.CommitRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, 
Union[str, bytes]]] = (), + ) -> commit_response.CommitResponse: + r"""Call the commit method over HTTP. + + Args: + request (~.spanner.CommitRequest): + The request object. The request for + [Commit][google.spanner.v1.Spanner.Commit]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.commit_response.CommitResponse: + The response for + [Commit][google.spanner.v1.Spanner.Commit]. + + """ + + http_options = _BaseSpannerRestTransport._BaseCommit._get_http_options() + + request, metadata = self._interceptor.pre_commit(request, metadata) + transcoded_request = ( + _BaseSpannerRestTransport._BaseCommit._get_transcoded_request( + http_options, request + ) + ) + + body = _BaseSpannerRestTransport._BaseCommit._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseSpannerRestTransport._BaseCommit._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner_v1.SpannerClient.Commit", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "Commit", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = SpannerRestTransport._Commit._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
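The decode sequence just below (``resp = ...``, ``pb_resp = ...pb(resp)``, ``json_format.Parse(...)``) leans on a proto-plus detail: ``Message.pb(instance)`` returns the wrapped protobuf object itself, so parsing JSON into it populates the wrapper in place. A self-contained illustration of the same round-trip with the ``Session`` message:

.. code-block:: python

    from google.protobuf import json_format

    from google.cloud.spanner_v1.types import spanner

    resp = spanner.Session()
    pb_resp = spanner.Session.pb(resp)  # the wrapped protobuf, not a copy

    payload = '{"name": "projects/p/instances/i/databases/d/sessions/s"}'
    json_format.Parse(payload, pb_resp, ignore_unknown_fields=True)

    print(resp.name)  # the wrapper reflects the parsed fields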
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = commit_response.CommitResponse() + pb_resp = commit_response.CommitResponse.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_commit(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_commit_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = commit_response.CommitResponse.to_json(response) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner_v1.SpannerClient.commit", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "Commit", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _CreateSession(_BaseSpannerRestTransport._BaseCreateSession, SpannerRestStub): + def __hash__(self): + return hash("SpannerRestTransport.CreateSession") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: spanner.CreateSessionRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner.Session: + r"""Call the create session method over HTTP. + + Args: + request (~.spanner.CreateSessionRequest): + The request object. The request for + [CreateSession][google.spanner.v1.Spanner.CreateSession]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner.Session: + A session in the Cloud Spanner API. 
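Reusing the ``client`` from the first sketch, the RPC documented above can also be invoked directly at the client surface, although most applications let the library's session pooling manage this. The database name is a placeholder:

.. code-block:: python

    database = "projects/my-project/instances/my-instance/databases/my-database"

    session = client.create_session(database=database)
    print(session.name)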
+ """ + + http_options = ( + _BaseSpannerRestTransport._BaseCreateSession._get_http_options() + ) + + request, metadata = self._interceptor.pre_create_session(request, metadata) + transcoded_request = ( + _BaseSpannerRestTransport._BaseCreateSession._get_transcoded_request( + http_options, request + ) + ) + + body = _BaseSpannerRestTransport._BaseCreateSession._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = ( + _BaseSpannerRestTransport._BaseCreateSession._get_query_params_json( + transcoded_request + ) + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner_v1.SpannerClient.CreateSession", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "CreateSession", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = SpannerRestTransport._CreateSession._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner.Session() + pb_resp = spanner.Session.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_create_session(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_create_session_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = spanner.Session.to_json(response) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner_v1.SpannerClient.create_session", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "CreateSession", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _DeleteSession(_BaseSpannerRestTransport._BaseDeleteSession, SpannerRestStub): + def __hash__(self): + return hash("SpannerRestTransport.DeleteSession") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: spanner.DeleteSessionRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ): + r"""Call the delete session 
method over HTTP. + + Args: + request (~.spanner.DeleteSessionRequest): + The request object. The request for + [DeleteSession][google.spanner.v1.Spanner.DeleteSession]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + + http_options = ( + _BaseSpannerRestTransport._BaseDeleteSession._get_http_options() + ) + + request, metadata = self._interceptor.pre_delete_session(request, metadata) + transcoded_request = ( + _BaseSpannerRestTransport._BaseDeleteSession._get_transcoded_request( + http_options, request + ) + ) + + # Jsonify the query params + query_params = ( + _BaseSpannerRestTransport._BaseDeleteSession._get_query_params_json( + transcoded_request + ) + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner_v1.SpannerClient.DeleteSession", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "DeleteSession", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = SpannerRestTransport._DeleteSession._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + class _ExecuteBatchDml( + _BaseSpannerRestTransport._BaseExecuteBatchDml, SpannerRestStub + ): + def __hash__(self): + return hash("SpannerRestTransport.ExecuteBatchDml") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: spanner.ExecuteBatchDmlRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner.ExecuteBatchDmlResponse: + r"""Call the execute batch dml method over HTTP. + + Args: + request (~.spanner.ExecuteBatchDmlRequest): + The request object. The request for + [ExecuteBatchDml][google.spanner.v1.Spanner.ExecuteBatchDml]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. 
+ metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner.ExecuteBatchDmlResponse: + The response for + [ExecuteBatchDml][google.spanner.v1.Spanner.ExecuteBatchDml]. + Contains a list of + [ResultSet][google.spanner.v1.ResultSet] messages, one + for each DML statement that has successfully executed, + in the same order as the statements in the request. If a + statement fails, the status in the response body + identifies the cause of the failure. + + To check for DML statements that failed, use the + following approach: + + 1. Check the status in the response message. The + [google.rpc.Code][google.rpc.Code] enum value ``OK`` + indicates that all statements were executed + successfully. + 2. If the status was not ``OK``, check the number of + result sets in the response. If the response contains + ``N`` [ResultSet][google.spanner.v1.ResultSet] + messages, then statement ``N+1`` in the request + failed. + + Example 1: + + - Request: 5 DML statements, all executed successfully. + - Response: 5 [ResultSet][google.spanner.v1.ResultSet] + messages, with the status ``OK``. + + Example 2: + + - Request: 5 DML statements. The third statement has a + syntax error. + - Response: 2 [ResultSet][google.spanner.v1.ResultSet] + messages, and a syntax error (``INVALID_ARGUMENT``) + status. The number of + [ResultSet][google.spanner.v1.ResultSet] messages + indicates that the third statement failed, and the + fourth and fifth statements were not executed. + + """ + + http_options = ( + _BaseSpannerRestTransport._BaseExecuteBatchDml._get_http_options() + ) + + request, metadata = self._interceptor.pre_execute_batch_dml( + request, metadata + ) + transcoded_request = ( + _BaseSpannerRestTransport._BaseExecuteBatchDml._get_transcoded_request( + http_options, request + ) + ) + + body = ( + _BaseSpannerRestTransport._BaseExecuteBatchDml._get_request_body_json( + transcoded_request + ) + ) + + # Jsonify the query params + query_params = ( + _BaseSpannerRestTransport._BaseExecuteBatchDml._get_query_params_json( + transcoded_request + ) + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner_v1.SpannerClient.ExecuteBatchDml", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "ExecuteBatchDml", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = SpannerRestTransport._ExecuteBatchDml._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
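The ``ExecuteBatchDml`` docstring above spells out a partial-failure contract; translated into code, the recommended check looks roughly like this, with ``request`` a populated ``ExecuteBatchDmlRequest``:

.. code-block:: python

    from google.rpc import code_pb2

    response = client.execute_batch_dml(request=request)

    if response.status.code == code_pb2.OK:
        print("all statements executed")
    else:
        # With N result sets returned, statement N+1 in the request failed.
        failed_index = len(response.result_sets)  # zero-based index of the failure
        print(f"statement {failed_index} failed: {response.status.message}")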
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner.ExecuteBatchDmlResponse() + pb_resp = spanner.ExecuteBatchDmlResponse.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_execute_batch_dml(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_execute_batch_dml_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = spanner.ExecuteBatchDmlResponse.to_json(response) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner_v1.SpannerClient.execute_batch_dml", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "ExecuteBatchDml", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _ExecuteSql(_BaseSpannerRestTransport._BaseExecuteSql, SpannerRestStub): + def __hash__(self): + return hash("SpannerRestTransport.ExecuteSql") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: spanner.ExecuteSqlRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> result_set.ResultSet: + r"""Call the execute sql method over HTTP. + + Args: + request (~.spanner.ExecuteSqlRequest): + The request object. The request for + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] and + [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.result_set.ResultSet: + Results from [Read][google.spanner.v1.Spanner.Read] or + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. 
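For orientation, a minimal non-streaming ``ExecuteSql`` call looks like the following; the session name is a placeholder:

.. code-block:: python

    from google.cloud.spanner_v1.types import spanner

    request = spanner.ExecuteSqlRequest(
        session="projects/p/instances/i/databases/d/sessions/s",
        sql="SELECT 1",
    )
    result = client.execute_sql(request=request)
    print(result.rows)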
+ + """ + + http_options = _BaseSpannerRestTransport._BaseExecuteSql._get_http_options() + + request, metadata = self._interceptor.pre_execute_sql(request, metadata) + transcoded_request = ( + _BaseSpannerRestTransport._BaseExecuteSql._get_transcoded_request( + http_options, request + ) + ) + + body = _BaseSpannerRestTransport._BaseExecuteSql._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = ( + _BaseSpannerRestTransport._BaseExecuteSql._get_query_params_json( + transcoded_request + ) + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner_v1.SpannerClient.ExecuteSql", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "ExecuteSql", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = SpannerRestTransport._ExecuteSql._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = result_set.ResultSet() + pb_resp = result_set.ResultSet.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_execute_sql(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_execute_sql_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = result_set.ResultSet.to_json(response) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner_v1.SpannerClient.execute_sql", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "ExecuteSql", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _ExecuteStreamingSql( + _BaseSpannerRestTransport._BaseExecuteStreamingSql, SpannerRestStub + ): + def __hash__(self): + return hash("SpannerRestTransport.ExecuteStreamingSql") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + stream=True, + ) + return response + + def __call__( + self, + request: spanner.ExecuteSqlRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> 
rest_streaming.ResponseIterator: + r"""Call the execute streaming sql method over HTTP. + + Args: + request (~.spanner.ExecuteSqlRequest): + The request object. The request for + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] and + [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.result_set.PartialResultSet: + Partial results from a streaming read + or SQL query. Streaming reads and SQL + queries better tolerate large result + sets, large rows, and large values, but + are a little trickier to consume. + + """ + + http_options = ( + _BaseSpannerRestTransport._BaseExecuteStreamingSql._get_http_options() + ) + + request, metadata = self._interceptor.pre_execute_streaming_sql( + request, metadata + ) + transcoded_request = _BaseSpannerRestTransport._BaseExecuteStreamingSql._get_transcoded_request( + http_options, request + ) + + body = _BaseSpannerRestTransport._BaseExecuteStreamingSql._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseSpannerRestTransport._BaseExecuteStreamingSql._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner_v1.SpannerClient.ExecuteStreamingSql", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "ExecuteStreamingSql", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = SpannerRestTransport._ExecuteStreamingSql._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
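Where ``ExecuteSql`` buffers a complete ``ResultSet``, the streaming variant below hands the raw HTTP stream to ``rest_streaming.ResponseIterator``, so the client yields ``PartialResultSet`` chunks as they arrive. Consuming it might look like this, with the same placeholder ``request``:

.. code-block:: python

    for chunk in client.execute_streaming_sql(request=request):
        # Each chunk is a result_set.PartialResultSet; a value may be
        # split across chunks (see chunked_value) and needs stitching.
        print(len(chunk.values))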
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = rest_streaming.ResponseIterator( + response, result_set.PartialResultSet + ) + + resp = self._interceptor.post_execute_streaming_sql(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_execute_streaming_sql_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + http_response = { + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner_v1.SpannerClient.execute_streaming_sql", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "ExecuteStreamingSql", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _GetSession(_BaseSpannerRestTransport._BaseGetSession, SpannerRestStub): + def __hash__(self): + return hash("SpannerRestTransport.GetSession") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: spanner.GetSessionRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner.Session: + r"""Call the get session method over HTTP. + + Args: + request (~.spanner.GetSessionRequest): + The request object. The request for + [GetSession][google.spanner.v1.Spanner.GetSession]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner.Session: + A session in the Cloud Spanner API. 
+ """ + + http_options = _BaseSpannerRestTransport._BaseGetSession._get_http_options() + + request, metadata = self._interceptor.pre_get_session(request, metadata) + transcoded_request = ( + _BaseSpannerRestTransport._BaseGetSession._get_transcoded_request( + http_options, request + ) + ) + + # Jsonify the query params + query_params = ( + _BaseSpannerRestTransport._BaseGetSession._get_query_params_json( + transcoded_request + ) + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner_v1.SpannerClient.GetSession", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "GetSession", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = SpannerRestTransport._GetSession._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner.Session() + pb_resp = spanner.Session.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_get_session(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_get_session_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = spanner.Session.to_json(response) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner_v1.SpannerClient.get_session", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "GetSession", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _ListSessions(_BaseSpannerRestTransport._BaseListSessions, SpannerRestStub): + def __hash__(self): + return hash("SpannerRestTransport.ListSessions") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + ) + return response + + def __call__( + self, + request: spanner.ListSessionsRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner.ListSessionsResponse: + r"""Call the list sessions method over HTTP. + + Args: + request (~.spanner.ListSessionsRequest): + The request object. 
The request for + [ListSessions][google.spanner.v1.Spanner.ListSessions]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner.ListSessionsResponse: + The response for + [ListSessions][google.spanner.v1.Spanner.ListSessions]. + + """ + + http_options = ( + _BaseSpannerRestTransport._BaseListSessions._get_http_options() + ) + + request, metadata = self._interceptor.pre_list_sessions(request, metadata) + transcoded_request = ( + _BaseSpannerRestTransport._BaseListSessions._get_transcoded_request( + http_options, request + ) + ) + + # Jsonify the query params + query_params = ( + _BaseSpannerRestTransport._BaseListSessions._get_query_params_json( + transcoded_request + ) + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner_v1.SpannerClient.ListSessions", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "ListSessions", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = SpannerRestTransport._ListSessions._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
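At the client surface, the ``ListSessions`` response parsed below is wrapped in a pager that follows ``next_page_token`` automatically. A sketch with a placeholder database name:

.. code-block:: python

    for session in client.list_sessions(
        database="projects/p/instances/i/databases/d"
    ):
        print(session.name)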
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner.ListSessionsResponse() + pb_resp = spanner.ListSessionsResponse.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_list_sessions(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_list_sessions_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = spanner.ListSessionsResponse.to_json(response) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner_v1.SpannerClient.list_sessions", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "ListSessions", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _PartitionQuery( + _BaseSpannerRestTransport._BasePartitionQuery, SpannerRestStub + ): + def __hash__(self): + return hash("SpannerRestTransport.PartitionQuery") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: spanner.PartitionQueryRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner.PartitionResponse: + r"""Call the partition query method over HTTP. + + Args: + request (~.spanner.PartitionQueryRequest): + The request object. The request for + [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. 
+ + Returns: + ~.spanner.PartitionResponse: + The response for + [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] + or + [PartitionRead][google.spanner.v1.Spanner.PartitionRead] + + """ + + http_options = ( + _BaseSpannerRestTransport._BasePartitionQuery._get_http_options() + ) + + request, metadata = self._interceptor.pre_partition_query(request, metadata) + transcoded_request = ( + _BaseSpannerRestTransport._BasePartitionQuery._get_transcoded_request( + http_options, request + ) + ) + + body = _BaseSpannerRestTransport._BasePartitionQuery._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = ( + _BaseSpannerRestTransport._BasePartitionQuery._get_query_params_json( + transcoded_request + ) + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner_v1.SpannerClient.PartitionQuery", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "PartitionQuery", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = SpannerRestTransport._PartitionQuery._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner.PartitionResponse() + pb_resp = spanner.PartitionResponse.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_partition_query(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_partition_query_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = spanner.PartitionResponse.to_json(response) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner_v1.SpannerClient.partition_query", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "PartitionQuery", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _PartitionRead(_BaseSpannerRestTransport._BasePartitionRead, SpannerRestStub): + def __hash__(self): + return hash("SpannerRestTransport.PartitionRead") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + 
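``PartitionQuery`` and ``PartitionRead`` return only partition tokens; each token is then fed to a streaming call, often from a separate worker. A rough sketch, assuming ``session`` is an existing session name and ``transaction`` an existing read-only transaction selector:

.. code-block:: python

    from google.cloud.spanner_v1.types import spanner

    partition_request = spanner.PartitionQueryRequest(
        session=session,          # assumed existing session name
        transaction=transaction,  # assumed read-only transaction selector
        sql="SELECT * FROM Albums",
    )
    partitions = client.partition_query(request=partition_request)

    for partition in partitions.partitions:
        sql_request = spanner.ExecuteSqlRequest(
            session=session,
            transaction=transaction,
            sql="SELECT * FROM Albums",
            partition_token=partition.partition_token,
        )
        for chunk in client.execute_streaming_sql(request=sql_request):
            ...  # process PartialResultSet chunks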
def __call__( + self, + request: spanner.PartitionReadRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> spanner.PartitionResponse: + r"""Call the partition read method over HTTP. + + Args: + request (~.spanner.PartitionReadRequest): + The request object. The request for + [PartitionRead][google.spanner.v1.Spanner.PartitionRead] + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.spanner.PartitionResponse: + The response for + [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] + or + [PartitionRead][google.spanner.v1.Spanner.PartitionRead] + + """ + + http_options = ( + _BaseSpannerRestTransport._BasePartitionRead._get_http_options() + ) + + request, metadata = self._interceptor.pre_partition_read(request, metadata) + transcoded_request = ( + _BaseSpannerRestTransport._BasePartitionRead._get_transcoded_request( + http_options, request + ) + ) + + body = _BaseSpannerRestTransport._BasePartitionRead._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = ( + _BaseSpannerRestTransport._BasePartitionRead._get_query_params_json( + transcoded_request + ) + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner_v1.SpannerClient.PartitionRead", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "PartitionRead", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = SpannerRestTransport._PartitionRead._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. 
+ if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = spanner.PartitionResponse() + pb_resp = spanner.PartitionResponse.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_partition_read(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_partition_read_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = spanner.PartitionResponse.to_json(response) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner_v1.SpannerClient.partition_read", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "PartitionRead", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _Read(_BaseSpannerRestTransport._BaseRead, SpannerRestStub): + def __hash__(self): + return hash("SpannerRestTransport.Read") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: spanner.ReadRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> result_set.ResultSet: + r"""Call the read method over HTTP. + + Args: + request (~.spanner.ReadRequest): + The request object. The request for [Read][google.spanner.v1.Spanner.Read] + and + [StreamingRead][google.spanner.v1.Spanner.StreamingRead]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.result_set.ResultSet: + Results from [Read][google.spanner.v1.Spanner.Read] or + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. 
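``Read`` mirrors ``ExecuteSql`` but addresses rows by table, columns, and a key set instead of SQL. A tentative sketch; the session name is a placeholder, and it assumes the proto-plus field ``all`` on ``KeySet`` for selecting every row:

.. code-block:: python

    from google.cloud.spanner_v1.types import keys, spanner

    request = spanner.ReadRequest(
        session="projects/p/instances/i/databases/d/sessions/s",
        table="Albums",
        columns=["AlbumId", "AlbumTitle"],
        key_set=keys.KeySet(all=True),  # assumed field name; selects all rows
    )
    result = client.read(request=request)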
+ + """ + + http_options = _BaseSpannerRestTransport._BaseRead._get_http_options() + + request, metadata = self._interceptor.pre_read(request, metadata) + transcoded_request = ( + _BaseSpannerRestTransport._BaseRead._get_transcoded_request( + http_options, request + ) + ) + + body = _BaseSpannerRestTransport._BaseRead._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = _BaseSpannerRestTransport._BaseRead._get_query_params_json( + transcoded_request + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner_v1.SpannerClient.Read", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "Read", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = SpannerRestTransport._Read._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = result_set.ResultSet() + pb_resp = result_set.ResultSet.pb(resp) + + json_format.Parse(response.content, pb_resp, ignore_unknown_fields=True) + + resp = self._interceptor.post_read(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_read_with_metadata(resp, response_metadata) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + try: + response_payload = result_set.ResultSet.to_json(response) + except: + response_payload = None + http_response = { + "payload": response_payload, + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner_v1.SpannerClient.read", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "Read", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + class _Rollback(_BaseSpannerRestTransport._BaseRollback, SpannerRestStub): + def __hash__(self): + return hash("SpannerRestTransport.Rollback") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + ) + return response + + def __call__( + self, + request: spanner.RollbackRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ): + r"""Call the rollback method over HTTP. + + Args: + request (~.spanner.RollbackRequest): + The request object. 
The request for + [Rollback][google.spanner.v1.Spanner.Rollback]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + """ + + http_options = _BaseSpannerRestTransport._BaseRollback._get_http_options() + + request, metadata = self._interceptor.pre_rollback(request, metadata) + transcoded_request = ( + _BaseSpannerRestTransport._BaseRollback._get_transcoded_request( + http_options, request + ) + ) + + body = _BaseSpannerRestTransport._BaseRollback._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = ( + _BaseSpannerRestTransport._BaseRollback._get_query_params_json( + transcoded_request + ) + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = json_format.MessageToJson(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner_v1.SpannerClient.Rollback", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "Rollback", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = SpannerRestTransport._Rollback._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + class _StreamingRead(_BaseSpannerRestTransport._BaseStreamingRead, SpannerRestStub): + def __hash__(self): + return hash("SpannerRestTransport.StreamingRead") + + @staticmethod + def _get_response( + host, + metadata, + query_params, + session, + timeout, + transcoded_request, + body=None, + ): + uri = transcoded_request["uri"] + method = transcoded_request["method"] + headers = dict(metadata) + headers["Content-Type"] = "application/json" + response = getattr(session, method)( + "{host}{uri}".format(host=host, uri=uri), + timeout=timeout, + headers=headers, + params=rest_helpers.flatten_query_params(query_params, strict=True), + data=body, + stream=True, + ) + return response + + def __call__( + self, + request: spanner.ReadRequest, + *, + retry: OptionalRetry = gapic_v1.method.DEFAULT, + timeout: Optional[float] = None, + metadata: Sequence[Tuple[str, Union[str, bytes]]] = (), + ) -> rest_streaming.ResponseIterator: + r"""Call the streaming read method over HTTP. + + Args: + request (~.spanner.ReadRequest): + The request object. The request for [Read][google.spanner.v1.Spanner.Read] + and + [StreamingRead][google.spanner.v1.Spanner.StreamingRead]. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, Union[str, bytes]]]): Key/value pairs which should be + sent along with the request as metadata. 
Normally, each value must be of type `str`, + but for metadata keys ending with the suffix `-bin`, the corresponding values must + be of type `bytes`. + + Returns: + ~.result_set.PartialResultSet: + Partial results from a streaming read + or SQL query. Streaming reads and SQL + queries better tolerate large result + sets, large rows, and large values, but + are a little trickier to consume. + + """ + + http_options = ( + _BaseSpannerRestTransport._BaseStreamingRead._get_http_options() + ) + + request, metadata = self._interceptor.pre_streaming_read(request, metadata) + transcoded_request = ( + _BaseSpannerRestTransport._BaseStreamingRead._get_transcoded_request( + http_options, request + ) + ) + + body = _BaseSpannerRestTransport._BaseStreamingRead._get_request_body_json( + transcoded_request + ) + + # Jsonify the query params + query_params = ( + _BaseSpannerRestTransport._BaseStreamingRead._get_query_params_json( + transcoded_request + ) + ) + + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + request_url = "{host}{uri}".format( + host=self._host, uri=transcoded_request["uri"] + ) + method = transcoded_request["method"] + try: + request_payload = type(request).to_json(request) + except: + request_payload = None + http_request = { + "payload": request_payload, + "requestMethod": method, + "requestUrl": request_url, + "headers": dict(metadata), + } + _LOGGER.debug( + f"Sending request for google.spanner_v1.SpannerClient.StreamingRead", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "StreamingRead", + "httpRequest": http_request, + "metadata": http_request["headers"], + }, + ) + + # Send the request + response = SpannerRestTransport._StreamingRead._get_response( + self._host, + metadata, + query_params, + self._session, + timeout, + transcoded_request, + body, + ) + + # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception + # subclass. + if response.status_code >= 400: + raise core_exceptions.from_http_response(response) + + # Return the response + resp = rest_streaming.ResponseIterator( + response, result_set.PartialResultSet + ) + + resp = self._interceptor.post_streaming_read(resp) + response_metadata = [(k, str(v)) for k, v in response.headers.items()] + resp, _ = self._interceptor.post_streaming_read_with_metadata( + resp, response_metadata + ) + if CLIENT_LOGGING_SUPPORTED and _LOGGER.isEnabledFor( + logging.DEBUG + ): # pragma: NO COVER + http_response = { + "headers": dict(response.headers), + "status": response.status_code, + } + _LOGGER.debug( + "Received response for google.spanner_v1.SpannerClient.streaming_read", + extra={ + "serviceName": "google.spanner.v1.Spanner", + "rpcName": "StreamingRead", + "metadata": http_response["headers"], + "httpResponse": http_response, + }, + ) + return resp + + @property + def batch_create_sessions( + self, + ) -> Callable[ + [spanner.BatchCreateSessionsRequest], spanner.BatchCreateSessionsResponse + ]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._BatchCreateSessions(self._session, self._host, self._interceptor) # type: ignore + + @property + def batch_write( + self, + ) -> Callable[[spanner.BatchWriteRequest], spanner.BatchWriteResponse]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. 
+ # In C++ this would require a dynamic_cast + return self._BatchWrite(self._session, self._host, self._interceptor) # type: ignore + + @property + def begin_transaction( + self, + ) -> Callable[[spanner.BeginTransactionRequest], transaction.Transaction]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._BeginTransaction(self._session, self._host, self._interceptor) # type: ignore + + @property + def commit( + self, + ) -> Callable[[spanner.CommitRequest], commit_response.CommitResponse]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._Commit(self._session, self._host, self._interceptor) # type: ignore + + @property + def create_session( + self, + ) -> Callable[[spanner.CreateSessionRequest], spanner.Session]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._CreateSession(self._session, self._host, self._interceptor) # type: ignore + + @property + def delete_session( + self, + ) -> Callable[[spanner.DeleteSessionRequest], empty_pb2.Empty]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._DeleteSession(self._session, self._host, self._interceptor) # type: ignore + + @property + def execute_batch_dml( + self, + ) -> Callable[[spanner.ExecuteBatchDmlRequest], spanner.ExecuteBatchDmlResponse]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._ExecuteBatchDml(self._session, self._host, self._interceptor) # type: ignore + + @property + def execute_sql( + self, + ) -> Callable[[spanner.ExecuteSqlRequest], result_set.ResultSet]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._ExecuteSql(self._session, self._host, self._interceptor) # type: ignore + + @property + def execute_streaming_sql( + self, + ) -> Callable[[spanner.ExecuteSqlRequest], result_set.PartialResultSet]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._ExecuteStreamingSql(self._session, self._host, self._interceptor) # type: ignore + + @property + def get_session(self) -> Callable[[spanner.GetSessionRequest], spanner.Session]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._GetSession(self._session, self._host, self._interceptor) # type: ignore + + @property + def list_sessions( + self, + ) -> Callable[[spanner.ListSessionsRequest], spanner.ListSessionsResponse]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._ListSessions(self._session, self._host, self._interceptor) # type: ignore + + @property + def partition_query( + self, + ) -> Callable[[spanner.PartitionQueryRequest], spanner.PartitionResponse]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. 
+ # In C++ this would require a dynamic_cast + return self._PartitionQuery(self._session, self._host, self._interceptor) # type: ignore + + @property + def partition_read( + self, + ) -> Callable[[spanner.PartitionReadRequest], spanner.PartitionResponse]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._PartitionRead(self._session, self._host, self._interceptor) # type: ignore + + @property + def read(self) -> Callable[[spanner.ReadRequest], result_set.ResultSet]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._Read(self._session, self._host, self._interceptor) # type: ignore + + @property + def rollback(self) -> Callable[[spanner.RollbackRequest], empty_pb2.Empty]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._Rollback(self._session, self._host, self._interceptor) # type: ignore + + @property + def streaming_read( + self, + ) -> Callable[[spanner.ReadRequest], result_set.PartialResultSet]: + # The return type is fine, but mypy isn't sophisticated enough to determine what's going on here. + # In C++ this would require a dynamic_cast + return self._StreamingRead(self._session, self._host, self._interceptor) # type: ignore + + @property + def kind(self) -> str: + return "rest" + + def close(self): + self._session.close() + + +__all__ = ("SpannerRestTransport",) diff --git a/google/cloud/spanner_v1/services/spanner/transports/rest_base.py b/google/cloud/spanner_v1/services/spanner/transports/rest_base.py new file mode 100644 index 0000000000..e93f5d4b58 --- /dev/null +++ b/google/cloud/spanner_v1/services/spanner/transports/rest_base.py @@ -0,0 +1,981 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +import json # type: ignore +from google.api_core import path_template +from google.api_core import gapic_v1 + +from google.protobuf import json_format +from .base import SpannerTransport, DEFAULT_CLIENT_INFO + +import re +from typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Union + + +from google.cloud.spanner_v1.types import commit_response +from google.cloud.spanner_v1.types import result_set +from google.cloud.spanner_v1.types import spanner +from google.cloud.spanner_v1.types import transaction +from google.cloud.spanner_v1.metrics.metrics_interceptor import MetricsInterceptor +from google.protobuf import empty_pb2 # type: ignore + + +class _BaseSpannerRestTransport(SpannerTransport): + """Base REST backend transport for Spanner. + + Note: This class is not meant to be used directly. Use its sync and + async sub-classes instead. + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. 
+
+    It sends JSON representations of protocol buffers over HTTP/1.1
+    """
+
+    def __init__(
+        self,
+        *,
+        host: str = "spanner.googleapis.com",
+        credentials: Optional[Any] = None,
+        client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO,
+        always_use_jwt_access: Optional[bool] = False,
+        url_scheme: str = "https",
+        api_audience: Optional[str] = None,
+        metrics_interceptor: Optional[MetricsInterceptor] = None,
+    ) -> None:
+        """Instantiate the transport.
+        Args:
+            host (Optional[str]):
+                 The hostname to connect to (default: 'spanner.googleapis.com').
+            credentials (Optional[Any]): The
+                authorization credentials to attach to requests. These
+                credentials identify the application to the service; if none
+                are specified, the client will attempt to ascertain the
+                credentials from the environment.
+            client_info (google.api_core.gapic_v1.client_info.ClientInfo):
+                The client info used to send a user-agent string along with
+                API requests. If ``None``, then default info will be used.
+                Generally, you only need to set this if you are developing
+                your own client library.
+            always_use_jwt_access (Optional[bool]): Whether self signed JWT should
+                be used for service account credentials.
+            url_scheme: the protocol scheme for the API endpoint.  Normally
+                "https", but for testing or local servers,
+                "http" can be specified.
+        """
+        # Run the base constructor
+        maybe_url_match = re.match("^(?P<scheme>http(?:s)?://)?(?P<host>.*)$", host)
+        if maybe_url_match is None:
+            raise ValueError(
+                f"Unexpected hostname structure: {host}"
+            )  # pragma: NO COVER
+
+        url_match_items = maybe_url_match.groupdict()
+
+        host = f"{url_scheme}://{host}" if not url_match_items["scheme"] else host
+
+        super().__init__(
+            host=host,
+            credentials=credentials,
+            client_info=client_info,
+            always_use_jwt_access=always_use_jwt_access,
+            api_audience=api_audience,
+        )
+
+    class _BaseBatchCreateSessions:
+        def __hash__(self):  # pragma: NO COVER
+            return NotImplementedError("__hash__ must be implemented.")
+
+        __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {}
+
+        @classmethod
+        def _get_unset_required_fields(cls, message_dict):
+            return {
+                k: v
+                for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items()
+                if k not in message_dict
+            }
+
+        @staticmethod
+        def _get_http_options():
+            http_options: List[Dict[str, str]] = [
+                {
+                    "method": "post",
+                    "uri": "/v1/{database=projects/*/instances/*/databases/*}/sessions:batchCreate",
+                    "body": "*",
+                },
+            ]
+            return http_options
+
+        @staticmethod
+        def _get_transcoded_request(http_options, request):
+            pb_request = spanner.BatchCreateSessionsRequest.pb(request)
+            transcoded_request = path_template.transcode(http_options, pb_request)
+            return transcoded_request
+
+        @staticmethod
+        def _get_request_body_json(transcoded_request):
+            # Jsonify the request body
+
+            body = json_format.MessageToJson(
+                transcoded_request["body"], use_integers_for_enums=True
+            )
+            return body
+
+        @staticmethod
+        def _get_query_params_json(transcoded_request):
+            query_params = json.loads(
+                json_format.MessageToJson(
+                    transcoded_request["query_params"],
+                    use_integers_for_enums=True,
+                )
+            )
+            query_params.update(
+                _BaseSpannerRestTransport._BaseBatchCreateSessions._get_unset_required_fields(
+                    query_params
+                )
+            )
+
+            query_params["$alt"] = "json;enum-encoding=int"
+            return query_params
+
+    class _BaseBatchWrite:
+        def __hash__(self):  # pragma: NO COVER
+            return NotImplementedError("__hash__ must be implemented.")
+
+        __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {}
+
+        @classmethod
+        def
_get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:batchWrite", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner.BatchWriteRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseSpannerRestTransport._BaseBatchWrite._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseBeginTransaction: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:beginTransaction", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner.BeginTransactionRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseSpannerRestTransport._BaseBeginTransaction._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseCommit: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:commit", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner.CommitRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def 
_get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseSpannerRestTransport._BaseCommit._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseCreateSession: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{database=projects/*/instances/*/databases/*}/sessions", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner.CreateSessionRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseSpannerRestTransport._BaseCreateSession._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseDeleteSession: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "delete", + "uri": "/v1/{name=projects/*/instances/*/databases/*/sessions/*}", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner.DeleteSessionRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseSpannerRestTransport._BaseDeleteSession._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseExecuteBatchDml: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in 
cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:executeBatchDml", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner.ExecuteBatchDmlRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseSpannerRestTransport._BaseExecuteBatchDml._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseExecuteSql: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:executeSql", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner.ExecuteSqlRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseSpannerRestTransport._BaseExecuteSql._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseExecuteStreamingSql: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:executeStreamingSql", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner.ExecuteSqlRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body 
= json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseSpannerRestTransport._BaseExecuteStreamingSql._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseGetSession: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{name=projects/*/instances/*/databases/*/sessions/*}", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner.GetSessionRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseSpannerRestTransport._BaseGetSession._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseListSessions: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "get", + "uri": "/v1/{database=projects/*/instances/*/databases/*}/sessions", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner.ListSessionsRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseSpannerRestTransport._BaseListSessions._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BasePartitionQuery: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:partitionQuery", + "body": "*", + }, + ] + return http_options + + @staticmethod + def 
_get_transcoded_request(http_options, request): + pb_request = spanner.PartitionQueryRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseSpannerRestTransport._BasePartitionQuery._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BasePartitionRead: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:partitionRead", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner.PartitionReadRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseSpannerRestTransport._BasePartitionRead._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseRead: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:read", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner.ReadRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + 
_BaseSpannerRestTransport._BaseRead._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseRollback: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:rollback", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner.RollbackRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseSpannerRestTransport._BaseRollback._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + class _BaseStreamingRead: + def __hash__(self): # pragma: NO COVER + return NotImplementedError("__hash__ must be implemented.") + + __REQUIRED_FIELDS_DEFAULT_VALUES: Dict[str, Any] = {} + + @classmethod + def _get_unset_required_fields(cls, message_dict): + return { + k: v + for k, v in cls.__REQUIRED_FIELDS_DEFAULT_VALUES.items() + if k not in message_dict + } + + @staticmethod + def _get_http_options(): + http_options: List[Dict[str, str]] = [ + { + "method": "post", + "uri": "/v1/{session=projects/*/instances/*/databases/*/sessions/*}:streamingRead", + "body": "*", + }, + ] + return http_options + + @staticmethod + def _get_transcoded_request(http_options, request): + pb_request = spanner.ReadRequest.pb(request) + transcoded_request = path_template.transcode(http_options, pb_request) + return transcoded_request + + @staticmethod + def _get_request_body_json(transcoded_request): + # Jsonify the request body + + body = json_format.MessageToJson( + transcoded_request["body"], use_integers_for_enums=True + ) + return body + + @staticmethod + def _get_query_params_json(transcoded_request): + query_params = json.loads( + json_format.MessageToJson( + transcoded_request["query_params"], + use_integers_for_enums=True, + ) + ) + query_params.update( + _BaseSpannerRestTransport._BaseStreamingRead._get_unset_required_fields( + query_params + ) + ) + + query_params["$alt"] = "json;enum-encoding=int" + return query_params + + +__all__ = ("_BaseSpannerRestTransport",) diff --git a/google/cloud/spanner_v1/session.py b/google/cloud/spanner_v1/session.py index 61e4322012..7b6634c728 100644 --- a/google/cloud/spanner_v1/session.py +++ b/google/cloud/spanner_v1/session.py @@ -16,19 +16,31 @@ from functools import total_ordering import time - -from google.rpc.error_details_pb2 import RetryInfo - -# pylint: disable=ungrouped-imports -from google.api_core.exceptions import Aborted, GoogleAPICallError, NotFound -import 
google.api_core.gapic_v1.method -from google.cloud.spanner_v1._helpers import _metadata_with_prefix +from datetime import datetime +from typing import MutableMapping, Optional + +from google.api_core.exceptions import Aborted +from google.api_core.exceptions import GoogleAPICallError +from google.api_core.exceptions import NotFound +from google.api_core.gapic_v1 import method +from google.cloud.spanner_v1._helpers import _delay_until_retry +from google.cloud.spanner_v1._helpers import _get_retry_delay + +from google.cloud.spanner_v1 import ExecuteSqlRequest +from google.cloud.spanner_v1 import CreateSessionRequest +from google.cloud.spanner_v1._helpers import ( + _metadata_with_prefix, + _metadata_with_leader_aware_routing, +) +from google.cloud.spanner_v1._opentelemetry_tracing import ( + add_span_event, + get_current_span, + trace_call, +) from google.cloud.spanner_v1.batch import Batch from google.cloud.spanner_v1.snapshot import Snapshot from google.cloud.spanner_v1.transaction import Transaction -import random - -# pylint: enable=ungrouped-imports +from google.cloud.spanner_v1.metrics.metrics_capture import MetricsCapture DEFAULT_RETRY_TIMEOUT_SECS = 30 @@ -50,16 +62,25 @@ class Session(object): :type labels: dict (str -> str) :param labels: (Optional) User-assigned labels for the session. - """ - _session_id = None - _transaction = None + :type database_role: str + :param database_role: (Optional) user-assigned database_role for the session. + + :type is_multiplexed: bool + :param is_multiplexed: (Optional) whether this session is a multiplexed session. + """ - def __init__(self, database, labels=None): + def __init__(self, database, labels=None, database_role=None, is_multiplexed=False): self._database = database + self._session_id: Optional[str] = None + if labels is None: labels = {} - self._labels = labels + + self._labels: MutableMapping[str, str] = labels + self._database_role: Optional[str] = database_role + self._is_multiplexed: bool = is_multiplexed + self._last_use_time: datetime = datetime.utcnow() def __lt__(self, other): return self._session_id < other._session_id @@ -69,6 +90,31 @@ def session_id(self): """Read-only ID, set by the back-end during :meth:`create`.""" return self._session_id + @property + def is_multiplexed(self): + """Whether this session is a multiplexed session. + + :rtype: bool + :returns: True if this is a multiplexed session, False otherwise. + """ + return self._is_multiplexed + + @property + def last_use_time(self): + """Approximate last use time of this session + + :rtype: datetime + :returns: the approximate last use time of this session""" + return self._last_use_time + + @property + def database_role(self): + """User-assigned database-role for the session. + + :rtype: str + :returns: the database role str (None if no database role were assigned).""" + return self._database_role + @property def labels(self): """User-assigned labels for the session. @@ -105,16 +151,56 @@ def create(self): See https://cloud.google.com/spanner/reference/rpc/google.spanner.v1#google.spanner.v1.Spanner.CreateSession - :raises: :exc:`ValueError` if :attr:`session_id` is already set. + :raises ValueError: if :attr:`session_id` is already set. 
""" + current_span = get_current_span() + add_span_event(current_span, "Creating Session") + if self._session_id is not None: raise ValueError("Session ID already set by back-end") - api = self._database.spanner_api - metadata = _metadata_with_prefix(self._database.name) - kw = {} + + database = self._database + api = database.spanner_api + + metadata = _metadata_with_prefix(database.name) + if database._route_to_leader_enabled: + metadata.append( + _metadata_with_leader_aware_routing(database._route_to_leader_enabled) + ) + + create_session_request = CreateSessionRequest(database=database.name) + if database.database_role is not None: + create_session_request.session.creator_role = database.database_role + if self._labels: - kw = {"session": {"labels": self._labels}} - session_pb = api.create_session(self._database.name, metadata=metadata, **kw) + create_session_request.session.labels = self._labels + + # Set the multiplexed field for multiplexed sessions + if self._is_multiplexed: + create_session_request.session.multiplexed = True + + observability_options = getattr(database, "observability_options", None) + span_name = ( + "CloudSpanner.CreateMultiplexedSession" + if self._is_multiplexed + else "CloudSpanner.CreateSession" + ) + with trace_call( + span_name, + self, + self._labels, + observability_options=observability_options, + metadata=metadata, + ) as span, MetricsCapture(): + session_pb = api.create_session( + request=create_session_request, + metadata=database.metadata_with_request_id( + database._next_nth_request, + 1, + metadata, + span, + ), + ) self._session_id = session_pb.name.split("/")[-1] def exists(self): @@ -126,14 +212,51 @@ def exists(self): :rtype: bool :returns: True if the session exists on the back-end, else False. """ + current_span = get_current_span() if self._session_id is None: + add_span_event( + current_span, + "Checking session existence: Session does not exist as it has not been created yet", + ) return False - api = self._database.spanner_api + + add_span_event( + current_span, "Checking if Session exists", {"session.id": self._session_id} + ) + + database = self._database + api = database.spanner_api metadata = _metadata_with_prefix(self._database.name) - try: - api.get_session(self.name, metadata=metadata) - except NotFound: - return False + if self._database._route_to_leader_enabled: + metadata.append( + _metadata_with_leader_aware_routing( + self._database._route_to_leader_enabled + ) + ) + + observability_options = getattr(self._database, "observability_options", None) + with trace_call( + "CloudSpanner.GetSession", + self, + observability_options=observability_options, + metadata=metadata, + ) as span, MetricsCapture(): + try: + api.get_session( + name=self.name, + metadata=database.metadata_with_request_id( + database._next_nth_request, + 1, + metadata, + span, + ), + ) + if span: + span.set_attribute("session_found", True) + except NotFound: + if span: + span.set_attribute("session_found", False) + return False return True @@ -146,12 +269,66 @@ def delete(self): :raises ValueError: if :attr:`session_id` is not already set. 
:raises NotFound: if the session does not exist """ + current_span = get_current_span() if self._session_id is None: + add_span_event( + current_span, "Deleting Session failed due to unset session_id" + ) raise ValueError("Session ID not set by back-end") - api = self._database.spanner_api - metadata = _metadata_with_prefix(self._database.name) + if self._is_multiplexed: + add_span_event( + current_span, + "Skipped deleting Multiplexed Session", + {"session.id": self._session_id}, + ) + return + add_span_event( + current_span, "Deleting Session", {"session.id": self._session_id} + ) - api.delete_session(self.name, metadata=metadata) + database = self._database + api = database.spanner_api + metadata = _metadata_with_prefix(database.name) + observability_options = getattr(self._database, "observability_options", None) + with trace_call( + "CloudSpanner.DeleteSession", + self, + extra_attributes={ + "session.id": self._session_id, + "session.name": self.name, + }, + observability_options=observability_options, + metadata=metadata, + ) as span, MetricsCapture(): + api.delete_session( + name=self.name, + metadata=database.metadata_with_request_id( + database._next_nth_request, + 1, + metadata, + span, + ), + ) + + def ping(self): + """Ping the session to keep it alive by executing "SELECT 1". + + :raises ValueError: if :attr:`session_id` is not already set. + """ + if self._session_id is None: + raise ValueError("Session ID not set by back-end") + database = self._database + api = database.spanner_api + request = ExecuteSqlRequest(session=self.name, sql="SELECT 1") + api.execute_sql( + request=request, + metadata=database.metadata_with_request_id( + database._next_nth_request, + 1, + _metadata_with_prefix(database.name), + ), + ) + self._last_use_time = datetime.now() def snapshot(self, **kw): """Create a snapshot to perform a set of reads with shared staleness. @@ -172,7 +349,7 @@ def snapshot(self, **kw): return Snapshot(self, **kw) - def read(self, table, columns, keyset, index="", limit=0): + def read(self, table, columns, keyset, index="", limit=0, column_info=None): """Perform a ``StreamingRead`` API request for rows in a table. :type table: str @@ -191,10 +368,21 @@ def read(self, table, columns, keyset, index="", limit=0): :type limit: int :param limit: (Optional) maximum number of rows to return + :type column_info: dict + :param column_info: (Optional) dict of mapping between column names and additional column information. + An object where column names as keys and custom objects as corresponding + values for deserialization. It's specifically useful for data types like + protobuf where deserialization logic is on user-specific code. When provided, + the custom object enables deserialization of backend-received column data. + If not provided, data remains serialized as bytes for Proto Messages and + integer for Proto Enums. + :rtype: :class:`~google.cloud.spanner_v1.streamed.StreamedResultSet` :returns: a result set instance which can be used to consume rows. """ - return self.snapshot().read(table, columns, keyset, index, limit) + return self.snapshot().read( + table, columns, keyset, index, limit, column_info=column_info + ) def execute_sql( self, @@ -203,8 +391,10 @@ def execute_sql( param_types=None, query_mode=None, query_options=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, + request_options=None, + retry=method.DEFAULT, + timeout=method.DEFAULT, + column_info=None, ): """Perform an ``ExecuteStreamingSql`` API request. 
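For context on the hunk above: the new ``column_info`` argument threads from ``Session.read`` / ``Session.execute_sql`` through to the underlying snapshot, letting callers supply a deserializer for proto-typed columns that would otherwise come back as raw bytes. A minimal usage sketch, not part of the diff — ``singers_pb2`` is a hypothetical generated protobuf module, and the instance/database names are placeholders:

.. code-block:: python

    from google.cloud import spanner

    import singers_pb2  # hypothetical generated protobuf module

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    with database.snapshot() as snapshot:
        results = snapshot.execute_sql(
            "SELECT SingerId, SingerInfo FROM Singers",
            # Map column name -> message instance used to deserialize that
            # column; omit the mapping and SingerInfo stays as bytes.
            column_info={"SingerInfo": singers_pb2.SingerInfo()},
        )
        for row in results:
            print(row)

Pooled sessions get the same behavior, since ``Session.read`` and ``Session.execute_sql`` simply forward ``column_info`` to the snapshot they create.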
@@ -216,21 +406,43 @@ def execute_sql( the names used in ``sql``. :type param_types: - dict, {str -> :class:`~google.spanner.v1.type_pb2.TypeCode`} + dict, {str -> :class:`~google.spanner.v1.types.TypeCode`} :param param_types: (Optional) explicit types for one or more param values; overrides default type detection on the back-end. :type query_mode: - :class:`~google.spanner.v1.spanner_pb2.ExecuteSqlRequest.QueryMode` + :class:`~google.spanner.v1.types.ExecuteSqlRequest.QueryMode` :param query_mode: Mode governing return of results / query plan. See: `QueryMode `_. :type query_options: - :class:`~google.cloud.spanner_v1.proto.ExecuteSqlRequest.QueryOptions` + :class:`~google.cloud.spanner_v1.types.ExecuteSqlRequest.QueryOptions` or :class:`dict` :param query_options: (Optional) Options that are provided for query plan stability. + :type request_options: + :class:`google.cloud.spanner_v1.types.RequestOptions` + :param request_options: + (Optional) Common options for this request. + If a dict is provided, it must be of the same form as the protobuf + message :class:`~google.cloud.spanner_v1.types.RequestOptions`. + + :type retry: :class:`~google.api_core.retry.Retry` + :param retry: (Optional) The retry settings for this request. + + :type timeout: float + :param timeout: (Optional) The timeout for this request. + + :type column_info: dict + :param column_info: (Optional) dict of mapping between column names and additional column information. + An object where column names as keys and custom objects as corresponding + values for deserialization. It's specifically useful for data types like + protobuf where deserialization logic is on user-specific code. When provided, + the custom object enables deserialization of backend-received column data. + If not provided, data remains serialized as bytes for Proto Messages and + integer for Proto Enums. + :rtype: :class:`~google.cloud.spanner_v1.streamed.StreamedResultSet` :returns: a result set instance which can be used to consume rows. """ @@ -240,8 +452,10 @@ def execute_sql( param_types, query_mode, query_options=query_options, + request_options=request_options, retry=retry, timeout=timeout, + column_info=column_info, ) def batch(self): @@ -256,22 +470,18 @@ def batch(self): return Batch(self) - def transaction(self): + def transaction(self) -> Transaction: """Create a transaction to perform a set of reads with shared staleness. :rtype: :class:`~google.cloud.spanner_v1.transaction.Transaction` :returns: a transaction bound to this session + :raises ValueError: if the session has not yet been created. """ if self._session_id is None: raise ValueError("Session has not been created.") - if self._transaction is not None: - self._transaction.rolled_back = True - del self._transaction - - txn = self._transaction = Transaction(self) - return txn + return Transaction(self) def run_in_transaction(self, func, *args, **kw): """Perform a unit of work in a transaction, retrying on abort. @@ -286,108 +496,151 @@ def run_in_transaction(self, func, *args, **kw): :type kw: dict :param kw: (Optional) keyword arguments to be passed to ``func``. - If passed, "timeout_secs" will be removed and used to + If passed: + "timeout_secs" will be removed and used to override the default retry timeout which defines maximum timestamp to continue retrying the transaction. + "commit_request_options" will be removed and used to set the + request options for the commit request. + "max_commit_delay" will be removed and used to set the max commit delay for the request. 
+ "transaction_tag" will be removed and used to set the transaction tag for the request. + "exclude_txn_from_change_streams" if true, instructs the transaction to be excluded + from being recorded in change streams with the DDL option `allow_txn_exclusion=true`. + This does not exclude the transaction from being recorded in the change streams with + the DDL option `allow_txn_exclusion` being false or unset. + "isolation_level" sets the isolation level for the transaction. + "read_lock_mode" sets the read lock mode for the transaction. :rtype: Any :returns: The return value of ``func``. :raises Exception: - reraises any non-ABORT execptions raised by ``func``. + reraises any non-ABORT exceptions raised by ``func``. """ deadline = time.time() + kw.pop("timeout_secs", DEFAULT_RETRY_TIMEOUT_SECS) - attempts = 0 - - while True: - if self._transaction is None: + default_retry_delay = kw.pop("default_retry_delay", None) + commit_request_options = kw.pop("commit_request_options", None) + max_commit_delay = kw.pop("max_commit_delay", None) + transaction_tag = kw.pop("transaction_tag", None) + exclude_txn_from_change_streams = kw.pop( + "exclude_txn_from_change_streams", None + ) + isolation_level = kw.pop("isolation_level", None) + read_lock_mode = kw.pop("read_lock_mode", None) + + database = self._database + log_commit_stats = database.log_commit_stats + + with trace_call( + "CloudSpanner.Session.run_in_transaction", + self, + observability_options=getattr(database, "observability_options", None), + ) as span, MetricsCapture(): + attempts: int = 0 + + # If a transaction using a multiplexed session is retried after an aborted + # user operation, it should include the previous transaction ID in the + # transaction options used to begin the transaction. This allows the backend + # to recognize the transaction and increase the lock order for the new + # transaction that is created. + # See :attr:`~google.cloud.spanner_v1.types.TransactionOptions.ReadWrite.multiplexed_session_previous_transaction_id` + previous_transaction_id: Optional[bytes] = None + + while True: txn = self.transaction() - else: - txn = self._transaction - if txn._transaction_id is None: - txn.begin() + txn.transaction_tag = transaction_tag + txn.exclude_txn_from_change_streams = exclude_txn_from_change_streams + txn.isolation_level = isolation_level + txn.read_lock_mode = read_lock_mode - try: - attempts += 1 - return_value = func(txn, *args, **kw) - except Aborted as exc: - del self._transaction - _delay_until_retry(exc, deadline, attempts) - continue - except GoogleAPICallError: - del self._transaction - raise - except Exception: - txn.rollback() - raise - - try: - txn.commit() - except Aborted as exc: - del self._transaction - _delay_until_retry(exc, deadline, attempts) - except GoogleAPICallError: - del self._transaction - raise - else: - return return_value - - -# pylint: disable=misplaced-bare-raise -# -# Rational: this function factors out complex shared deadline / retry -# handling from two `except:` clauses. -def _delay_until_retry(exc, deadline, attempts): - """Helper for :meth:`Session.run_in_transaction`. + if self.is_multiplexed: + txn._multiplexed_session_previous_transaction_id = ( + previous_transaction_id + ) - Detect retryable abort, and impose server-supplied delay. - - :type exc: :class:`google.api_core.exceptions.Aborted` - :param exc: exception for aborted transaction - - :type deadline: float - :param deadline: maximum timestamp to continue retrying the transaction. 
- - :type attempts: int - :param attempts: number of call retries - """ - cause = exc.errors[0] - - now = time.time() - - if now >= deadline: - raise - - delay = _get_retry_delay(cause, attempts) - if delay is not None: - - if now + delay > deadline: - raise - - time.sleep(delay) - - -# pylint: enable=misplaced-bare-raise - - -def _get_retry_delay(cause, attempts): - """Helper for :func:`_delay_until_retry`. - - :type exc: :class:`grpc.Call` - :param exc: exception for aborted transaction - - :rtype: float - :returns: seconds to wait before retrying the transaction. - - :type attempts: int - :param attempts: number of call retries - """ - metadata = dict(cause.trailing_metadata()) - retry_info_pb = metadata.get("google.rpc.retryinfo-bin") - if retry_info_pb is not None: - retry_info = RetryInfo() - retry_info.ParseFromString(retry_info_pb) - nanos = retry_info.retry_delay.nanos - return retry_info.retry_delay.seconds + nanos / 1.0e9 - - return 2 ** attempts + random.random() + attempts += 1 + span_attributes = dict(attempt=attempts) + + try: + return_value = func(txn, *args, **kw) + + except Aborted as exc: + previous_transaction_id = txn._transaction_id + if span: + delay_seconds = _get_retry_delay( + exc.errors[0], + attempts, + default_retry_delay=default_retry_delay, + ) + attributes = dict(delay_seconds=delay_seconds, cause=str(exc)) + attributes.update(span_attributes) + add_span_event( + span, + "Transaction was aborted in user operation, retrying", + attributes, + ) + + _delay_until_retry( + exc, deadline, attempts, default_retry_delay=default_retry_delay + ) + continue + + except GoogleAPICallError: + add_span_event( + span, + "User operation failed due to GoogleAPICallError, not retrying", + span_attributes, + ) + raise + + except Exception: + add_span_event( + span, + "User operation failed. 
Invoking Transaction.rollback(), not retrying", + span_attributes, + ) + txn.rollback() + raise + + try: + txn.commit( + return_commit_stats=log_commit_stats, + request_options=commit_request_options, + max_commit_delay=max_commit_delay, + ) + + except Aborted as exc: + previous_transaction_id = txn._transaction_id + if span: + delay_seconds = _get_retry_delay( + exc.errors[0], + attempts, + default_retry_delay=default_retry_delay, + ) + attributes = dict(delay_seconds=delay_seconds) + attributes.update(span_attributes) + add_span_event( + span, + "Transaction was aborted during commit, retrying", + attributes, + ) + + _delay_until_retry( + exc, deadline, attempts, default_retry_delay=default_retry_delay + ) + + except GoogleAPICallError: + add_span_event( + span, + "Transaction.commit failed due to GoogleAPICallError, not retrying", + span_attributes, + ) + raise + + else: + if log_commit_stats and txn.commit_stats: + database.logger.info( + "CommitStats: {}".format(txn.commit_stats), + extra={"commit_stats": txn.commit_stats}, + ) + return return_value diff --git a/google/cloud/spanner_v1/snapshot.py b/google/cloud/spanner_v1/snapshot.py index f7b9f07f8f..5633cd4486 100644 --- a/google/cloud/spanner_v1/snapshot.py +++ b/google/cloud/spanner_v1/snapshot.py @@ -15,42 +15,189 @@ """Model a set of read-only queries to a database as a snapshot.""" import functools +import threading +from typing import List, Union, Optional from google.protobuf.struct_pb2 import Struct -from google.cloud.spanner_v1.proto.transaction_pb2 import TransactionOptions -from google.cloud.spanner_v1.proto.transaction_pb2 import TransactionSelector - +from google.cloud.spanner_v1 import ( + ExecuteSqlRequest, + PartialResultSet, + ResultSet, + Transaction, + Mutation, + BeginTransactionRequest, +) +from google.cloud.spanner_v1 import ReadRequest +from google.cloud.spanner_v1 import TransactionOptions +from google.cloud.spanner_v1 import TransactionSelector +from google.cloud.spanner_v1 import PartitionOptions +from google.cloud.spanner_v1 import PartitionQueryRequest +from google.cloud.spanner_v1 import PartitionReadRequest + +from google.api_core.exceptions import InternalServerError, Aborted from google.api_core.exceptions import ServiceUnavailable -import google.api_core.gapic_v1.method -from google.cloud._helpers import _datetime_to_pb_timestamp -from google.cloud.spanner_v1._helpers import _merge_query_options -from google.cloud._helpers import _timedelta_to_duration_pb -from google.cloud.spanner_v1._helpers import _make_value_pb -from google.cloud.spanner_v1._helpers import _metadata_with_prefix -from google.cloud.spanner_v1._helpers import _SessionWrapper +from google.api_core.exceptions import InvalidArgument +from google.api_core import gapic_v1 +from google.cloud.spanner_v1._helpers import ( + _make_value_pb, + _merge_query_options, + _metadata_with_prefix, + _metadata_with_leader_aware_routing, + _retry, + _check_rst_stream_error, + _SessionWrapper, + AtomicCounter, +) +from google.cloud.spanner_v1._opentelemetry_tracing import trace_call, add_span_event from google.cloud.spanner_v1.streamed import StreamedResultSet -from google.cloud.spanner_v1.types import PartitionOptions +from google.cloud.spanner_v1 import RequestOptions + +from google.cloud.spanner_v1.metrics.metrics_capture import MetricsCapture +from google.cloud.spanner_v1.types import MultiplexedSessionPrecommitToken + +_STREAM_RESUMPTION_INTERNAL_ERROR_MESSAGES = ( + "RST_STREAM", + "Received unexpected EOS on DATA frame from server", +) + + +def 
_restart_on_unavailable( + method, + request, + metadata=None, + trace_name=None, + session=None, + attributes=None, + transaction=None, + transaction_selector=None, + observability_options=None, + request_id_manager=None, +): + """Restart iteration after :exc:`.ServiceUnavailable`. + :type method: callable + :param method: function returning iterator -def _restart_on_unavailable(restart): - """Restart iteration after :exc:`.ServiceUnavailable`. + :type request: proto + :param request: request proto to call the method with - :type restart: callable - :param restart: curried function returning iterator + :type transaction: :class:`google.cloud.spanner_v1.snapshot._SnapshotBase` + :param transaction: Snapshot or Transaction class object based on the type of transaction + + :type transaction_selector: :class:`transaction_pb2.TransactionSelector` + :param transaction_selector: Transaction selector object to be used in request if transaction is not passed, + if both transaction_selector and transaction are passed, then transaction is given priority. """ - resume_token = b"" - item_buffer = [] - iterator = restart() + + resume_token: bytes = b"" + item_buffer: List[PartialResultSet] = [] + + if transaction is not None: + transaction_selector = transaction._build_transaction_selector_pb() + elif transaction_selector is None: + raise InvalidArgument( + "Either transaction or transaction_selector should be set" + ) + + request.transaction = transaction_selector + iterator = None + attempt = 1 + nth_request = getattr(request_id_manager, "_next_nth_request", 0) + while True: try: + # Get results iterator. + if iterator is None: + with trace_call( + trace_name, + session, + attributes, + observability_options=observability_options, + metadata=metadata, + ) as span, MetricsCapture(): + iterator = method( + request=request, + metadata=request_id_manager.metadata_with_request_id( + nth_request, + attempt, + metadata, + span, + ), + ) + + # Add items from iterator to buffer. + item: PartialResultSet for item in iterator: item_buffer.append(item) + + # Update the transaction from the response. 
+ if transaction is not None: + transaction._update_for_result_set_pb(item) + if ( + item._pb is not None + and item._pb.HasField("precommit_token") + and transaction is not None + ): + transaction._update_for_precommit_token_pb(item.precommit_token) + if item.resume_token: resume_token = item.resume_token break + except ServiceUnavailable: del item_buffer[:] - iterator = restart(resume_token=resume_token) + with trace_call( + trace_name, + session, + attributes, + observability_options=observability_options, + metadata=metadata, + ) as span, MetricsCapture(): + request.resume_token = resume_token + if transaction is not None: + transaction_selector = transaction._build_transaction_selector_pb() + request.transaction = transaction_selector + attempt += 1 + iterator = method( + request=request, + metadata=request_id_manager.metadata_with_request_id( + nth_request, + attempt, + metadata, + span, + ), + ) + continue + + except InternalServerError as exc: + resumable_error = any( + resumable_message in exc.message + for resumable_message in _STREAM_RESUMPTION_INTERNAL_ERROR_MESSAGES + ) + if not resumable_error: + raise + del item_buffer[:] + with trace_call( + trace_name, + session, + attributes, + observability_options=observability_options, + metadata=metadata, + ) as span, MetricsCapture(): + request.resume_token = resume_token + if transaction is not None: + transaction_selector = transaction._build_transaction_selector_pb() + attempt += 1 + request.transaction = transaction_selector + iterator = method( + request=request, + metadata=request_id_manager.metadata_with_request_id( + nth_request, + attempt, + metadata, + span, + ), + ) continue if len(item_buffer) == 0: @@ -68,26 +215,62 @@ class _SnapshotBase(_SessionWrapper): Allows reuse of API request methods with different transaction selector. :type session: :class:`~google.cloud.spanner_v1.session.Session` - :param session: the session used to perform the commit + :param session: the session used to perform transaction operations. """ - _multi_use = False - _transaction_id = None - _read_request_count = 0 - _execute_sql_count = 0 + _read_only: bool = True + _multi_use: bool = False - def _make_txn_selector(self): # pylint: disable=redundant-returns-doc - """Helper for :meth:`read` / :meth:`execute_sql`. + def __init__(self, session): + super().__init__(session) - Subclasses must override, returning an instance of - :class:`transaction_pb2.TransactionSelector` - appropriate for making ``read`` / ``execute_sql`` requests + # Counts for execute SQL requests and total read requests (including + # execute SQL requests). Used to provide sequence numbers for + # :class:`google.cloud.spanner_v1.types.ExecuteSqlRequest` and to + # verify that single-use transactions are not used more than once, + # respectively. + self._execute_sql_request_count: int = 0 + self._read_request_count: int = 0 + + # Identifier for the transaction. + self._transaction_id: Optional[bytes] = None + + # Precommit tokens are returned for transactions with + # multiplexed sessions. The precommit token with the + # highest sequence number is included in the commit request. + self._precommit_token: Optional[MultiplexedSessionPrecommitToken] = None + + # Operations within a transaction can be performed using multiple + # threads, so we need to use a lock when updating the transaction. + self._lock: threading.Lock = threading.Lock() + + def begin(self) -> bytes: + """Begins a transaction on the database. + + :rtype: bytes + :returns: identifier for the transaction. 
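The restart helper above buffers streamed items until a resume token is observed, so a restarted stream never re-surfaces rows the caller has already consumed. A simplified, standalone sketch of that checkpointing pattern, where ``method`` is assumed to return an iterator of ``PartialResultSet``-like messages for a request with a ``resume_token`` field:

.. code-block:: python

    from google.api_core.exceptions import ServiceUnavailable

    def iter_with_resumption(method, request):
        resume_token = b""
        buffer = []
        iterator = None
        while True:
            try:
                if iterator is None:
                    iterator = method(request=request)
                for item in iterator:
                    buffer.append(item)
                    if item.resume_token:
                        resume_token = item.resume_token
                        break
            except ServiceUnavailable:
                # Drop unconfirmed items and restart from the checkpoint.
                del buffer[:]
                request.resume_token = resume_token
                iterator = None
                continue
            if not buffer:
                return
            # Only items covered by a resume token (or by normal stream
            # end) are surfaced to the caller.
            yield from buffer
            del buffer[:]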
- :raises: NotImplementedError, always + :raises ValueError: if the transaction has already begun. """ - raise NotImplementedError + return self._begin_transaction() - def read(self, table, columns, keyset, index="", limit=0, partition=None): + def read( + self, + table, + columns, + keyset, + index="", + limit=0, + partition=None, + request_options=None, + data_boost_enabled=False, + directed_read_options=None, + *, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + column_info=None, + lazy_decode=False, + ): """Perform a ``StreamingRead`` API request for rows in a table. :type table: str @@ -112,45 +295,124 @@ def read(self, table, columns, keyset, index="", limit=0, partition=None): from :meth:`partition_read`. Incompatible with ``limit``. + :type request_options: + :class:`google.cloud.spanner_v1.types.RequestOptions` + :param request_options: + (Optional) Common options for this request. + If a dict is provided, it must be of the same form as the protobuf + message :class:`~google.cloud.spanner_v1.types.RequestOptions`. + Please note, the `transactionTag` setting will be ignored for + snapshot as it's not supported for read-only transactions. + + :type retry: :class:`~google.api_core.retry.Retry` + :param retry: (Optional) The retry settings for this request. + + :type timeout: float + :param timeout: (Optional) The timeout for this request. + + :type data_boost_enabled: + :param data_boost_enabled: + (Optional) If this is for a partitioned read and this field is + set ``true``, the request will be executed via offline access. + If the field is set to ``true`` but the request does not set + ``partition_token``, the API will return an + ``INVALID_ARGUMENT`` error. + + :type directed_read_options: :class:`~google.cloud.spanner_v1.DirectedReadOptions` + or :class:`dict` + :param directed_read_options: (Optional) Request level option used to set the directed_read_options + for all ReadRequests and ExecuteSqlRequests that indicates which replicas + or regions should be used for non-transactional reads or queries. + + :type column_info: dict + :param column_info: (Optional) dict of mapping between column names and additional column information. + An object where column names as keys and custom objects as corresponding + values for deserialization. It's specifically useful for data types like + protobuf where deserialization logic is on user-specific code. When provided, + the custom object enables deserialization of backend-received column data. + If not provided, data remains serialized as bytes for Proto Messages and + integer for Proto Enums. + + :type lazy_decode: bool + :param lazy_decode: + (Optional) If this argument is set to ``true``, the iterator + returns the underlying protobuf values instead of decoded Python + objects. This reduces the time that is needed to iterate through + large result sets. The application is responsible for decoding + the data that is needed. The returned row iterator contains two + functions that can be used for this. ``iterator.decode_row(row)`` + decodes all the columns in the given row to an array of Python + objects. ``iterator.decode_column(row, column_index)`` decodes one + specific column in the given row. + :rtype: :class:`~google.cloud.spanner_v1.streamed.StreamedResultSet` :returns: a result set instance which can be used to consume rows. - :raises ValueError: - for reuse of single-use snapshots, or if a transaction ID is - already pending for multiple-use snapshots. 
+ :raises ValueError: if the Transaction already used to execute a + read request, but is not a multi-use transaction or has not begun. """ + if self._read_request_count > 0: if not self._multi_use: raise ValueError("Cannot re-use single-use snapshot.") if self._transaction_id is None: - raise ValueError("Transaction ID pending.") + raise ValueError("Transaction has not begun.") - database = self._session._database + session = self._session + database = session._database api = database.spanner_api + metadata = _metadata_with_prefix(database.name) - transaction = self._make_txn_selector() + if not self._read_only and database._route_to_leader_enabled: + metadata.append( + _metadata_with_leader_aware_routing(database._route_to_leader_enabled) + ) - restart = functools.partial( - api.streaming_read, - self._session.name, - table, - columns, - keyset._to_pb(), - transaction=transaction, + if request_options is None: + request_options = RequestOptions() + elif type(request_options) is dict: + request_options = RequestOptions(request_options) + + if self._read_only: + # Transaction tags are not supported for read only transactions. + request_options.transaction_tag = None + if ( + directed_read_options is None + and database._directed_read_options is not None + ): + directed_read_options = database._directed_read_options + elif self.transaction_tag is not None: + request_options.transaction_tag = self.transaction_tag + + read_request = ReadRequest( + session=session.name, + table=table, + columns=columns, + key_set=keyset._to_pb(), index=index, limit=limit, partition_token=partition, - metadata=metadata, + request_options=request_options, + data_boost_enabled=data_boost_enabled, + directed_read_options=directed_read_options, ) - iterator = _restart_on_unavailable(restart) - - self._read_request_count += 1 + streaming_read_method = functools.partial( + api.streaming_read, + request=read_request, + metadata=metadata, + retry=retry, + timeout=timeout, + ) - if self._multi_use: - return StreamedResultSet(iterator, source=self) - else: - return StreamedResultSet(iterator) + return self._get_streamed_result_set( + method=streaming_read_method, + request=read_request, + metadata=metadata, + trace_attributes={"table_id": table, "columns": columns}, + column_info=column_info, + lazy_decode=lazy_decode, + ) def execute_sql( self, @@ -159,9 +421,15 @@ def execute_sql( param_types=None, query_mode=None, query_options=None, + request_options=None, + last_statement=False, partition=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + data_boost_enabled=False, + directed_read_options=None, + column_info=None, + lazy_decode=False, ): """Perform an ``ExecuteStreamingSql`` API request. @@ -178,19 +446,39 @@ def execute_sql( required if parameters are passed. :type query_mode: - :class:`~google.cloud.spanner_v1.proto.ExecuteSqlRequest.QueryMode` + :class:`~google.cloud.spanner_v1.types.ExecuteSqlRequest.QueryMode` :param query_mode: Mode governing return of results / query plan. See: `QueryMode `_. :type query_options: - :class:`~google.cloud.spanner_v1.proto.ExecuteSqlRequest.QueryOptions` + :class:`~google.cloud.spanner_v1.types.ExecuteSqlRequest.QueryOptions` or :class:`dict` :param query_options: (Optional) Query optimizer configuration to use for the given query. 
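With the new ``read()`` signature in place, a hypothetical call exercising several of the options documented above — ``database`` is assumed to be an existing ``Database``, and ``Singers`` and its columns are illustrative:

.. code-block:: python

    from google.cloud.spanner_v1 import KeySet, RequestOptions

    with database.snapshot(multi_use=True) as snapshot:
        rows = snapshot.read(
            table="Singers",
            columns=("SingerId", "FirstName"),
            keyset=KeySet(all_=True),
            request_options=RequestOptions(request_tag="app=demo,action=scan"),
            lazy_decode=True,
        )
        for row in rows:
            # With lazy_decode=True, each row holds raw protobuf values
            # until explicitly decoded.
            print(rows.decode_row(row))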
If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.spanner_v1.types.QueryOptions` + :type request_options: + :class:`google.cloud.spanner_v1.types.RequestOptions` + :param request_options: + (Optional) Common options for this request. + If a dict is provided, it must be of the same form as the protobuf + message :class:`~google.cloud.spanner_v1.types.RequestOptions`. + + :type last_statement: bool + :param last_statement: + If set to true, this option marks the end of the transaction. The + transaction should be committed or aborted after this statement + executes, and attempts to execute any other requests against this + transaction (including reads and queries) will be rejected. Mixing + mutations with statements that are marked as the last statement is + not allowed. + For DML statements, setting this option may cause some error + reporting to be deferred until commit time (e.g. validation of + unique constraints). Given this, successful execution of a DML + statement should not be assumed until the transaction commits. + :type partition: bytes :param partition: (Optional) one of the partition tokens returned from :meth:`partition_query`. @@ -198,60 +486,182 @@ def execute_sql( :rtype: :class:`~google.cloud.spanner_v1.streamed.StreamedResultSet` :returns: a result set instance which can be used to consume rows. - :raises ValueError: - for reuse of single-use snapshots, or if a transaction ID is - already pending for multiple-use snapshots. + :type retry: :class:`~google.api_core.retry.Retry` + :param retry: (Optional) The retry settings for this request. + + :type timeout: float + :param timeout: (Optional) The timeout for this request. + + :type data_boost_enabled: + :param data_boost_enabled: + (Optional) If this is for a partitioned query and this field is + set ``true``, the request will be executed via offline access. + If the field is set to ``true`` but the request does not set + ``partition_token``, the API will return an + ``INVALID_ARGUMENT`` error. + + :type directed_read_options: :class:`~google.cloud.spanner_v1.DirectedReadOptions` + or :class:`dict` + :param directed_read_options: (Optional) Request level option used to set the directed_read_options + for all ReadRequests and ExecuteSqlRequests that indicates which replicas + or regions should be used for non-transactional reads or queries. + + :type column_info: dict + :param column_info: (Optional) dict of mapping between column names and additional column information. + An object where column names as keys and custom objects as corresponding + values for deserialization. It's specifically useful for data types like + protobuf where deserialization logic is on user-specific code. When provided, + the custom object enables deserialization of backend-received column data. + If not provided, data remains serialized as bytes for Proto Messages and + integer for Proto Enums. + + :type lazy_decode: bool + :param lazy_decode: + (Optional) If this argument is set to ``true``, the iterator + returns the underlying protobuf values instead of decoded Python + objects. This reduces the time that is needed to iterate through + large result sets. The application is responsible for decoding + the data that is needed. The returned row iterator contains two + functions that can be used for this. ``iterator.decode_row(row)`` + decodes all the columns in the given row to an array of Python + objects. ``iterator.decode_column(row, column_index)`` decodes one + specific column in the given row. 
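For reference, a typical parameterized query through this API — the table and data are hypothetical, and ``database`` is assumed to exist:

.. code-block:: python

    from google.cloud.spanner_v1 import param_types

    with database.snapshot() as snapshot:
        results = snapshot.execute_sql(
            "SELECT SingerId, FirstName FROM Singers "
            "WHERE FirstName = @first_name",
            params={"first_name": "Alice"},
            param_types={"first_name": param_types.STRING},
        )
        for row in results:
            print(row)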
+ + :raises ValueError: if the Transaction already used to execute a + read request, but is not a multi-use transaction or has not begun. """ + if self._read_request_count > 0: if not self._multi_use: raise ValueError("Cannot re-use single-use snapshot.") if self._transaction_id is None: - raise ValueError("Transaction ID pending.") + raise ValueError("Transaction has not begun.") if params is not None: - if param_types is None: - raise ValueError("Specify 'param_types' when passing 'params'.") params_pb = Struct( fields={key: _make_value_pb(value) for key, value in params.items()} ) else: - params_pb = None + params_pb = {} - database = self._session._database - metadata = _metadata_with_prefix(database.name) - transaction = self._make_txn_selector() + session = self._session + database = session._database api = database.spanner_api + metadata = _metadata_with_prefix(database.name) + if not self._read_only and database._route_to_leader_enabled: + metadata.append( + _metadata_with_leader_aware_routing(database._route_to_leader_enabled) + ) + # Query-level options have higher precedence than client-level and # environment-level options default_query_options = database._instance._client._query_options query_options = _merge_query_options(default_query_options, query_options) - restart = functools.partial( - api.execute_streaming_sql, - self._session.name, - sql, - transaction=transaction, + if request_options is None: + request_options = RequestOptions() + elif type(request_options) is dict: + request_options = RequestOptions(request_options) + if self._read_only: + # Transaction tags are not supported for read only transactions. + request_options.transaction_tag = None + if ( + directed_read_options is None + and database._directed_read_options is not None + ): + directed_read_options = database._directed_read_options + elif self.transaction_tag is not None: + request_options.transaction_tag = self.transaction_tag + + execute_sql_request = ExecuteSqlRequest( + session=session.name, + sql=sql, params=params_pb, param_types=param_types, query_mode=query_mode, partition_token=partition, - seqno=self._execute_sql_count, + seqno=self._execute_sql_request_count, query_options=query_options, + request_options=request_options, + last_statement=last_statement, + data_boost_enabled=data_boost_enabled, + directed_read_options=directed_read_options, + ) + + execute_streaming_sql_method = functools.partial( + api.execute_streaming_sql, + request=execute_sql_request, metadata=metadata, retry=retry, timeout=timeout, ) - iterator = _restart_on_unavailable(restart) + return self._get_streamed_result_set( + method=execute_streaming_sql_method, + request=execute_sql_request, + metadata=metadata, + trace_attributes={"db.statement": sql}, + column_info=column_info, + lazy_decode=lazy_decode, + ) + + def _get_streamed_result_set( + self, + method, + request, + metadata, + trace_attributes, + column_info, + lazy_decode, + ): + """Returns the streamed result set for a read or execute SQL request with the given arguments.""" + + session = self._session + database = session._database + + is_execute_sql_request = isinstance(request, ExecuteSqlRequest) + + trace_method_name = "execute_sql" if is_execute_sql_request else "read" + trace_name = f"CloudSpanner.{type(self).__name__}.{trace_method_name}" + + # If this request begins the transaction, we need to lock + # the transaction until the transaction ID is updated. 
+ is_inline_begin = False + + if self._transaction_id is None: + is_inline_begin = True + self._lock.acquire() + + iterator = _restart_on_unavailable( + method=method, + request=request, + session=session, + metadata=metadata, + trace_name=trace_name, + attributes=trace_attributes, + transaction=self, + observability_options=getattr(database, "observability_options", None), + request_id_manager=database, + ) + + if is_inline_begin: + self._lock.release() + if is_execute_sql_request: + self._execute_sql_request_count += 1 self._read_request_count += 1 - self._execute_sql_count += 1 + + streamed_result_set_args = { + "response_iterator": iterator, + "column_info": column_info, + "lazy_decode": lazy_decode, + } if self._multi_use: - return StreamedResultSet(iterator, source=self) - else: - return StreamedResultSet(iterator) + streamed_result_set_args["source"] = self + + return StreamedResultSet(**streamed_result_set_args) def partition_read( self, @@ -261,8 +671,11 @@ def partition_read( index="", partition_size_bytes=None, max_partitions=None, + *, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, ): - """Perform a ``ParitionRead`` API request for rows in a table. + """Perform a ``PartitionRead`` API request for rows in a table. :type table: str :param table: name of the table from which to fetch data @@ -288,38 +701,83 @@ def partition_read( service uses this as a hint, the actual number of partitions may differ. + :type retry: :class:`~google.api_core.retry.Retry` + :param retry: (Optional) The retry settings for this request. + + :type timeout: float + :param timeout: (Optional) The timeout for this request. + :rtype: iterable of bytes :returns: a sequence of partition tokens - :raises ValueError: - for single-use snapshots, or if a transaction ID is - already associtated with the snapshot. + :raises ValueError: if the transaction has not begun or is single-use. 
""" - if not self._multi_use: - raise ValueError("Cannot use single-use snapshot.") if self._transaction_id is None: - raise ValueError("Transaction not started.") + raise ValueError("Transaction has not begun.") + if not self._multi_use: + raise ValueError("Cannot partition a single-use transaction.") - database = self._session._database + session = self._session + database = session._database api = database.spanner_api + metadata = _metadata_with_prefix(database.name) - transaction = self._make_txn_selector() + if database._route_to_leader_enabled: + metadata.append( + _metadata_with_leader_aware_routing(database._route_to_leader_enabled) + ) + transaction = self._build_transaction_selector_pb() partition_options = PartitionOptions( partition_size_bytes=partition_size_bytes, max_partitions=max_partitions ) - response = api.partition_read( - session=self._session.name, + partition_read_request = PartitionReadRequest( + session=session.name, table=table, columns=columns, key_set=keyset._to_pb(), transaction=transaction, index=index, partition_options=partition_options, - metadata=metadata, ) + trace_attributes = {"table_id": table, "columns": columns} + can_include_index = (index != "") and (index is not None) + if can_include_index: + trace_attributes["index"] = index + + with trace_call( + f"CloudSpanner.{type(self).__name__}.partition_read", + session, + extra_attributes=trace_attributes, + observability_options=getattr(database, "observability_options", None), + metadata=metadata, + ) as span, MetricsCapture(): + nth_request = getattr(database, "_next_nth_request", 0) + attempt = AtomicCounter() + + def attempt_tracking_method(): + all_metadata = database.metadata_with_request_id( + nth_request, + attempt.increment(), + metadata, + span, + ) + partition_read_method = functools.partial( + api.partition_read, + request=partition_read_request, + metadata=all_metadata, + retry=retry, + timeout=timeout, + ) + return partition_read_method() + + response = _retry( + attempt_tracking_method, + allowed_exceptions={InternalServerError: _check_rst_stream_error}, + ) + return [partition.partition_token for partition in response.partitions] def partition_query( @@ -329,8 +787,11 @@ def partition_query( param_types=None, partition_size_bytes=None, max_partitions=None, + *, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, ): - """Perform a ``ParitionQuery`` API request. + """Perform a ``PartitionQuery`` API request. :type sql: str :param sql: SQL query statement @@ -355,48 +816,254 @@ def partition_query( service uses this as a hint, the actual number of partitions may differ. + :type retry: :class:`~google.api_core.retry.Retry` + :param retry: (Optional) The retry settings for this request. + + :type timeout: float + :param timeout: (Optional) The timeout for this request. + :rtype: iterable of bytes :returns: a sequence of partition tokens - :raises ValueError: - for single-use snapshots, or if a transaction ID is - already associtated with the snapshot. + :raises ValueError: if the transaction has not begun or is single-use. 
""" - if not self._multi_use: - raise ValueError("Cannot use single-use snapshot.") if self._transaction_id is None: - raise ValueError("Transaction not started.") + raise ValueError("Transaction has not begun.") + if not self._multi_use: + raise ValueError("Cannot partition a single-use transaction.") if params is not None: - if param_types is None: - raise ValueError("Specify 'param_types' when passing 'params'.") params_pb = Struct( - fields={key: _make_value_pb(value) for key, value in params.items()} + fields={key: _make_value_pb(value) for (key, value) in params.items()} ) else: - params_pb = None + params_pb = Struct() - database = self._session._database + session = self._session + database = session._database api = database.spanner_api + metadata = _metadata_with_prefix(database.name) - transaction = self._make_txn_selector() + if database._route_to_leader_enabled: + metadata.append( + _metadata_with_leader_aware_routing(database._route_to_leader_enabled) + ) + transaction = self._build_transaction_selector_pb() partition_options = PartitionOptions( partition_size_bytes=partition_size_bytes, max_partitions=max_partitions ) - response = api.partition_query( - session=self._session.name, + partition_query_request = PartitionQueryRequest( + session=session.name, sql=sql, transaction=transaction, params=params_pb, param_types=param_types, partition_options=partition_options, - metadata=metadata, ) + trace_attributes = {"db.statement": sql} + with trace_call( + f"CloudSpanner.{type(self).__name__}.partition_query", + session, + trace_attributes, + observability_options=getattr(database, "observability_options", None), + metadata=metadata, + ) as span, MetricsCapture(): + nth_request = getattr(database, "_next_nth_request", 0) + attempt = AtomicCounter() + + def attempt_tracking_method(): + all_metadata = database.metadata_with_request_id( + nth_request, + attempt.increment(), + metadata, + span, + ) + partition_query_method = functools.partial( + api.partition_query, + request=partition_query_request, + metadata=all_metadata, + retry=retry, + timeout=timeout, + ) + return partition_query_method() + + response = _retry( + attempt_tracking_method, + allowed_exceptions={InternalServerError: _check_rst_stream_error}, + ) + return [partition.partition_token for partition in response.partitions] + def _begin_transaction(self, mutation: Mutation = None) -> bytes: + """Begins a transaction on the database. + + :type mutation: :class:`~google.cloud.spanner_v1.mutation.Mutation` + :param mutation: (Optional) Mutation to include in the begin transaction + request. Required for mutation-only transactions with multiplexed sessions. + + :rtype: bytes + :returns: identifier for the transaction. + + :raises ValueError: if the transaction has already begun or is single-use. 
+ """ + + if self._transaction_id is not None: + raise ValueError("Transaction has already begun.") + if not self._multi_use: + raise ValueError("Cannot begin a single-use transaction.") + if self._read_request_count > 0: + raise ValueError("Read-only transaction already pending") + + session = self._session + database = session._database + api = database.spanner_api + + metadata = _metadata_with_prefix(database.name) + if not self._read_only and database._route_to_leader_enabled: + metadata.append( + (_metadata_with_leader_aware_routing(database._route_to_leader_enabled)) + ) + + with trace_call( + name=f"CloudSpanner.{type(self).__name__}.begin", + session=session, + observability_options=getattr(database, "observability_options", None), + metadata=metadata, + ) as span, MetricsCapture(): + nth_request = getattr(database, "_next_nth_request", 0) + attempt = AtomicCounter() + + def wrapped_method(): + begin_transaction_request = BeginTransactionRequest( + session=session.name, + options=self._build_transaction_selector_pb().begin, + mutation_key=mutation, + ) + begin_transaction_method = functools.partial( + api.begin_transaction, + request=begin_transaction_request, + metadata=database.metadata_with_request_id( + nth_request, + attempt.increment(), + metadata, + span, + ), + ) + return begin_transaction_method() + + def before_next_retry(nth_retry, delay_in_seconds): + add_span_event( + span=span, + event_name="Transaction Begin Attempt Failed. Retrying", + event_attributes={ + "attempt": nth_retry, + "sleep_seconds": delay_in_seconds, + }, + ) + + # An aborted transaction may be raised by a mutations-only + # transaction with a multiplexed session. + transaction_pb: Transaction = _retry( + wrapped_method, + before_next_retry=before_next_retry, + allowed_exceptions={ + InternalServerError: _check_rst_stream_error, + Aborted: None, + }, + ) + + self._update_for_transaction_pb(transaction_pb) + return self._transaction_id + + def _build_transaction_options_pb(self) -> TransactionOptions: + """Builds and returns the transaction options for this snapshot. + + :rtype: :class:`transaction_pb2.TransactionOptions` + :returns: the transaction options for this snapshot. + """ + raise NotImplementedError + + def _build_transaction_selector_pb(self) -> TransactionSelector: + """Builds and returns a transaction selector for this snapshot. + + :rtype: :class:`transaction_pb2.TransactionSelector` + :returns: a transaction selector for this snapshot. + """ + + # Select a previously begun transaction. + if self._transaction_id is not None: + return TransactionSelector(id=self._transaction_id) + + options = self._build_transaction_options_pb() + + # Select a single-use transaction. + if not self._multi_use: + return TransactionSelector(single_use=options) + + # Select a new, multi-use transaction. + return TransactionSelector(begin=options) + + def _update_for_result_set_pb( + self, result_set_pb: Union[ResultSet, PartialResultSet] + ) -> None: + """Updates the snapshot for the given result set. + + :type result_set_pb: :class:`~google.cloud.spanner_v1.ResultSet` or + :class:`~google.cloud.spanner_v1.PartialResultSet` + :param result_set_pb: The result set to update the snapshot with. + """ + + if result_set_pb.metadata and result_set_pb.metadata.transaction: + self._update_for_transaction_pb(result_set_pb.metadata.transaction) + + def _update_for_transaction_pb(self, transaction_pb: Transaction) -> None: + """Updates the snapshot for the given transaction. 
+ + :type transaction_pb: :class:`~google.cloud.spanner_v1.Transaction` + :param transaction_pb: The transaction to update the snapshot with. + """ + + # The transaction ID should only be updated when the transaction is + # begun: either explicitly with a begin transaction request, or implicitly + # with read, execute SQL, batch update, or execute update requests. The + # caller is responsible for locking until the transaction ID is updated. + if self._transaction_id is None and transaction_pb.id: + self._transaction_id = transaction_pb.id + + if transaction_pb._pb.HasField("precommit_token"): + self._update_for_precommit_token_pb_unsafe(transaction_pb.precommit_token) + + def _update_for_precommit_token_pb( + self, precommit_token_pb: MultiplexedSessionPrecommitToken + ) -> None: + """Updates the snapshot for the given multiplexed session precommit token. + :type precommit_token_pb: :class:`~google.cloud.spanner_v1.MultiplexedSessionPrecommitToken` + :param precommit_token_pb: The multiplexed session precommit token to update the snapshot with. + """ + + # Because multiple threads can be used to perform operations within a + # transaction, we need to use a lock when updating the precommit token. + with self._lock: + self._update_for_precommit_token_pb_unsafe(precommit_token_pb) + + def _update_for_precommit_token_pb_unsafe( + self, precommit_token_pb: MultiplexedSessionPrecommitToken + ) -> None: + """Updates the snapshot for the given multiplexed session precommit token. + This method is unsafe because it does not acquire a lock before updating + the precommit token. It should only be used when the caller has already + acquired the lock. + :type precommit_token_pb: :class:`~google.cloud.spanner_v1.MultiplexedSessionPrecommitToken` + :param precommit_token_pb: The multiplexed session precommit token to update the snapshot with. + """ + if self._precommit_token is None or ( + precommit_token_pb.seq_num > self._precommit_token.seq_num + ): + self._precommit_token = precommit_token_pb + class Snapshot(_SnapshotBase): """Allow a set of reads / SQL statements with shared staleness. @@ -408,7 +1075,7 @@ class Snapshot(_SnapshotBase): at a timestamp where all previously committed transactions are visible. :type session: :class:`~google.cloud.spanner_v1.session.Session` - :param session: the session used to perform the commit. + :param session: The session used to perform the commit. :type read_timestamp: :class:`datetime.datetime` :param read_timestamp: Execute all reads at the given timestamp. @@ -426,7 +1093,7 @@ class Snapshot(_SnapshotBase): ``exact_staleness`` old. :type multi_use: :class:`bool` - :param multi_use: If true, multipl :meth:`read` / :meth:`execute_sql` + :param multi_use: If true, multiple :meth:`read` / :meth:`execute_sql` calls can be performed with the snapshot in the context of a read-only transaction, used to ensure isolation / consistency. 
Incompatible with @@ -441,6 +1108,7 @@ def __init__( max_staleness=None, exact_staleness=None, multi_use=False, + transaction_id=None, ): super(Snapshot, self).__init__(session) opts = [read_timestamp, min_read_timestamp, max_staleness, exact_staleness] @@ -456,67 +1124,46 @@ def __init__( "'min_read_timestamp' / 'max_staleness'" ) + self._transaction_read_timestamp = None self._strong = len(flagged) == 0 self._read_timestamp = read_timestamp self._min_read_timestamp = min_read_timestamp self._max_staleness = max_staleness self._exact_staleness = exact_staleness self._multi_use = multi_use + self._transaction_id = transaction_id - def _make_txn_selector(self): - """Helper for :meth:`read`.""" - if self._transaction_id is not None: - return TransactionSelector(id=self._transaction_id) + def _build_transaction_options_pb(self) -> TransactionOptions: + """Builds and returns transaction options for this snapshot. + + :rtype: :class:`transaction_pb2.TransactionOptions` + :returns: transaction options for this snapshot. + """ + + read_only_pb_args = dict(return_read_timestamp=True) if self._read_timestamp: - key = "read_timestamp" - value = _datetime_to_pb_timestamp(self._read_timestamp) + read_only_pb_args["read_timestamp"] = self._read_timestamp elif self._min_read_timestamp: - key = "min_read_timestamp" - value = _datetime_to_pb_timestamp(self._min_read_timestamp) + read_only_pb_args["min_read_timestamp"] = self._min_read_timestamp elif self._max_staleness: - key = "max_staleness" - value = _timedelta_to_duration_pb(self._max_staleness) + read_only_pb_args["max_staleness"] = self._max_staleness elif self._exact_staleness: - key = "exact_staleness" - value = _timedelta_to_duration_pb(self._exact_staleness) + read_only_pb_args["exact_staleness"] = self._exact_staleness else: - key = "strong" - value = True - - options = TransactionOptions( - read_only=TransactionOptions.ReadOnly(**{key: value}) - ) + read_only_pb_args["strong"] = True - if self._multi_use: - return TransactionSelector(begin=options) - else: - return TransactionSelector(single_use=options) + read_only_pb = TransactionOptions.ReadOnly(**read_only_pb_args) + return TransactionOptions(read_only=read_only_pb) - def begin(self): - """Begin a read-only transaction on the database. + def _update_for_transaction_pb(self, transaction_pb: Transaction) -> None: + """Updates the snapshot for the given transaction. - :rtype: bytes - :returns: the ID for the newly-begun transaction. - - :raises ValueError: - if the transaction is already begun, committed, or rolled back. + :type transaction_pb: :class:`~google.cloud.spanner_v1.Transaction` + :param transaction_pb: The transaction to update the snapshot with. 
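The staleness flavors accepted by ``Snapshot`` map one-to-one onto the ``TransactionOptions.ReadOnly`` fields built above. For illustration — ``database`` is assumed to exist, and the options remain mutually exclusive:

.. code-block:: python

    import datetime

    staleness = datetime.timedelta(seconds=15)

    # Strong read: observes all previously committed transactions.
    with database.snapshot() as snapshot:
        rows = list(snapshot.execute_sql("SELECT 1"))

    # Exact staleness: reads at a timestamp exactly 15 seconds old.
    with database.snapshot(exact_staleness=staleness) as snapshot:
        rows = list(snapshot.execute_sql("SELECT 1"))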
""" - if not self._multi_use: - raise ValueError("Cannot call 'begin' on single-use snapshots") - if self._transaction_id is not None: - raise ValueError("Read-only transaction already begun") + super(Snapshot, self)._update_for_transaction_pb(transaction_pb) - if self._read_request_count > 0: - raise ValueError("Read-only transaction already pending") - - database = self._session._database - api = database.spanner_api - metadata = _metadata_with_prefix(database.name) - txn_selector = self._make_txn_selector() - response = api.begin_transaction( - self._session.name, txn_selector.begin, metadata=metadata - ) - self._transaction_id = response.id - return self._transaction_id + if transaction_pb.read_timestamp is not None: + self._transaction_read_timestamp = transaction_pb.read_timestamp diff --git a/google/cloud/spanner_v1/streamed.py b/google/cloud/spanner_v1/streamed.py index dbb4e0dbc0..c41e65d39f 100644 --- a/google/cloud/spanner_v1/streamed.py +++ b/google/cloud/spanner_v1/streamed.py @@ -14,16 +14,14 @@ """Wrapper for streaming results.""" +from google.cloud import exceptions from google.protobuf.struct_pb2 import ListValue from google.protobuf.struct_pb2 import Value -from google.cloud import exceptions -from google.cloud.spanner_v1.proto import type_pb2 -import six - -# pylint: disable=ungrouped-imports -from google.cloud.spanner_v1._helpers import _parse_value_pb -# pylint: enable=ungrouped-imports +from google.cloud.spanner_v1 import PartialResultSet +from google.cloud.spanner_v1 import ResultSetMetadata +from google.cloud.spanner_v1 import TypeCode +from google.cloud.spanner_v1._helpers import _get_type_decoder, _parse_nullable class StreamedResultSet(object): @@ -32,28 +30,36 @@ class StreamedResultSet(object): :type response_iterator: :param response_iterator: Iterator yielding - :class:`~google.cloud.spanner_v1.proto.result_set_pb2.PartialResultSet` + :class:`~google.cloud.spanner_v1.types.PartialResultSet` instances. :type source: :class:`~google.cloud.spanner_v1.snapshot.Snapshot` - :param source: Snapshot from which the result set was fetched. + :param source: Deprecated. Snapshot from which the result set was fetched. """ - def __init__(self, response_iterator, source=None): + def __init__( + self, + response_iterator, + source=None, + column_info=None, + lazy_decode: bool = False, + ): self._response_iterator = response_iterator self._rows = [] # Fully-processed rows - self._counter = 0 # Counter for processed responses self._metadata = None # Until set from first PRS self._stats = None # Until set from last PRS self._current_row = [] # Accumulated values for incomplete row self._pending_chunk = None # Incomplete value - self._source = source # Source snapshot + self._column_info = column_info # Column information + self._field_decoders = None + self._lazy_decode = lazy_decode # Return protobuf values + self._done = False @property def fields(self): """Field descriptors for result set columns. - :rtype: list of :class:`~google.cloud.spanner_v1.proto.type_pb2.Field` + :rtype: list of :class:`~google.cloud.spanner_v1.types.StructType.Field` :returns: list of fields describing column names / types. 
""" return self._metadata.row_type.fields @@ -62,21 +68,34 @@ def fields(self): def metadata(self): """Result set metadata - :rtype: :class:`~.result_set_pb2.ResultSetMetadata` + :rtype: :class:`~google.cloud.spanner_v1.types.ResultSetMetadata` :returns: structure describing the results """ - return self._metadata + if self._metadata: + return ResultSetMetadata.wrap(self._metadata) + return None @property def stats(self): """Result set statistics :rtype: - :class:`~google.cloud.spanner_v1.proto.result_set_pb2.ResultSetStats` + :class:`~google.cloud.spanner_v1.types.ResultSetStats` :returns: structure describing status about the response """ return self._stats + @property + def _decoders(self): + if self._field_decoders is None: + if self._metadata is None: + raise ValueError("iterator not started") + self._field_decoders = [ + _get_type_decoder(field.type_, field.name, self._column_info) + for field in self.fields + ] + return self._field_decoders + def _merge_chunk(self, value): """Merge pending chunk with next value. @@ -89,7 +108,7 @@ def _merge_chunk(self, value): """ current_column = len(self._current_row) field = self.fields[current_column] - merged = _merge_by_type(self._pending_chunk, value, field.type) + merged = _merge_by_type(self._pending_chunk, value, field.type_) self._pending_chunk = None return merged @@ -99,53 +118,85 @@ def _merge_values(self, values): :type values: list of :class:`~google.protobuf.struct_pb2.Value` :param values: non-chunked values from partial result set. """ + decoders = self._decoders width = len(self.fields) + index = len(self._current_row) for value in values: - index = len(self._current_row) - field = self.fields[index] - self._current_row.append(_parse_value_pb(value, field.type)) - if len(self._current_row) == width: + if self._lazy_decode: + self._current_row.append(value) + else: + self._current_row.append(_parse_nullable(value, decoders[index])) + index += 1 + if index == width: self._rows.append(self._current_row) self._current_row = [] + index = 0 def _consume_next(self): """Consume the next partial result set from the stream. Parse the result set into new/existing rows in :attr:`_rows` """ - response = six.next(self._response_iterator) - self._counter += 1 + response = next(self._response_iterator) + response_pb = PartialResultSet.pb(response) if self._metadata is None: # first response - metadata = self._metadata = response.metadata + self._metadata = response_pb.metadata - source = self._source - if source is not None and source._transaction_id is None: - source._transaction_id = metadata.transaction.id - - if response.HasField("stats"): # last response + if response_pb.HasField("stats"): # last response self._stats = response.stats - values = list(response.values) + values = list(response_pb.values) if self._pending_chunk is not None: values[0] = self._merge_chunk(values[0]) - if response.chunked_value: + if response_pb.chunked_value: self._pending_chunk = values.pop() self._merge_values(values) + if response_pb.last: + self._done = True + def __iter__(self): - iter_rows, self._rows[:] = self._rows[:], () while True: - if not iter_rows: - try: - self._consume_next() - except StopIteration: - return - iter_rows, self._rows[:] = self._rows[:], () + iter_rows, self._rows[:] = self._rows[:], () while iter_rows: yield iter_rows.pop(0) + if self._done: + return + try: + self._consume_next() + except StopIteration: + return + + def decode_row(self, row: []) -> []: + """Decodes a row from protobuf values to Python objects. 
This function + should only be called for result sets that use ``lazy_decode=True``. + The array that is returned by this function is the same as the array + that would have been returned by the rows iterator if ``lazy_decode=False``. + + :returns: an array containing the decoded values of all the columns in the given row + """ + if not hasattr(row, "__len__"): + raise TypeError("row", "row must be an array of protobuf values") + decoders = self._decoders + return [ + _parse_nullable(row[index], decoders[index]) for index in range(len(row)) + ] + + def decode_column(self, row: [], column_index: int): + """Decodes a column from a protobuf value to a Python object. This function + should only be called for result sets that use ``lazy_decode=True``. + The object that is returned by this function is the same as the object + that would have been returned by the rows iterator if ``lazy_decode=False``. + + :returns: the decoded column value + """ + if not hasattr(row, "__len__"): + raise TypeError("row", "row must be an array of protobuf values") + decoders = self._decoders + return _parse_nullable(row[column_index], decoders[column_index]) def one(self): """Return exactly one result, or raise an exception. @@ -191,6 +242,27 @@ def one_or_none(self): except StopIteration: return answer + def to_dict_list(self): + """Return the result of a query as a list of dictionaries. + In each dictionary the key is the column name and the value is the + value of that column in a given row. + + :rtype: + :class:`list of dict` + :returns: result rows as a list of dictionaries + """ + rows = [] + for row in self: + rows.append( + { + column: value + for column, value in zip( + [column.name for column in self._metadata.row_type.fields], row + ) + } + ) + return rows + + class Unmergeable(ValueError): """Unable to merge two values.
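For consumers that prefer dictionaries over positional rows, ``to_dict_list`` pairs each value with its column name. For example (the table and columns are hypothetical; ``database`` is assumed to exist):

.. code-block:: python

    with database.snapshot() as snapshot:
        results = snapshot.execute_sql(
            "SELECT SingerId, FirstName FROM Singers"
        )
        rows = results.to_dict_list()
        # e.g. [{"SingerId": 1, "FirstName": "Alice"}, ...]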
@@ -201,13 +273,13 @@ class Unmergeable(ValueError): :type rhs: :class:`~google.protobuf.struct_pb2.Value` :param rhs: remaining value to be merged - :type type_: :class:`~google.cloud.spanner_v1.proto.type_pb2.Type` + :type type_: :class:`~google.cloud.spanner_v1.types.Type` :param type_: field type of values being merged """ def __init__(self, lhs, rhs, type_): message = "Cannot merge %s values: %s %s" % ( - type_pb2.TypeCode.Name(type_.code), + TypeCode(type_.code), lhs, rhs, ) @@ -219,7 +291,7 @@ def _unmergeable(lhs, rhs, type_): raise Unmergeable(lhs, rhs, type_) -def _merge_float64(lhs, rhs, type_): # pylint: disable=unused-argument +def _merge_float64(lhs, rhs, type_): """Helper for '_merge_by_type'.""" lhs_kind = lhs.WhichOneof("kind") if lhs_kind == "string_value": @@ -235,12 +307,12 @@ def _merge_float64(lhs, rhs, type_): # pylint: disable=unused-argument raise Unmergeable(lhs, rhs, type_) -def _merge_string(lhs, rhs, type_): # pylint: disable=unused-argument +def _merge_string(lhs, rhs, type_): """Helper for '_merge_by_type'.""" return Value(string_value=lhs.string_value + rhs.string_value) -_UNMERGEABLE_TYPES = (type_pb2.BOOL,) +_UNMERGEABLE_TYPES = (TypeCode.BOOL,) def _merge_array(lhs, rhs, type_): @@ -262,13 +334,17 @@ def _merge_array(lhs, rhs, type_): lhs.append(first) else: last = lhs.pop() - try: - merged = _merge_by_type(last, first, element_type) - except Unmergeable: + if last.HasField("null_value"): lhs.append(last) lhs.append(first) else: - lhs.append(merged) + try: + merged = _merge_by_type(last, first, element_type) + except Unmergeable: + lhs.append(last) + lhs.append(first) + else: + lhs.append(merged) return Value(list_value=ListValue(values=(lhs + rhs))) @@ -282,26 +358,42 @@ def _merge_struct(lhs, rhs, type_): if not len(lhs) or not len(rhs): return Value(list_value=ListValue(values=(lhs + rhs))) - candidate_type = fields[len(lhs) - 1].type + candidate_type = fields[len(lhs) - 1].type_ first = rhs.pop(0) if first.HasField("null_value") or candidate_type.code in _UNMERGEABLE_TYPES: lhs.append(first) else: last = lhs.pop() - lhs.append(_merge_by_type(last, first, candidate_type)) + if last.HasField("null_value"): + lhs.append(last) + lhs.append(first) + else: + try: + merged = _merge_by_type(last, first, candidate_type) + except Unmergeable: + lhs.append(last) + lhs.append(first) + else: + lhs.append(merged) return Value(list_value=ListValue(values=lhs + rhs)) _MERGE_BY_TYPE = { - type_pb2.ARRAY: _merge_array, - type_pb2.BOOL: _unmergeable, - type_pb2.BYTES: _merge_string, - type_pb2.DATE: _merge_string, - type_pb2.FLOAT64: _merge_float64, - type_pb2.INT64: _merge_string, - type_pb2.STRING: _merge_string, - type_pb2.STRUCT: _merge_struct, - type_pb2.TIMESTAMP: _merge_string, + TypeCode.ARRAY: _merge_array, + TypeCode.BOOL: _unmergeable, + TypeCode.BYTES: _merge_string, + TypeCode.DATE: _merge_string, + TypeCode.FLOAT64: _merge_float64, + TypeCode.FLOAT32: _merge_float64, + TypeCode.INT64: _merge_string, + TypeCode.STRING: _merge_string, + TypeCode.STRUCT: _merge_struct, + TypeCode.TIMESTAMP: _merge_string, + TypeCode.NUMERIC: _merge_string, + TypeCode.JSON: _merge_string, + TypeCode.PROTO: _merge_string, + TypeCode.INTERVAL: _merge_string, + TypeCode.ENUM: _merge_string, } diff --git a/google/cloud/spanner_v1/table.py b/google/cloud/spanner_v1/table.py new file mode 100644 index 0000000000..c072775f43 --- /dev/null +++ b/google/cloud/spanner_v1/table.py @@ -0,0 +1,181 @@ +# Copyright 2021 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the 
"License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""User friendly container for Cloud Spanner Table.""" + +from google.cloud.exceptions import NotFound + +from google.cloud.spanner_admin_database_v1 import DatabaseDialect +from google.cloud.spanner_v1.types import ( + Type, + TypeCode, +) + + +_EXISTS_TEMPLATE = """ +SELECT EXISTS( + SELECT TABLE_NAME + FROM INFORMATION_SCHEMA.TABLES + {} +) +""" +_GET_SCHEMA_TEMPLATE = "SELECT * FROM {} LIMIT 0" + + +class Table(object): + """Representation of a Cloud Spanner Table. + + :type table_id: str + :param table_id: The ID of the table. + + :type database: :class:`~google.cloud.spanner_v1.database.Database` + :param database: The database that owns the table. + """ + + def __init__(self, table_id, database, schema_name=None): + if schema_name is None: + self._schema_name = database.default_schema_name + else: + self._schema_name = schema_name + self._table_id = table_id + self._database = database + + # Calculated properties. + self._schema = None + + @property + def schema_name(self): + """The schema name of the table used in SQL. + + :rtype: str + :returns: The table schema name. + """ + return self._schema_name + + @property + def table_id(self): + """The ID of the table used in SQL. + + :rtype: str + :returns: The table ID. + """ + return self._table_id + + @property + def qualified_table_name(self): + """The qualified name of the table used in SQL. + + :rtype: str + :returns: The qualified table name. + """ + if self.schema_name == self._database.default_schema_name: + return self._quote_identifier(self.table_id) + return "{}.{}".format( + self._quote_identifier(self.schema_name), + self._quote_identifier(self.table_id), + ) + + def _quote_identifier(self, identifier): + """Quotes the given identifier using the rules of the dialect of the database of this table. + + :rtype: str + :returns: The quoted identifier. + """ + if self._database.database_dialect == DatabaseDialect.POSTGRESQL: + return '"{}"'.format(identifier) + return "`{}`".format(identifier) + + def exists(self): + """Test whether this table exists. + + :rtype: bool + :returns: True if the table exists, else false. + """ + with self._database.snapshot() as snapshot: + return self._exists(snapshot) + + def _exists(self, snapshot): + """Query to check that the table exists. + + :type snapshot: :class:`~google.cloud.spanner_v1.snapshot.Snapshot` + :param snapshot: snapshot to use for database queries + + :rtype: bool + :returns: True if the table exists, else false. 
+ """ + if self._database.database_dialect == DatabaseDialect.POSTGRESQL: + results = snapshot.execute_sql( + sql=_EXISTS_TEMPLATE.format( + "WHERE TABLE_SCHEMA=$1 AND TABLE_NAME = $2" + ), + params={"p1": self.schema_name, "p2": self.table_id}, + param_types={ + "p1": Type(code=TypeCode.STRING), + "p2": Type(code=TypeCode.STRING), + }, + ) + else: + results = snapshot.execute_sql( + sql=_EXISTS_TEMPLATE.format( + "WHERE TABLE_SCHEMA = @schema_name AND TABLE_NAME = @table_id" + ), + params={"schema_name": self.schema_name, "table_id": self.table_id}, + param_types={ + "schema_name": Type(code=TypeCode.STRING), + "table_id": Type(code=TypeCode.STRING), + }, + ) + return next(iter(results))[0] + + @property + def schema(self): + """The schema of this table. + + :rtype: list of :class:`~google.cloud.spanner_v1.types.StructType.Field` + :returns: The table schema. + """ + if self._schema is None: + with self._database.snapshot() as snapshot: + self._schema = self._get_schema(snapshot) + return self._schema + + def _get_schema(self, snapshot): + """Get the schema of this table. + + :type snapshot: :class:`~google.cloud.spanner_v1.snapshot.Snapshot` + :param snapshot: snapshot to use for database queries + + :rtype: list of :class:`~google.cloud.spanner_v1.types.StructType.Field` + :returns: The table schema. + """ + query = _GET_SCHEMA_TEMPLATE.format(self.qualified_table_name) + results = snapshot.execute_sql(query) + # Start iterating to force the schema to download. + try: + next(iter(results)) + except StopIteration: + pass + return list(results.fields) + + def reload(self): + """Reload this table. + + Refresh any configured schema into :attr:`schema`. + + :raises NotFound: if the table does not exist + """ + with self._database.snapshot() as snapshot: + if not self._exists(snapshot): + raise NotFound("table '{}' does not exist".format(self.table_id)) + self._schema = self._get_schema(snapshot) diff --git a/google/cloud/spanner_admin_instance_v1/gapic/transports/__init__.py b/google/cloud/spanner_v1/testing/__init__.py similarity index 100% rename from google/cloud/spanner_admin_instance_v1/gapic/transports/__init__.py rename to google/cloud/spanner_v1/testing/__init__.py diff --git a/google/cloud/spanner_v1/testing/database_test.py b/google/cloud/spanner_v1/testing/database_test.py new file mode 100644 index 0000000000..5af89fea42 --- /dev/null +++ b/google/cloud/spanner_v1/testing/database_test.py @@ -0,0 +1,121 @@ +# Copyright 2023 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+import grpc + +from google.api_core import grpc_helpers +import google.auth.credentials +from google.cloud.spanner_admin_database_v1 import DatabaseDialect +from google.cloud.spanner_v1 import SpannerClient +from google.cloud.spanner_v1.database import Database, SPANNER_DATA_SCOPE +from google.cloud.spanner_v1.services.spanner.transports import ( + SpannerGrpcTransport, + SpannerTransport, +) +from google.cloud.spanner_v1.testing.interceptors import ( + MethodCountInterceptor, + MethodAbortInterceptor, + XGoogRequestIDHeaderInterceptor, +) + + +class TestDatabase(Database): + """Representation of a Cloud Spanner Database, used only in system tests. + + The gRPC client currently has no hook for registering interceptors, and we + do not want to change the production Database class purely for testing, so + this subclass works around that by injecting the interceptors directly + into the gRPC channel.""" + + _interceptors = [] + + def __init__( + self, + database_id, + instance, + ddl_statements=(), + pool=None, + logger=None, + encryption_config=None, + database_dialect=DatabaseDialect.DATABASE_DIALECT_UNSPECIFIED, + database_role=None, + enable_drop_protection=False, + ): + super().__init__( + database_id, + instance, + ddl_statements, + pool, + logger, + encryption_config, + database_dialect, + database_role, + enable_drop_protection, + ) + + self._method_count_interceptor = MethodCountInterceptor() + self._method_abort_interceptor = MethodAbortInterceptor() + self._interceptors = [ + self._method_count_interceptor, + self._method_abort_interceptor, + ] + # Only set when running against the emulator; see spanner_api below. + self._x_goog_request_id_interceptor = None + + @property + def spanner_api(self): + """Helper for session-related API calls.""" + if self._spanner_api is None: + client = self._instance._client + client_info = client._client_info + client_options = client._client_options + if self._instance.emulator_host is not None: + channel = grpc.insecure_channel(self._instance.emulator_host) + self._x_goog_request_id_interceptor = XGoogRequestIDHeaderInterceptor() + self._interceptors.append(self._x_goog_request_id_interceptor) + channel = grpc.intercept_channel(channel, *self._interceptors) + transport = SpannerGrpcTransport(channel=channel) + self._spanner_api = SpannerClient( + client_info=client_info, + transport=transport, + ) + return self._spanner_api + credentials = client.credentials + if isinstance(credentials, google.auth.credentials.Scoped): + credentials = credentials.with_scopes((SPANNER_DATA_SCOPE,)) + self._spanner_api = self._create_spanner_client_for_tests( + client_options, + credentials, + ) + return self._spanner_api + + def _create_spanner_client_for_tests(self, client_options, credentials): + ( + api_endpoint, + client_cert_source_func, + ) = SpannerClient.get_mtls_endpoint_and_cert_source(client_options) + channel = grpc_helpers.create_channel( + api_endpoint, + credentials=credentials, + credentials_file=client_options.credentials_file, + quota_project_id=client_options.quota_project_id, + default_scopes=SpannerTransport.AUTH_SCOPES, + scopes=client_options.scopes, + default_host=SpannerTransport.DEFAULT_HOST, + ) + channel = grpc.intercept_channel(channel, *self._interceptors) + transport = SpannerGrpcTransport(channel=channel) + return SpannerClient( + client_options=client_options, + transport=transport, + ) + + def reset(self): + if self._x_goog_request_id_interceptor is not None: + self._x_goog_request_id_interceptor.reset() diff --git a/google/cloud/spanner_v1/testing/interceptors.py b/google/cloud/spanner_v1/testing/interceptors.py new file mode 100644 index 0000000000..fd05a6d4b3 --- /dev/null +++ 
b/google/cloud/spanner_v1/testing/interceptors.py @@ -0,0 +1,118 @@ +# Copyright 2023 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from collections import defaultdict +import threading + +from grpc_interceptor import ClientInterceptor +from google.api_core.exceptions import Aborted +from google.cloud.spanner_v1.request_id_header import parse_request_id + + +class MethodCountInterceptor(ClientInterceptor): + """Test interceptor that counts the number of times each method is called.""" + + def __init__(self): + self._counts = defaultdict(int) + + def intercept(self, method, request_or_iterator, call_details): + """Count the number of times a method is called.""" + self._counts[call_details.method] += 1 + return method(request_or_iterator, call_details) + + def reset(self): + self._counts = defaultdict(int) + + +class MethodAbortInterceptor(ClientInterceptor): + """Test interceptor that raises an Aborted error for a specific method.""" + + def __init__(self): + self._method_to_abort = None + self._count = 0 + self._max_raise_count = 1 + self._connection = None + + def intercept(self, method, request_or_iterator, call_details): + if ( + self._count < self._max_raise_count + and call_details.method == self._method_to_abort + ): + self._count += 1 + if self._connection is not None: + self._connection._transaction.rollback() + raise Aborted("Thrown from ClientInterceptor for testing") + return method(request_or_iterator, call_details) + + def set_method_to_abort(self, method_to_abort, connection=None, max_raise_count=1): + self._method_to_abort = method_to_abort + self._count = 0 + self._max_raise_count = max_raise_count + self._connection = connection + + def reset(self): + """Reset the interceptor to the original state.""" + self._method_to_abort = None + self._count = 0 + self._max_raise_count = 1 + self._connection = None + + +X_GOOG_REQUEST_ID = "x-goog-spanner-request-id" + + +class XGoogRequestIDHeaderInterceptor(ClientInterceptor): + def __init__(self): + self._unary_req_segments = [] + self._stream_req_segments = [] + self.__lock = threading.Lock() + + def intercept(self, method, request_or_iterator, call_details): + metadata = call_details.metadata + x_goog_request_id = None + for key, value in metadata: + if key == X_GOOG_REQUEST_ID: + x_goog_request_id = value + break + + if not x_goog_request_id: + raise Exception( + f"Missing {X_GOOG_REQUEST_ID} header in {call_details.method}" + ) + + response_or_iterator = method(request_or_iterator, call_details) + streaming = getattr(response_or_iterator, "__iter__", None) is not None + + with self.__lock: + if streaming: + self._stream_req_segments.append( + (call_details.method, parse_request_id(x_goog_request_id)) + ) + else: + self._unary_req_segments.append( + (call_details.method, parse_request_id(x_goog_request_id)) + ) + + return response_or_iterator + + @property + def unary_request_ids(self): + return self._unary_req_segments + + @property + def stream_request_ids(self): + return self._stream_req_segments + + def reset(self): + 
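"""Clear the recorded unary and streaming request-id segments.""" +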
self._stream_req_segments.clear() + self._unary_req_segments.clear() diff --git a/google/cloud/spanner_v1/testing/mock_database_admin.py b/google/cloud/spanner_v1/testing/mock_database_admin.py new file mode 100644 index 0000000000..a9b4eb6392 --- /dev/null +++ b/google/cloud/spanner_v1/testing/mock_database_admin.py @@ -0,0 +1,38 @@ +# Copyright 2024 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from google.longrunning import operations_pb2 as operations_pb2 +from google.protobuf import empty_pb2 +import google.cloud.spanner_v1.testing.spanner_database_admin_pb2_grpc as database_admin_grpc + + +# An in-memory mock DatabaseAdmin server that can be used for testing. +class DatabaseAdminServicer(database_admin_grpc.DatabaseAdminServicer): + def __init__(self): + self._requests = [] + + @property + def requests(self): + return self._requests + + def clear_requests(self): + self._requests = [] + + def UpdateDatabaseDdl(self, request, context): + self._requests.append(request) + operation = operations_pb2.Operation() + operation.done = True + operation.name = "projects/test-project/operations/test-operation" + operation.response.Pack(empty_pb2.Empty()) + return operation diff --git a/google/cloud/spanner_v1/testing/mock_spanner.py b/google/cloud/spanner_v1/testing/mock_spanner.py new file mode 100644 index 0000000000..e3c2198d68 --- /dev/null +++ b/google/cloud/spanner_v1/testing/mock_spanner.py @@ -0,0 +1,277 @@ +# Copyright 2024 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
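+ +"""In-memory mock of the Cloud Spanner service for unit tests: serves canned results registered per SQL statement and canned errors registered per RPC method."""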
+import base64 +import inspect +import grpc +from concurrent import futures + +from google.protobuf import empty_pb2 +from grpc_status.rpc_status import _Status + +from google.cloud.spanner_v1 import ( + TransactionOptions, + ResultSetMetadata, +) +from google.cloud.spanner_v1.testing.mock_database_admin import DatabaseAdminServicer +import google.cloud.spanner_v1.testing.spanner_database_admin_pb2_grpc as database_admin_grpc +import google.cloud.spanner_v1.testing.spanner_pb2_grpc as spanner_grpc +import google.cloud.spanner_v1.types.commit_response as commit +import google.cloud.spanner_v1.types.result_set as result_set +import google.cloud.spanner_v1.types.spanner as spanner +import google.cloud.spanner_v1.types.transaction as transaction + + +class MockSpanner: + """Registry of canned results and errors served by the mock Spanner server.""" + + def __init__(self): + self.results = {} + self.execute_streaming_sql_results = {} + self.errors = {} + + def add_result(self, sql: str, result: result_set.ResultSet): + self.results[sql.lower().strip()] = result + + def add_execute_streaming_sql_results( + self, sql: str, partial_result_sets: list[result_set.PartialResultSet] + ): + self.execute_streaming_sql_results[sql.lower().strip()] = partial_result_sets + + def get_result(self, sql: str) -> result_set.ResultSet: + result = self.results.get(sql.lower().strip()) + if result is None: + raise ValueError(f"No result found for {sql}") + return result + + def add_error(self, method: str, error: _Status): + self.errors[method] = error + + def pop_error(self, context): + # The caller's function name identifies the RPC method being served. + name = inspect.currentframe().f_back.f_code.co_name + error: _Status | None = self.errors.pop(name, None) + if error: + context.abort_with_status(error) + + def get_execute_streaming_sql_results( + self, sql: str, started_transaction: transaction.Transaction + ) -> list[result_set.PartialResultSet]: + if self.execute_streaming_sql_results.get(sql.lower().strip()): + partials = self.execute_streaming_sql_results[sql.lower().strip()] + else: + partials = self.get_result_as_partial_result_sets(sql) + if started_transaction: + partials[0].metadata.transaction = started_transaction + return partials + + def get_result_as_partial_result_sets( + self, sql: str + ) -> list[result_set.PartialResultSet]: + result: result_set.ResultSet = self.get_result(sql) + partials = [] + first = True + if len(result.rows) == 0: + partial = result_set.PartialResultSet() + partial.metadata = ResultSetMetadata(result.metadata) + partials.append(partial) + else: + for row in result.rows: + partial = result_set.PartialResultSet() + if first: + partial.metadata = ResultSetMetadata(result.metadata) + first = False + partial.values.extend(row) + partials.append(partial) + partials[-1].stats = result.stats + return partials + + +# An in-memory mock Spanner server that can be used for testing. 
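+# A typical test might drive it roughly like this (a sketch; the canned +# ResultSet payload and the client wiring are test-specific): +# +# server, servicer, _, port = start_mock_server() +# servicer.mock_spanner.add_result("select 1", result_set.ResultSet()) +# # ... point a client at "localhost:" + str(port) and run the query ... +# server.stop(grace=None)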
+class SpannerServicer(spanner_grpc.SpannerServicer): + def __init__(self): + self._requests = [] + self.session_counter = 0 + self.sessions = {} + self.transaction_counter = 0 + self.transactions = {} + self._mock_spanner = MockSpanner() + + @property + def mock_spanner(self): + return self._mock_spanner + + @property + def requests(self): + return self._requests + + def clear_requests(self): + self._requests = [] + + def CreateSession(self, request, context): + self._requests.append(request) + return self.__create_session(request.database, request.session) + + def BatchCreateSessions(self, request, context): + self._requests.append(request) + self.mock_spanner.pop_error(context) + sessions = [] + for i in range(request.session_count): + sessions.append( + self.__create_session(request.database, request.session_template) + ) + return spanner.BatchCreateSessionsResponse(dict(session=sessions)) + + def __create_session(self, database: str, session_template: spanner.Session): + self.session_counter += 1 + session = spanner.Session() + session.name = database + "/sessions/" + str(self.session_counter) + session.multiplexed = session_template.multiplexed + session.labels.MergeFrom(session_template.labels) + session.creator_role = session_template.creator_role + self.sessions[session.name] = session + return session + + def GetSession(self, request, context): + self._requests.append(request) + return spanner.Session() + + def ListSessions(self, request, context): + self._requests.append(request) + return [spanner.Session()] + + def DeleteSession(self, request, context): + self._requests.append(request) + return empty_pb2.Empty() + + def ExecuteSql(self, request, context): + self._requests.append(request) + self.mock_spanner.pop_error(context) + started_transaction = self.__maybe_create_transaction(request) + result: result_set.ResultSet = self.mock_spanner.get_result(request.sql) + if started_transaction: + result.metadata = ResultSetMetadata(result.metadata) + result.metadata.transaction = started_transaction + return result + + def ExecuteStreamingSql(self, request, context): + self._requests.append(request) + self.mock_spanner.pop_error(context) + started_transaction = self.__maybe_create_transaction(request) + partials = self.mock_spanner.get_execute_streaming_sql_results( + request.sql, started_transaction + ) + for result in partials: + yield result + + def ExecuteBatchDml(self, request, context): + self._requests.append(request) + self.mock_spanner.pop_error(context) + response = spanner.ExecuteBatchDmlResponse() + started_transaction = self.__maybe_create_transaction(request) + first = True + for statement in request.statements: + result = self.mock_spanner.get_result(statement.sql) + if first and started_transaction is not None: + # Only the first result set carries the transaction begun by this request. + first = False + result = result_set.ResultSet( + self.mock_spanner.get_result(statement.sql) + ) + result.metadata = result_set.ResultSetMetadata(result.metadata) + result.metadata.transaction = started_transaction + response.result_sets.append(result) + return response + + def Read(self, request, context): + self._requests.append(request) + return result_set.ResultSet() + + def StreamingRead(self, request, context): + self._requests.append(request) + for result in [result_set.PartialResultSet(), result_set.PartialResultSet()]: + yield result + + def BeginTransaction(self, request, context): + self._requests.append(request) + return self.__create_transaction(request.session, request.options) + + def __maybe_create_transaction(self, request): + started_transaction = None + if 
request.transaction.begin != TransactionOptions(): + started_transaction = self.__create_transaction( + request.session, request.transaction.begin + ) + return started_transaction + + def __create_transaction( + self, session: str, options: transaction.TransactionOptions + ) -> transaction.Transaction: + session_name = session + session = self.sessions.get(session_name) + if session is None: + raise ValueError(f"Session not found: {session_name}") + self.transaction_counter += 1 + id_bytes = bytes( + f"{session.name}/transactions/{self.transaction_counter}", "UTF-8" + ) + transaction_id = base64.urlsafe_b64encode(id_bytes) + self.transactions[transaction_id] = options + return transaction.Transaction(dict(id=transaction_id)) + + def Commit(self, request, context): + self._requests.append(request) + self.mock_spanner.pop_error(context) + if request.transaction_id != b"": + tx = self.transactions.get(request.transaction_id) + if tx is None: + raise ValueError(f"Transaction not found: {request.transaction_id}") + tx_id = request.transaction_id + elif request.single_use_transaction != TransactionOptions(): + tx = self.__create_transaction( + request.session, request.single_use_transaction + ) + tx_id = tx.id + else: + raise ValueError("Unsupported transaction type") + del self.transactions[tx_id] + return commit.CommitResponse() + + def Rollback(self, request, context): + self._requests.append(request) + return empty_pb2.Empty() + + def PartitionQuery(self, request, context): + self._requests.append(request) + return spanner.PartitionResponse() + + def PartitionRead(self, request, context): + self._requests.append(request) + return spanner.PartitionResponse() + + def BatchWrite(self, request, context): + self._requests.append(request) + for result in [spanner.BatchWriteResponse(), spanner.BatchWriteResponse()]: + yield result + + +def start_mock_server() -> tuple[grpc.Server, SpannerServicer, DatabaseAdminServicer, int]: + # Create a gRPC server. + spanner_server = grpc.server(futures.ThreadPoolExecutor(max_workers=10)) + + # Add the Spanner services to the gRPC server. + spanner_servicer = SpannerServicer() + spanner_grpc.add_SpannerServicer_to_server(spanner_servicer, spanner_server) + database_admin_servicer = DatabaseAdminServicer() + database_admin_grpc.add_DatabaseAdminServicer_to_server( + database_admin_servicer, spanner_server + ) + + # Start the server on a random port. + port = spanner_server.add_insecure_port("[::]:0") + spanner_server.start() + return spanner_server, spanner_servicer, database_admin_servicer, port diff --git a/google/cloud/spanner_v1/testing/spanner_database_admin_pb2_grpc.py b/google/cloud/spanner_v1/testing/spanner_database_admin_pb2_grpc.py new file mode 100644 index 0000000000..fdc26b30ad --- /dev/null +++ b/google/cloud/spanner_v1/testing/spanner_database_admin_pb2_grpc.py @@ -0,0 +1,1267 @@ +# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! + + +# Generated with the following commands: +# +# pip install grpcio-tools +# git clone git@github.com:googleapis/googleapis.git +# cd googleapis +# python -m grpc_tools.protoc \ +# -I . \ +# --python_out=. --pyi_out=. --grpc_python_out=. 
\ +# ./google/spanner/admin/database/v1/*.proto + +"""Client and server classes corresponding to protobuf-defined services.""" + +import grpc +from google.iam.v1 import iam_policy_pb2 as google_dot_iam_dot_v1_dot_iam__policy__pb2 +from google.iam.v1 import policy_pb2 as google_dot_iam_dot_v1_dot_policy__pb2 +from google.longrunning import ( + operations_pb2 as google_dot_longrunning_dot_operations__pb2, +) +from google.protobuf import empty_pb2 as google_dot_protobuf_dot_empty__pb2 +from google.cloud.spanner_admin_database_v1.types import ( + backup as google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2, +) +from google.cloud.spanner_admin_database_v1.types import ( + backup_schedule as google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__schedule__pb2, +) +from google.cloud.spanner_admin_database_v1.types import ( + spanner_database_admin as google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2, +) + +GRPC_GENERATED_VERSION = "1.67.0" +GRPC_VERSION = grpc.__version__ +_version_not_supported = False + +try: + from grpc._utilities import first_version_is_lower + + _version_not_supported = first_version_is_lower( + GRPC_VERSION, GRPC_GENERATED_VERSION + ) +except ImportError: + _version_not_supported = True + +if _version_not_supported: + raise RuntimeError( + f"The grpc package installed is at version {GRPC_VERSION}," + + " but the generated code in google/spanner/admin/database/v1/spanner_database_admin_pb2_grpc.py depends on" + + f" grpcio>={GRPC_GENERATED_VERSION}." + + f" Please upgrade your grpc module to grpcio>={GRPC_GENERATED_VERSION}" + + f" or downgrade your generated code using grpcio-tools<={GRPC_VERSION}." + ) + + +class DatabaseAdminServicer(object): + """Cloud Spanner Database Admin API + + The Cloud Spanner Database Admin API can be used to: + * create, drop, and list databases + * update the schema of pre-existing databases + * create, delete, copy and list backups for a database + * restore a database from an existing backup + """ + + def ListDatabases(self, request, context): + """Lists Cloud Spanner databases.""" + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def CreateDatabase(self, request, context): + """Creates a new Cloud Spanner database and starts to prepare it for serving. + The returned [long-running operation][google.longrunning.Operation] will + have a name of the format `<database_name>/operations/<operation_id>` and + can be used to track preparation of the database. The + [metadata][google.longrunning.Operation.metadata] field type is + [CreateDatabaseMetadata][google.spanner.admin.database.v1.CreateDatabaseMetadata]. + The [response][google.longrunning.Operation.response] field type is + [Database][google.spanner.admin.database.v1.Database], if successful. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def GetDatabase(self, request, context): + """Gets the state of a Cloud Spanner database.""" + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def UpdateDatabase(self, request, context): + """Updates a Cloud Spanner database. The returned + [long-running operation][google.longrunning.Operation] can be used to track + the progress of updating the database.
If the named database does not + exist, returns `NOT_FOUND`. + + While the operation is pending: + + * The database's + [reconciling][google.spanner.admin.database.v1.Database.reconciling] + field is set to true. + * Cancelling the operation is best-effort. If the cancellation succeeds, + the operation metadata's + [cancel_time][google.spanner.admin.database.v1.UpdateDatabaseMetadata.cancel_time] + is set, the updates are reverted, and the operation terminates with a + `CANCELLED` status. + * New UpdateDatabase requests will return a `FAILED_PRECONDITION` error + until the pending operation is done (returns successfully or with + error). + * Reading the database via the API continues to give the pre-request + values. + + Upon completion of the returned operation: + + * The new values are in effect and readable via the API. + * The database's + [reconciling][google.spanner.admin.database.v1.Database.reconciling] + field becomes false. + + The returned [long-running operation][google.longrunning.Operation] will + have a name of the format + `projects/<project>/instances/<instance>/databases/<database>/operations/<operation_id>` + and can be used to track the database modification. The + [metadata][google.longrunning.Operation.metadata] field type is + [UpdateDatabaseMetadata][google.spanner.admin.database.v1.UpdateDatabaseMetadata]. + The [response][google.longrunning.Operation.response] field type is + [Database][google.spanner.admin.database.v1.Database], if successful. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def UpdateDatabaseDdl(self, request, context): + """Updates the schema of a Cloud Spanner database by + creating/altering/dropping tables, columns, indexes, etc. The returned + [long-running operation][google.longrunning.Operation] will have a name of + the format `<database_name>/operations/<operation_id>` and can be used to + track execution of the schema change(s). The + [metadata][google.longrunning.Operation.metadata] field type is + [UpdateDatabaseDdlMetadata][google.spanner.admin.database.v1.UpdateDatabaseDdlMetadata]. + The operation has no response. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def DropDatabase(self, request, context): + """Drops (aka deletes) a Cloud Spanner database. + Completed backups for the database will be retained according to their + `expire_time`. + Note: Cloud Spanner might continue to accept requests for a few seconds + after the database has been deleted. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def GetDatabaseDdl(self, request, context): + """Returns the schema of a Cloud Spanner database as a list of formatted + DDL statements. This method does not show pending schema updates, those may + be queried using the [Operations][google.longrunning.Operations] API. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def SetIamPolicy(self, request, context): + """Sets the access control policy on a database or backup resource. + Replaces any existing policy. + + Authorization requires `spanner.databases.setIamPolicy` + permission on [resource][google.iam.v1.SetIamPolicyRequest.resource].
+ For backups, authorization requires `spanner.backups.setIamPolicy` + permission on [resource][google.iam.v1.SetIamPolicyRequest.resource]. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def GetIamPolicy(self, request, context): + """Gets the access control policy for a database or backup resource. + Returns an empty policy if a database or backup exists but does not have a + policy set. + + Authorization requires `spanner.databases.getIamPolicy` permission on + [resource][google.iam.v1.GetIamPolicyRequest.resource]. + For backups, authorization requires `spanner.backups.getIamPolicy` + permission on [resource][google.iam.v1.GetIamPolicyRequest.resource]. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def TestIamPermissions(self, request, context): + """Returns permissions that the caller has on the specified database or backup + resource. + + Attempting this RPC on a non-existent Cloud Spanner database will + result in a NOT_FOUND error if the user has + `spanner.databases.list` permission on the containing Cloud + Spanner instance. Otherwise returns an empty set of permissions. + Calling this method on a backup that does not exist will + result in a NOT_FOUND error if the user has + `spanner.backups.list` permission on the containing instance. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def CreateBackup(self, request, context): + """Starts creating a new Cloud Spanner Backup. + The returned backup [long-running operation][google.longrunning.Operation] + will have a name of the format + `projects/<project>/instances/<instance>/backups/<backup>/operations/<operation_id>` + and can be used to track creation of the backup. The + [metadata][google.longrunning.Operation.metadata] field type is + [CreateBackupMetadata][google.spanner.admin.database.v1.CreateBackupMetadata]. + The [response][google.longrunning.Operation.response] field type is + [Backup][google.spanner.admin.database.v1.Backup], if successful. + Cancelling the returned operation will stop the creation and delete the + backup. There can be only one pending backup creation per database. Backup + creation of different databases can run concurrently. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def CopyBackup(self, request, context): + """Starts copying a Cloud Spanner Backup. + The returned backup [long-running operation][google.longrunning.Operation] + will have a name of the format + `projects/<project>/instances/<instance>/backups/<backup>/operations/<operation_id>` + and can be used to track copying of the backup. The operation is associated + with the destination backup. + The [metadata][google.longrunning.Operation.metadata] field type is + [CopyBackupMetadata][google.spanner.admin.database.v1.CopyBackupMetadata]. + The [response][google.longrunning.Operation.response] field type is + [Backup][google.spanner.admin.database.v1.Backup], if successful. + Cancelling the returned operation will stop the copying and delete the + destination backup. Concurrent CopyBackup requests can run on the same + source backup. 
+ """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def GetBackup(self, request, context): + """Gets metadata on a pending or completed + [Backup][google.spanner.admin.database.v1.Backup]. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def UpdateBackup(self, request, context): + """Updates a pending or completed + [Backup][google.spanner.admin.database.v1.Backup]. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def DeleteBackup(self, request, context): + """Deletes a pending or completed + [Backup][google.spanner.admin.database.v1.Backup]. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def ListBackups(self, request, context): + """Lists completed and pending backups. + Backups returned are ordered by `create_time` in descending order, + starting from the most recent `create_time`. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def RestoreDatabase(self, request, context): + """Create a new database by restoring from a completed backup. The new + database must be in the same project and in an instance with the same + instance configuration as the instance containing + the backup. The returned database [long-running + operation][google.longrunning.Operation] has a name of the format + `projects//instances//databases//operations/`, + and can be used to track the progress of the operation, and to cancel it. + The [metadata][google.longrunning.Operation.metadata] field type is + [RestoreDatabaseMetadata][google.spanner.admin.database.v1.RestoreDatabaseMetadata]. + The [response][google.longrunning.Operation.response] type + is [Database][google.spanner.admin.database.v1.Database], if + successful. Cancelling the returned operation will stop the restore and + delete the database. + There can be only one database being restored into an instance at a time. + Once the restore operation completes, a new restore operation can be + initiated, without waiting for the optimize operation associated with the + first restore to complete. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def ListDatabaseOperations(self, request, context): + """Lists database [longrunning-operations][google.longrunning.Operation]. + A database operation has a name of the form + `projects//instances//databases//operations/`. + The long-running operation + [metadata][google.longrunning.Operation.metadata] field type + `metadata.type_url` describes the type of the metadata. Operations returned + include those that have completed/failed/canceled within the last 7 days, + and pending operations. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def ListBackupOperations(self, request, context): + """Lists the backup [long-running operations][google.longrunning.Operation] in + the given instance. 
A backup operation has a name of the form + `projects/<project>/instances/<instance>/backups/<backup>/operations/<operation_id>`. + The long-running operation + [metadata][google.longrunning.Operation.metadata] field type + `metadata.type_url` describes the type of the metadata. Operations returned + include those that have completed/failed/canceled within the last 7 days, + and pending operations. Operations returned are ordered by + `operation.metadata.value.progress.start_time` in descending order starting + from the most recently started operation. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def ListDatabaseRoles(self, request, context): + """Lists Cloud Spanner database roles.""" + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def CreateBackupSchedule(self, request, context): + """Creates a new backup schedule.""" + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def GetBackupSchedule(self, request, context): + """Gets backup schedule for the input schedule name.""" + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def UpdateBackupSchedule(self, request, context): + """Updates a backup schedule.""" + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def DeleteBackupSchedule(self, request, context): + """Deletes a backup schedule.""" + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def ListBackupSchedules(self, request, context): + """Lists all the backup schedules for the database.""" + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + +def add_DatabaseAdminServicer_to_server(servicer, server): + rpc_method_handlers = { + "ListDatabases": grpc.unary_unary_rpc_method_handler( + servicer.ListDatabases, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.ListDatabasesRequest.deserialize, + response_serializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.ListDatabasesResponse.serialize, + ), + "CreateDatabase": grpc.unary_unary_rpc_method_handler( + servicer.CreateDatabase, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.CreateDatabaseRequest.deserialize, + response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString, + ), + "GetDatabase": grpc.unary_unary_rpc_method_handler( + servicer.GetDatabase, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.GetDatabaseRequest.deserialize, + response_serializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.Database.serialize, + ), + "UpdateDatabase": grpc.unary_unary_rpc_method_handler( + servicer.UpdateDatabase, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.UpdateDatabaseRequest.deserialize, + 
response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString, + ), + "UpdateDatabaseDdl": grpc.unary_unary_rpc_method_handler( + servicer.UpdateDatabaseDdl, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.UpdateDatabaseDdlRequest.deserialize, + response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString, + ), + "DropDatabase": grpc.unary_unary_rpc_method_handler( + servicer.DropDatabase, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.DropDatabaseRequest.deserialize, + response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString, + ), + "GetDatabaseDdl": grpc.unary_unary_rpc_method_handler( + servicer.GetDatabaseDdl, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.GetDatabaseDdlRequest.deserialize, + response_serializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.GetDatabaseDdlResponse.serialize, + ), + "SetIamPolicy": grpc.unary_unary_rpc_method_handler( + servicer.SetIamPolicy, + request_deserializer=google_dot_iam_dot_v1_dot_iam__policy__pb2.SetIamPolicyRequest.FromString, + response_serializer=google_dot_iam_dot_v1_dot_policy__pb2.Policy.SerializeToString, + ), + "GetIamPolicy": grpc.unary_unary_rpc_method_handler( + servicer.GetIamPolicy, + request_deserializer=google_dot_iam_dot_v1_dot_iam__policy__pb2.GetIamPolicyRequest.FromString, + response_serializer=google_dot_iam_dot_v1_dot_policy__pb2.Policy.SerializeToString, + ), + "TestIamPermissions": grpc.unary_unary_rpc_method_handler( + servicer.TestIamPermissions, + request_deserializer=google_dot_iam_dot_v1_dot_iam__policy__pb2.TestIamPermissionsRequest.FromString, + response_serializer=google_dot_iam_dot_v1_dot_iam__policy__pb2.TestIamPermissionsResponse.SerializeToString, + ), + "CreateBackup": grpc.unary_unary_rpc_method_handler( + servicer.CreateBackup, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.CreateBackupRequest.deserialize, + response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString, + ), + "CopyBackup": grpc.unary_unary_rpc_method_handler( + servicer.CopyBackup, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.CopyBackupRequest.deserialize, + response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString, + ), + "GetBackup": grpc.unary_unary_rpc_method_handler( + servicer.GetBackup, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.GetBackupRequest.deserialize, + response_serializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.Backup.serialize, + ), + "UpdateBackup": grpc.unary_unary_rpc_method_handler( + servicer.UpdateBackup, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.UpdateBackupRequest.deserialize, + response_serializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.Backup.serialize, + ), + "DeleteBackup": grpc.unary_unary_rpc_method_handler( + servicer.DeleteBackup, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.DeleteBackupRequest.deserialize, + response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString, + ), + "ListBackups": grpc.unary_unary_rpc_method_handler( + servicer.ListBackups, + 
request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.ListBackupsRequest.deserialize, + response_serializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.ListBackupsResponse.serialize, + ), + "RestoreDatabase": grpc.unary_unary_rpc_method_handler( + servicer.RestoreDatabase, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.RestoreDatabaseRequest.deserialize, + response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString, + ), + "ListDatabaseOperations": grpc.unary_unary_rpc_method_handler( + servicer.ListDatabaseOperations, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.ListDatabaseOperationsRequest.deserialize, + response_serializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.ListDatabaseOperationsResponse.serialize, + ), + "ListBackupOperations": grpc.unary_unary_rpc_method_handler( + servicer.ListBackupOperations, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.ListBackupOperationsRequest.deserialize, + response_serializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.ListBackupOperationsResponse.serialize, + ), + "ListDatabaseRoles": grpc.unary_unary_rpc_method_handler( + servicer.ListDatabaseRoles, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.ListDatabaseRolesRequest.deserialize, + response_serializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.ListDatabaseRolesResponse.serialize, + ), + "CreateBackupSchedule": grpc.unary_unary_rpc_method_handler( + servicer.CreateBackupSchedule, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__schedule__pb2.CreateBackupScheduleRequest.deserialize, + response_serializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__schedule__pb2.BackupSchedule.serialize, + ), + "GetBackupSchedule": grpc.unary_unary_rpc_method_handler( + servicer.GetBackupSchedule, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__schedule__pb2.GetBackupScheduleRequest.deserialize, + response_serializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__schedule__pb2.BackupSchedule.serialize, + ), + "UpdateBackupSchedule": grpc.unary_unary_rpc_method_handler( + servicer.UpdateBackupSchedule, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__schedule__pb2.UpdateBackupScheduleRequest.deserialize, + response_serializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__schedule__pb2.BackupSchedule.serialize, + ), + "DeleteBackupSchedule": grpc.unary_unary_rpc_method_handler( + servicer.DeleteBackupSchedule, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__schedule__pb2.DeleteBackupScheduleRequest.deserialize, + response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString, + ), + "ListBackupSchedules": grpc.unary_unary_rpc_method_handler( + servicer.ListBackupSchedules, + request_deserializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__schedule__pb2.ListBackupSchedulesRequest.deserialize, + response_serializer=google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__schedule__pb2.ListBackupSchedulesResponse.serialize, + ), + } + generic_handler = grpc.method_handlers_generic_handler( + 
"google.spanner.admin.database.v1.DatabaseAdmin", rpc_method_handlers + ) + server.add_generic_rpc_handlers((generic_handler,)) + server.add_registered_method_handlers( + "google.spanner.admin.database.v1.DatabaseAdmin", rpc_method_handlers + ) + + +# This class is part of an EXPERIMENTAL API. +class DatabaseAdmin(object): + """Cloud Spanner Database Admin API + + The Cloud Spanner Database Admin API can be used to: + * create, drop, and list databases + * update the schema of pre-existing databases + * create, delete, copy and list backups for a database + * restore a database from an existing backup + """ + + @staticmethod + def ListDatabases( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/ListDatabases", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.ListDatabasesRequest.SerializeToString, + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.ListDatabasesResponse.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def CreateDatabase( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/CreateDatabase", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.CreateDatabaseRequest.SerializeToString, + google_dot_longrunning_dot_operations__pb2.Operation.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def GetDatabase( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/GetDatabase", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.GetDatabaseRequest.SerializeToString, + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.Database.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def UpdateDatabase( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/UpdateDatabase", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.UpdateDatabaseRequest.SerializeToString, + google_dot_longrunning_dot_operations__pb2.Operation.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def UpdateDatabaseDdl( + 
request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/UpdateDatabaseDdl", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.UpdateDatabaseDdlRequest.SerializeToString, + google_dot_longrunning_dot_operations__pb2.Operation.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def DropDatabase( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/DropDatabase", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.DropDatabaseRequest.SerializeToString, + google_dot_protobuf_dot_empty__pb2.Empty.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def GetDatabaseDdl( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/GetDatabaseDdl", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.GetDatabaseDdlRequest.SerializeToString, + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.GetDatabaseDdlResponse.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def SetIamPolicy( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/SetIamPolicy", + google_dot_iam_dot_v1_dot_iam__policy__pb2.SetIamPolicyRequest.SerializeToString, + google_dot_iam_dot_v1_dot_policy__pb2.Policy.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def GetIamPolicy( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/GetIamPolicy", + google_dot_iam_dot_v1_dot_iam__policy__pb2.GetIamPolicyRequest.SerializeToString, + google_dot_iam_dot_v1_dot_policy__pb2.Policy.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def TestIamPermissions( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + 
compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/TestIamPermissions", + google_dot_iam_dot_v1_dot_iam__policy__pb2.TestIamPermissionsRequest.SerializeToString, + google_dot_iam_dot_v1_dot_iam__policy__pb2.TestIamPermissionsResponse.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def CreateBackup( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/CreateBackup", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.CreateBackupRequest.SerializeToString, + google_dot_longrunning_dot_operations__pb2.Operation.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def CopyBackup( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/CopyBackup", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.CopyBackupRequest.SerializeToString, + google_dot_longrunning_dot_operations__pb2.Operation.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def GetBackup( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/GetBackup", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.GetBackupRequest.SerializeToString, + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.Backup.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def UpdateBackup( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/UpdateBackup", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.UpdateBackupRequest.SerializeToString, + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.Backup.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def DeleteBackup( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + 
target, + "/google.spanner.admin.database.v1.DatabaseAdmin/DeleteBackup", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.DeleteBackupRequest.SerializeToString, + google_dot_protobuf_dot_empty__pb2.Empty.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def ListBackups( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/ListBackups", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.ListBackupsRequest.SerializeToString, + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.ListBackupsResponse.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def RestoreDatabase( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/RestoreDatabase", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.RestoreDatabaseRequest.SerializeToString, + google_dot_longrunning_dot_operations__pb2.Operation.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def ListDatabaseOperations( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/ListDatabaseOperations", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.ListDatabaseOperationsRequest.SerializeToString, + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.ListDatabaseOperationsResponse.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def ListBackupOperations( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/ListBackupOperations", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.ListBackupOperationsRequest.SerializeToString, + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__pb2.ListBackupOperationsResponse.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def ListDatabaseRoles( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, 
+ ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/ListDatabaseRoles", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.ListDatabaseRolesRequest.SerializeToString, + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_spanner__database__admin__pb2.ListDatabaseRolesResponse.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def CreateBackupSchedule( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/CreateBackupSchedule", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__schedule__pb2.CreateBackupScheduleRequest.SerializeToString, + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__schedule__pb2.BackupSchedule.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def GetBackupSchedule( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/GetBackupSchedule", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__schedule__pb2.GetBackupScheduleRequest.SerializeToString, + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__schedule__pb2.BackupSchedule.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def UpdateBackupSchedule( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/UpdateBackupSchedule", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__schedule__pb2.UpdateBackupScheduleRequest.SerializeToString, + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__schedule__pb2.BackupSchedule.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def DeleteBackupSchedule( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/DeleteBackupSchedule", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__schedule__pb2.DeleteBackupScheduleRequest.SerializeToString, + google_dot_protobuf_dot_empty__pb2.Empty.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def ListBackupSchedules( + request, + target, + 
options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.admin.database.v1.DatabaseAdmin/ListBackupSchedules", + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__schedule__pb2.ListBackupSchedulesRequest.SerializeToString, + google_dot_spanner_dot_admin_dot_database_dot_v1_dot_backup__schedule__pb2.ListBackupSchedulesResponse.FromString, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) diff --git a/google/cloud/spanner_v1/testing/spanner_pb2_grpc.py b/google/cloud/spanner_v1/testing/spanner_pb2_grpc.py new file mode 100644 index 0000000000..c4622a6a34 --- /dev/null +++ b/google/cloud/spanner_v1/testing/spanner_pb2_grpc.py @@ -0,0 +1,882 @@ +# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! + +# Generated with the following commands: +# +# pip install grpcio-tools +# git clone git@github.com:googleapis/googleapis.git +# cd googleapis +# python -m grpc_tools.protoc \ +# -I . \ +# --python_out=. --pyi_out=. --grpc_python_out=. \ +# ./google/spanner/v1/*.proto + +"""Client and server classes corresponding to protobuf-defined services.""" + +import grpc +from google.protobuf import empty_pb2 as google_dot_protobuf_dot_empty__pb2 +from google.cloud.spanner_v1.types import ( + commit_response as google_dot_spanner_dot_v1_dot_commit__response__pb2, +) +from google.cloud.spanner_v1.types import ( + result_set as google_dot_spanner_dot_v1_dot_result__set__pb2, +) +from google.cloud.spanner_v1.types import ( + spanner as google_dot_spanner_dot_v1_dot_spanner__pb2, +) +from google.cloud.spanner_v1.types import ( + transaction as google_dot_spanner_dot_v1_dot_transaction__pb2, +) + +GRPC_GENERATED_VERSION = "1.67.0" +GRPC_VERSION = grpc.__version__ +_version_not_supported = False + +try: + from grpc._utilities import first_version_is_lower + + _version_not_supported = first_version_is_lower( + GRPC_VERSION, GRPC_GENERATED_VERSION + ) +except ImportError: + _version_not_supported = True + +if _version_not_supported: + raise RuntimeError( + f"The grpc package installed is at version {GRPC_VERSION}," + + " but the generated code in google/spanner/v1/spanner_pb2_grpc.py depends on" + + f" grpcio>={GRPC_GENERATED_VERSION}." + + f" Please upgrade your grpc module to grpcio>={GRPC_GENERATED_VERSION}" + + f" or downgrade your generated code using grpcio-tools<={GRPC_VERSION}." + ) + + +class SpannerServicer(object): + """Cloud Spanner API + + The Cloud Spanner API can be used to manage sessions and execute + transactions on data stored in Cloud Spanner databases. + """ + + def CreateSession(self, request, context): + """Creates a new session. A session can be used to perform + transactions that read and/or modify data in a Cloud Spanner database. + Sessions are meant to be reused for many consecutive + transactions. + + Sessions can only execute one transaction at a time. To execute + multiple concurrent read-write/write-only transactions, create + multiple sessions. Note that standalone reads and queries use a + transaction internally, and count toward the one transaction + limit. + + Active sessions use additional server resources, so it is a good idea to + delete idle and unneeded sessions. 
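In the Python client, application code rarely needs to create or delete sessions directly: each high-level operation checks a session out of the client's pool and returns it afterwards, so idle-session hygiene is handled by the pool. A minimal sketch, assuming existing resources (the instance and database names here are placeholders):

.. code-block:: python

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    # Borrows a pooled session for the duration of the snapshot and
    # returns it to the pool when the block exits.
    with database.snapshot() as snapshot:
        rows = list(snapshot.execute_sql("SELECT 1"))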
+ Aside from explicit deletes, Cloud Spanner may delete sessions for which no + operations are sent for more than an hour. If a session is deleted, + requests to it return `NOT_FOUND`. + + Idle sessions can be kept alive by sending a trivial SQL query + periodically, e.g., `"SELECT 1"`. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def BatchCreateSessions(self, request, context): + """Creates multiple new sessions. + + This API can be used to initialize a session cache on the clients. + See https://goo.gl/TgSFN2 for best practices on session cache management. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def GetSession(self, request, context): + """Gets a session. Returns `NOT_FOUND` if the session does not exist. + This is mainly useful for determining whether a session is still + alive. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def ListSessions(self, request, context): + """Lists all sessions in a given database.""" + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def DeleteSession(self, request, context): + """Ends a session, releasing server resources associated with it. This will + asynchronously trigger cancellation of any operations that are running with + this session. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def ExecuteSql(self, request, context): + """Executes an SQL statement, returning all results in a single reply. This + method cannot be used to return a result set larger than 10 MiB; + if the query yields more data than that, the query fails with + a `FAILED_PRECONDITION` error. + + Operations inside read-write transactions might return `ABORTED`. If + this occurs, the application should restart the transaction from + the beginning. See [Transaction][google.spanner.v1.Transaction] for more + details. + + Larger result sets can be fetched in streaming fashion by calling + [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql] + instead. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def ExecuteStreamingSql(self, request, context): + """Like [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql], except returns the + result set as a stream. Unlike + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql], there is no limit on + the size of the returned result set. However, no individual row in the + result set can exceed 100 MiB, and no column value can exceed 10 MiB. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def ExecuteBatchDml(self, request, context): + """Executes a batch of SQL DML statements. This method allows many statements + to be run with lower latency than submitting them sequentially with + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. + + Statements are executed in sequential order. A request can succeed even if + a statement fails. 
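In the Python client this RPC backs ``Transaction.batch_update`` (see the ``transaction.py`` changes below). A short sketch, assuming resource names as in the earlier snippet and a hypothetical ``albums`` table:

.. code-block:: python

    from google.cloud import spanner
    from google.cloud.spanner_v1 import param_types

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    def do_batch(transaction):
        status, row_counts = transaction.batch_update(
            [
                "UPDATE albums SET title = 'Remastered' WHERE id = 1",
                (
                    "DELETE FROM albums WHERE id = @id",
                    {"id": 2},
                    {"id": param_types.INT64},
                ),
            ]
        )
        # Statements run in order; a non-OK status means the statement at
        # index len(row_counts) failed and later ones were not executed.

    database.run_in_transaction(do_batch)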
The + [ExecuteBatchDmlResponse.status][google.spanner.v1.ExecuteBatchDmlResponse.status] + field in the response provides information about the statement that failed. + Clients must inspect this field to determine whether an error occurred. + + Execution stops after the first failed statement; the remaining statements + are not executed. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def Read(self, request, context): + """Reads rows from the database using key lookups and scans, as a + simple key/value style alternative to + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. This method cannot be + used to return a result set larger than 10 MiB; if the read matches more + data than that, the read fails with a `FAILED_PRECONDITION` + error. + + Reads inside read-write transactions might return `ABORTED`. If + this occurs, the application should restart the transaction from + the beginning. See [Transaction][google.spanner.v1.Transaction] for more + details. + + Larger result sets can be yielded in streaming fashion by calling + [StreamingRead][google.spanner.v1.Spanner.StreamingRead] instead. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def StreamingRead(self, request, context): + """Like [Read][google.spanner.v1.Spanner.Read], except returns the result set + as a stream. Unlike [Read][google.spanner.v1.Spanner.Read], there is no + limit on the size of the returned result set. However, no individual row in + the result set can exceed 100 MiB, and no column value can exceed + 10 MiB. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def BeginTransaction(self, request, context): + """Begins a new transaction. This step can often be skipped: + [Read][google.spanner.v1.Spanner.Read], + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] and + [Commit][google.spanner.v1.Spanner.Commit] can begin a new transaction as a + side-effect. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def Commit(self, request, context): + """Commits a transaction. The request includes the mutations to be + applied to rows in the database. + + `Commit` might return an `ABORTED` error. This can occur at any time; + commonly, the cause is conflicts with concurrent + transactions. However, it can also happen for a variety of other + reasons. If `Commit` returns `ABORTED`, the caller should re-attempt + the transaction from the beginning, re-using the same session. + + On very rare occasions, `Commit` might return `UNKNOWN`. This can happen, + for example, if the client job experiences a 1+ hour networking failure. + At that point, Cloud Spanner has lost track of the transaction outcome and + we recommend that you perform another read from the database to see the + state of things as they are now. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def Rollback(self, request, context): + """Rolls back a transaction, releasing any locks it holds. 
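The retry-from-the-beginning behavior recommended above for ``ABORTED`` commits, and the roll-back-on-failure behavior described here, are what the Python client's ``run_in_transaction`` helper implements for you. A minimal sketch (the table name is hypothetical):

.. code-block:: python

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    def unit_of_work(transaction):
        # May run more than once if the commit is aborted, so it should
        # have no side effects outside the transaction itself.
        transaction.execute_update(
            "UPDATE accounts SET balance = balance - 10 WHERE id = 1"
        )

    # Commits on success; rolls back on error and retries on ABORTED.
    database.run_in_transaction(unit_of_work)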
It is a good + idea to call this for any transaction that includes one or more + [Read][google.spanner.v1.Spanner.Read] or + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] requests and ultimately + decides not to commit. + + `Rollback` returns `OK` if it successfully aborts the transaction, the + transaction was already aborted, or the transaction is not + found. `Rollback` never returns `ABORTED`. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def PartitionQuery(self, request, context): + """Creates a set of partition tokens that can be used to execute a query + operation in parallel. Each of the returned partition tokens can be used + by [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql] to + specify a subset of the query result to read. The same session and + read-only transaction must be used by the PartitionQueryRequest used to + create the partition tokens and the ExecuteSqlRequests that use the + partition tokens. + + Partition tokens become invalid when the session used to create them + is deleted, is idle for too long, begins a new transaction, or becomes too + old. When any of these happen, it is not possible to resume the query, and + the whole operation must be restarted from the beginning. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def PartitionRead(self, request, context): + """Creates a set of partition tokens that can be used to execute a read + operation in parallel. Each of the returned partition tokens can be used + by [StreamingRead][google.spanner.v1.Spanner.StreamingRead] to specify a + subset of the read result to read. The same session and read-only + transaction must be used by the PartitionReadRequest used to create the + partition tokens and the ReadRequests that use the partition tokens. There + are no ordering guarantees on rows returned among the returned partition + tokens, or even within each individual StreamingRead call issued with a + partition_token. + + Partition tokens become invalid when the session used to create them + is deleted, is idle for too long, begins a new transaction, or becomes too + old. When any of these happen, it is not possible to resume the read, and + the whole operation must be restarted from the beginning. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + def BatchWrite(self, request, context): + """Batches the supplied mutation groups in a collection of efficient + transactions. All mutations in a group are committed atomically. However, + mutations across groups can be committed non-atomically in an unspecified + order and thus, they must be independent of each other. Partial failure is + possible, i.e., some groups may have been committed successfully, while + some may have failed. The results of individual batches are streamed into + the response as the batches are applied. + + BatchWrite requests are not replay protected, meaning that each mutation + group may be applied more than once. Replays of non-idempotent mutations + may have undesirable effects. 
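One way to keep mutation groups idempotent, as recommended here, is to favor ``insert_or_update`` over plain ``insert`` so that a replay simply rewrites the same row. A sketch using the ordinary batch API (table and column names are placeholders):

.. code-block:: python

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    with database.batch() as batch:
        # Replay-safe: re-applying this mutation leaves the row unchanged.
        batch.insert_or_update(
            table="players",
            columns=("player_id", "name"),
            values=[(1, "Ada")],
        )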
For example, replays of an insert mutation + may produce an already exists error or if you use generated or commit + timestamp-based keys, it may result in additional rows being added to the + mutation's table. We recommend structuring your mutation groups to be + idempotent to avoid this issue. + """ + context.set_code(grpc.StatusCode.UNIMPLEMENTED) + context.set_details("Method not implemented!") + raise NotImplementedError("Method not implemented!") + + +def add_SpannerServicer_to_server(servicer, server): + rpc_method_handlers = { + "CreateSession": grpc.unary_unary_rpc_method_handler( + servicer.CreateSession, + request_deserializer=google_dot_spanner_dot_v1_dot_spanner__pb2.CreateSessionRequest.deserialize, + response_serializer=google_dot_spanner_dot_v1_dot_spanner__pb2.Session.serialize, + ), + "BatchCreateSessions": grpc.unary_unary_rpc_method_handler( + servicer.BatchCreateSessions, + request_deserializer=google_dot_spanner_dot_v1_dot_spanner__pb2.BatchCreateSessionsRequest.deserialize, + response_serializer=google_dot_spanner_dot_v1_dot_spanner__pb2.BatchCreateSessionsResponse.serialize, + ), + "GetSession": grpc.unary_unary_rpc_method_handler( + servicer.GetSession, + request_deserializer=google_dot_spanner_dot_v1_dot_spanner__pb2.GetSessionRequest.deserialize, + response_serializer=google_dot_spanner_dot_v1_dot_spanner__pb2.Session.serialize, + ), + "ListSessions": grpc.unary_unary_rpc_method_handler( + servicer.ListSessions, + request_deserializer=google_dot_spanner_dot_v1_dot_spanner__pb2.ListSessionsRequest.deserialize, + response_serializer=google_dot_spanner_dot_v1_dot_spanner__pb2.ListSessionsResponse.serialize, + ), + "DeleteSession": grpc.unary_unary_rpc_method_handler( + servicer.DeleteSession, + request_deserializer=google_dot_spanner_dot_v1_dot_spanner__pb2.DeleteSessionRequest.deserialize, + response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString, + ), + "ExecuteSql": grpc.unary_unary_rpc_method_handler( + servicer.ExecuteSql, + request_deserializer=google_dot_spanner_dot_v1_dot_spanner__pb2.ExecuteSqlRequest.deserialize, + response_serializer=google_dot_spanner_dot_v1_dot_result__set__pb2.ResultSet.serialize, + ), + "ExecuteStreamingSql": grpc.unary_stream_rpc_method_handler( + servicer.ExecuteStreamingSql, + request_deserializer=google_dot_spanner_dot_v1_dot_spanner__pb2.ExecuteSqlRequest.deserialize, + response_serializer=google_dot_spanner_dot_v1_dot_result__set__pb2.PartialResultSet.serialize, + ), + "ExecuteBatchDml": grpc.unary_unary_rpc_method_handler( + servicer.ExecuteBatchDml, + request_deserializer=google_dot_spanner_dot_v1_dot_spanner__pb2.ExecuteBatchDmlRequest.deserialize, + response_serializer=google_dot_spanner_dot_v1_dot_spanner__pb2.ExecuteBatchDmlResponse.serialize, + ), + "Read": grpc.unary_unary_rpc_method_handler( + servicer.Read, + request_deserializer=google_dot_spanner_dot_v1_dot_spanner__pb2.ReadRequest.deserialize, + response_serializer=google_dot_spanner_dot_v1_dot_result__set__pb2.ResultSet.serialize, + ), + "StreamingRead": grpc.unary_stream_rpc_method_handler( + servicer.StreamingRead, + request_deserializer=google_dot_spanner_dot_v1_dot_spanner__pb2.ReadRequest.deserialize, + response_serializer=google_dot_spanner_dot_v1_dot_result__set__pb2.PartialResultSet.serialize, + ), + "BeginTransaction": grpc.unary_unary_rpc_method_handler( + servicer.BeginTransaction, + request_deserializer=google_dot_spanner_dot_v1_dot_spanner__pb2.BeginTransactionRequest.deserialize, + 
response_serializer=google_dot_spanner_dot_v1_dot_transaction__pb2.Transaction.serialize, + ), + "Commit": grpc.unary_unary_rpc_method_handler( + servicer.Commit, + request_deserializer=google_dot_spanner_dot_v1_dot_spanner__pb2.CommitRequest.deserialize, + response_serializer=google_dot_spanner_dot_v1_dot_commit__response__pb2.CommitResponse.serialize, + ), + "Rollback": grpc.unary_unary_rpc_method_handler( + servicer.Rollback, + request_deserializer=google_dot_spanner_dot_v1_dot_spanner__pb2.RollbackRequest.deserialize, + response_serializer=google_dot_protobuf_dot_empty__pb2.Empty.SerializeToString, + ), + "PartitionQuery": grpc.unary_unary_rpc_method_handler( + servicer.PartitionQuery, + request_deserializer=google_dot_spanner_dot_v1_dot_spanner__pb2.PartitionQueryRequest.deserialize, + response_serializer=google_dot_spanner_dot_v1_dot_spanner__pb2.PartitionResponse.serialize, + ), + "PartitionRead": grpc.unary_unary_rpc_method_handler( + servicer.PartitionRead, + request_deserializer=google_dot_spanner_dot_v1_dot_spanner__pb2.PartitionReadRequest.deserialize, + response_serializer=google_dot_spanner_dot_v1_dot_spanner__pb2.PartitionResponse.serialize, + ), + "BatchWrite": grpc.unary_stream_rpc_method_handler( + servicer.BatchWrite, + request_deserializer=google_dot_spanner_dot_v1_dot_spanner__pb2.BatchWriteRequest.deserialize, + response_serializer=google_dot_spanner_dot_v1_dot_spanner__pb2.BatchWriteResponse.serialize, + ), + } + generic_handler = grpc.method_handlers_generic_handler( + "google.spanner.v1.Spanner", rpc_method_handlers + ) + server.add_generic_rpc_handlers((generic_handler,)) + server.add_registered_method_handlers( + "google.spanner.v1.Spanner", rpc_method_handlers + ) + + +# This class is part of an EXPERIMENTAL API. +class Spanner(object): + """Cloud Spanner API + + The Cloud Spanner API can be used to manage sessions and execute + transactions on data stored in Cloud Spanner databases. 
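Because this module lives under ``google.cloud.spanner_v1.testing``, the servicer above is presumably meant as the base class for in-process mock servers. A sketch of wiring a subclass into a local gRPC server (the subclass is hypothetical; unoverridden RPCs keep the ``UNIMPLEMENTED`` behavior from the base class):

.. code-block:: python

    from concurrent import futures

    import grpc

    from google.cloud.spanner_v1.testing.spanner_pb2_grpc import (
        SpannerServicer,
        add_SpannerServicer_to_server,
    )

    class FakeSpanner(SpannerServicer):
        # Override only the RPCs a given test exercises.
        pass

    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    add_SpannerServicer_to_server(FakeSpanner(), server)
    port = server.add_insecure_port("localhost:0")
    server.start()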
+ """ + + @staticmethod + def CreateSession( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.v1.Spanner/CreateSession", + google_dot_spanner_dot_v1_dot_spanner__pb2.CreateSessionRequest.to_json, + google_dot_spanner_dot_v1_dot_spanner__pb2.Session.from_json, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def BatchCreateSessions( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.v1.Spanner/BatchCreateSessions", + google_dot_spanner_dot_v1_dot_spanner__pb2.BatchCreateSessionsRequest.to_json, + google_dot_spanner_dot_v1_dot_spanner__pb2.BatchCreateSessionsResponse.from_json, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def GetSession( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.v1.Spanner/GetSession", + google_dot_spanner_dot_v1_dot_spanner__pb2.GetSessionRequest.to_json, + google_dot_spanner_dot_v1_dot_spanner__pb2.Session.from_json, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def ListSessions( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.v1.Spanner/ListSessions", + google_dot_spanner_dot_v1_dot_spanner__pb2.ListSessionsRequest.to_json, + google_dot_spanner_dot_v1_dot_spanner__pb2.ListSessionsResponse.from_json, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def DeleteSession( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.v1.Spanner/DeleteSession", + google_dot_spanner_dot_v1_dot_spanner__pb2.DeleteSessionRequest.to_json, + google_dot_protobuf_dot_empty__pb2.Empty.from_json, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def ExecuteSql( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.v1.Spanner/ExecuteSql", + 
google_dot_spanner_dot_v1_dot_spanner__pb2.ExecuteSqlRequest.to_json, + google_dot_spanner_dot_v1_dot_result__set__pb2.ResultSet.from_json, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def ExecuteStreamingSql( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_stream( + request, + target, + "/google.spanner.v1.Spanner/ExecuteStreamingSql", + google_dot_spanner_dot_v1_dot_spanner__pb2.ExecuteSqlRequest.to_json, + google_dot_spanner_dot_v1_dot_result__set__pb2.PartialResultSet.from_json, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def ExecuteBatchDml( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.v1.Spanner/ExecuteBatchDml", + google_dot_spanner_dot_v1_dot_spanner__pb2.ExecuteBatchDmlRequest.to_json, + google_dot_spanner_dot_v1_dot_spanner__pb2.ExecuteBatchDmlResponse.from_json, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def Read( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.v1.Spanner/Read", + google_dot_spanner_dot_v1_dot_spanner__pb2.ReadRequest.to_json, + google_dot_spanner_dot_v1_dot_result__set__pb2.ResultSet.from_json, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def StreamingRead( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_stream( + request, + target, + "/google.spanner.v1.Spanner/StreamingRead", + google_dot_spanner_dot_v1_dot_spanner__pb2.ReadRequest.to_json, + google_dot_spanner_dot_v1_dot_result__set__pb2.PartialResultSet.from_json, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def BeginTransaction( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.v1.Spanner/BeginTransaction", + google_dot_spanner_dot_v1_dot_spanner__pb2.BeginTransactionRequest.to_json, + google_dot_spanner_dot_v1_dot_transaction__pb2.Transaction.from_json, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def Commit( + request, + target, + options=(), + 
channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.v1.Spanner/Commit", + google_dot_spanner_dot_v1_dot_spanner__pb2.CommitRequest.to_json, + google_dot_spanner_dot_v1_dot_commit__response__pb2.CommitResponse.from_json, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def Rollback( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.v1.Spanner/Rollback", + google_dot_spanner_dot_v1_dot_spanner__pb2.RollbackRequest.to_json, + google_dot_protobuf_dot_empty__pb2.Empty.from_json, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def PartitionQuery( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.v1.Spanner/PartitionQuery", + google_dot_spanner_dot_v1_dot_spanner__pb2.PartitionQueryRequest.to_json, + google_dot_spanner_dot_v1_dot_spanner__pb2.PartitionResponse.from_json, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def PartitionRead( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_unary( + request, + target, + "/google.spanner.v1.Spanner/PartitionRead", + google_dot_spanner_dot_v1_dot_spanner__pb2.PartitionReadRequest.to_json, + google_dot_spanner_dot_v1_dot_spanner__pb2.PartitionResponse.from_json, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) + + @staticmethod + def BatchWrite( + request, + target, + options=(), + channel_credentials=None, + call_credentials=None, + insecure=False, + compression=None, + wait_for_ready=None, + timeout=None, + metadata=None, + ): + return grpc.experimental.unary_stream( + request, + target, + "/google.spanner.v1.Spanner/BatchWrite", + google_dot_spanner_dot_v1_dot_spanner__pb2.BatchWriteRequest.to_json, + google_dot_spanner_dot_v1_dot_spanner__pb2.BatchWriteResponse.from_json, + options, + channel_credentials, + insecure, + call_credentials, + compression, + wait_for_ready, + timeout, + metadata, + _registered_method=True, + ) diff --git a/google/cloud/spanner_v1/transaction.py b/google/cloud/spanner_v1/transaction.py index 3c1abc7326..5dd54eafe1 100644 --- a/google/cloud/spanner_v1/transaction.py +++ b/google/cloud/spanner_v1/transaction.py @@ -13,19 +13,39 @@ # limitations under the License. 
"""Spanner read-write transaction support.""" - +import functools from google.protobuf.struct_pb2 import Struct +from typing import Optional -from google.cloud._helpers import _pb_timestamp_to_datetime from google.cloud.spanner_v1._helpers import ( _make_value_pb, _merge_query_options, _metadata_with_prefix, + _metadata_with_leader_aware_routing, + _retry, + _check_rst_stream_error, + _merge_Transaction_Options, +) +from google.cloud.spanner_v1 import ( + CommitRequest, + CommitResponse, + ResultSet, + ExecuteBatchDmlResponse, + Mutation, ) -from google.cloud.spanner_v1.proto.transaction_pb2 import TransactionSelector -from google.cloud.spanner_v1.proto.transaction_pb2 import TransactionOptions +from google.cloud.spanner_v1 import ExecuteBatchDmlRequest +from google.cloud.spanner_v1 import ExecuteSqlRequest +from google.cloud.spanner_v1 import TransactionOptions +from google.cloud.spanner_v1._helpers import AtomicCounter from google.cloud.spanner_v1.snapshot import _SnapshotBase from google.cloud.spanner_v1.batch import _BatchBase +from google.cloud.spanner_v1._opentelemetry_tracing import add_span_event, trace_call +from google.cloud.spanner_v1 import RequestOptions +from google.cloud.spanner_v1.metrics.metrics_capture import MetricsCapture +from google.api_core import gapic_v1 +from google.api_core.exceptions import InternalServerError +from dataclasses import dataclass, field +from typing import Any class Transaction(_SnapshotBase, _BatchBase): @@ -37,100 +57,303 @@ class Transaction(_SnapshotBase, _BatchBase): :raises ValueError: if session has an existing transaction """ - committed = None - """Timestamp at which the transaction was successfully committed.""" - rolled_back = False - _multi_use = True - _execute_sql_count = 0 + exclude_txn_from_change_streams: bool = False + isolation_level: TransactionOptions.IsolationLevel = ( + TransactionOptions.IsolationLevel.ISOLATION_LEVEL_UNSPECIFIED + ) + read_lock_mode: TransactionOptions.ReadWrite.ReadLockMode = ( + TransactionOptions.ReadWrite.ReadLockMode.READ_LOCK_MODE_UNSPECIFIED + ) - def __init__(self, session): - if session._transaction is not None: - raise ValueError("Session has existing transaction.") + # Override defaults from _SnapshotBase. + _multi_use: bool = True + _read_only: bool = False + def __init__(self, session): super(Transaction, self).__init__(session) + self.rolled_back: bool = False - def _check_state(self): - """Helper for :meth:`commit` et al. + # If this transaction is used to retry a previous aborted transaction with a + # multiplexed session, the identifier for that transaction is used to increase + # the lock order of the new transaction (see :meth:`_build_transaction_options_pb`). + # This attribute should only be set by :meth:`~google.cloud.spanner_v1.session.Session.run_in_transaction`. + self._multiplexed_session_previous_transaction_id: Optional[bytes] = None - :raises: :exc:`ValueError` if the object's state is invalid for making - API requests. + def _build_transaction_options_pb(self) -> TransactionOptions: + """Builds and returns transaction options for this transaction. + + :rtype: :class:`~.transaction_pb2.TransactionOptions` + :returns: transaction options for this transaction. 
""" - if self._transaction_id is None: - raise ValueError("Transaction is not begun") - if self.committed is not None: - raise ValueError("Transaction is already committed") + default_transaction_options = ( + self._session._database.default_transaction_options.default_read_write_transaction_options + ) - if self.rolled_back: - raise ValueError("Transaction is already rolled back") + merge_transaction_options = TransactionOptions( + read_write=TransactionOptions.ReadWrite( + multiplexed_session_previous_transaction_id=self._multiplexed_session_previous_transaction_id, + read_lock_mode=self.read_lock_mode, + ), + exclude_txn_from_change_streams=self.exclude_txn_from_change_streams, + isolation_level=self.isolation_level, + ) - def _make_txn_selector(self): - """Helper for :meth:`read`. + return _merge_Transaction_Options( + defaultTransactionOptions=default_transaction_options, + mergeTransactionOptions=merge_transaction_options, + ) - :rtype: - :class:`~.transaction_pb2.TransactionSelector` - :returns: a selector configured for read-write transaction semantics. - """ - self._check_state() - return TransactionSelector(id=self._transaction_id) + def _execute_request( + self, + method, + request, + metadata, + trace_name=None, + attributes=None, + ): + """Helper method to execute request after fetching transaction selector. - def begin(self): - """Begin a transaction on the database. + :type method: callable + :param method: function returning iterator - :rtype: bytes - :returns: the ID for the newly-begun transaction. - :raises ValueError: - if the transaction is already begun, committed, or rolled back. + :type request: proto + :param request: request proto to call the method with + + :raises: ValueError: if the transaction is not ready to update. """ - if self._transaction_id is not None: - raise ValueError("Transaction already begun") if self.committed is not None: - raise ValueError("Transaction already committed") + raise ValueError("Transaction already committed.") + if self.rolled_back: + raise ValueError("Transaction already rolled back.") + + session = self._session + transaction = self._build_transaction_selector_pb() + request.transaction = transaction + + with trace_call( + trace_name, + session, + attributes, + observability_options=getattr( + session._database, "observability_options", None + ), + metadata=metadata, + ), MetricsCapture(): + method = functools.partial(method, request=request) + response = _retry( + method, + allowed_exceptions={InternalServerError: _check_rst_stream_error}, + ) + + return response + def rollback(self) -> None: + """Roll back a transaction on the database. + + :raises: ValueError: if the transaction is not ready to roll back. 
+        """
+
+        if self.committed is not None:
+            raise ValueError("Transaction already committed.")
         if self.rolled_back:
-            raise ValueError("Transaction is already rolled back")
+            raise ValueError("Transaction already rolled back.")

-        database = self._session._database
-        api = database.spanner_api
-        metadata = _metadata_with_prefix(database.name)
-        txn_options = TransactionOptions(read_write=TransactionOptions.ReadWrite())
-        response = api.begin_transaction(
-            self._session.name, txn_options, metadata=metadata
-        )
-        self._transaction_id = response.id
-        return self._transaction_id
+        if self._transaction_id is not None:
+            session = self._session
+            database = session._database
+            api = database.spanner_api
+
+            metadata = _metadata_with_prefix(database.name)
+            if database._route_to_leader_enabled:
+                metadata.append(
+                    _metadata_with_leader_aware_routing(
+                        database._route_to_leader_enabled
+                    )
+                )
+
+            observability_options = getattr(database, "observability_options", None)
+            with trace_call(
+                f"CloudSpanner.{type(self).__name__}.rollback",
+                session,
+                observability_options=observability_options,
+                metadata=metadata,
+            ) as span, MetricsCapture():
+                attempt = AtomicCounter(0)
+                nth_request = database._next_nth_request
+
+                def wrapped_method(*args, **kwargs):
+                    attempt.increment()
+                    rollback_method = functools.partial(
+                        api.rollback,
+                        session=session.name,
+                        transaction_id=self._transaction_id,
+                        metadata=database.metadata_with_request_id(
+                            nth_request,
+                            attempt.value,
+                            metadata,
+                            span,
+                        ),
+                    )
+                    return rollback_method(*args, **kwargs)
+
+                _retry(
+                    wrapped_method,
+                    allowed_exceptions={InternalServerError: _check_rst_stream_error},
+                )

-    def rollback(self):
-        """Roll back a transaction on the database."""
-        self._check_state()
-        database = self._session._database
-        api = database.spanner_api
-        metadata = _metadata_with_prefix(database.name)
-        api.rollback(self._session.name, self._transaction_id, metadata=metadata)
         self.rolled_back = True
-        del self._session._transaction

-    def commit(self):
+    def commit(
+        self, return_commit_stats=False, request_options=None, max_commit_delay=None
+    ):
         """Commit mutations to the database.

+        :type return_commit_stats: bool
+        :param return_commit_stats:
+            If true, the response will return commit stats which can be
+            accessed through ``commit_stats``.
+
+        :type request_options:
+            :class:`google.cloud.spanner_v1.types.RequestOptions`
+        :param request_options:
+            (Optional) Common options for this request.
+            If a dict is provided, it must be of the same form as the protobuf
+            message :class:`~google.cloud.spanner_v1.types.RequestOptions`.
+
+        :type max_commit_delay: :class:`datetime.timedelta`
+        :param max_commit_delay:
+            (Optional) The amount of latency this request is willing to incur
+            in order to improve throughput. See
+            :class:`~google.cloud.spanner_v1.types.MaxCommitDelay`.
+
         :rtype: datetime
         :returns: timestamp of the committed changes.
-        :raises ValueError: if there are no mutations to commit.
+
+        :raises: ValueError: if the transaction is not ready to commit.
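After a successful commit, the timestamp is also recorded on the higher-level objects, so most callers never invoke ``commit`` directly. A short sketch using the ``Batch`` context manager (resource names are placeholders):

.. code-block:: python

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    with database.batch() as batch:
        batch.delete("players", keyset=spanner.KeySet(all_=True))

    # The commit timestamp is recorded when the block exits cleanly.
    print("committed at", batch.committed)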
""" - self._check_state() - database = self._session._database + mutations = self._mutations + num_mutations = len(mutations) + + session = self._session + database = session._database api = database.spanner_api + metadata = _metadata_with_prefix(database.name) - response = api.commit( - self._session.name, - mutations=self._mutations, - transaction_id=self._transaction_id, + if database._route_to_leader_enabled: + metadata.append( + _metadata_with_leader_aware_routing(database._route_to_leader_enabled) + ) + + with trace_call( + name=f"CloudSpanner.{type(self).__name__}.commit", + session=session, + extra_attributes={"num_mutations": num_mutations}, + observability_options=getattr(database, "observability_options", None), metadata=metadata, - ) - self.committed = _pb_timestamp_to_datetime(response.commit_timestamp) - del self._session._transaction + ) as span, MetricsCapture(): + if self.committed is not None: + raise ValueError("Transaction already committed.") + if self.rolled_back: + raise ValueError("Transaction already rolled back.") + + if self._transaction_id is None: + if num_mutations > 0: + self._begin_mutations_only_transaction() + else: + raise ValueError("Transaction has not begun.") + + if request_options is None: + request_options = RequestOptions() + elif type(request_options) is dict: + request_options = RequestOptions(request_options) + if self.transaction_tag is not None: + request_options.transaction_tag = self.transaction_tag + + # Request tags are not supported for commit requests. + request_options.request_tag = None + + common_commit_request_args = { + "session": session.name, + "transaction_id": self._transaction_id, + "return_commit_stats": return_commit_stats, + "max_commit_delay": max_commit_delay, + "request_options": request_options, + } + + add_span_event(span, "Starting Commit") + + attempt = AtomicCounter(0) + nth_request = database._next_nth_request + + def wrapped_method(*args, **kwargs): + attempt.increment() + commit_request_args = { + "mutations": mutations, + **common_commit_request_args, + } + # Check if session is multiplexed (safely handle mock sessions) + is_multiplexed = getattr(self._session, "is_multiplexed", False) + if is_multiplexed and self._precommit_token is not None: + commit_request_args["precommit_token"] = self._precommit_token + + commit_method = functools.partial( + api.commit, + request=CommitRequest(**commit_request_args), + metadata=database.metadata_with_request_id( + nth_request, + attempt.value, + metadata, + span, + ), + ) + return commit_method(*args, **kwargs) + + commit_retry_event_name = "Transaction Commit Attempt Failed. Retrying" + + def before_next_retry(nth_retry, delay_in_seconds): + add_span_event( + span=span, + event_name=commit_retry_event_name, + event_attributes={ + "attempt": nth_retry, + "sleep_seconds": delay_in_seconds, + }, + ) + + commit_response_pb: CommitResponse = _retry( + wrapped_method, + allowed_exceptions={InternalServerError: _check_rst_stream_error}, + before_next_retry=before_next_retry, + ) + + # If the response contains a precommit token, the transaction did not + # successfully commit, and must be retried with the new precommit token. + # The mutations should not be included in the new request, and no further + # retries or exception handling should be performed. 
+ if commit_response_pb._pb.HasField("precommit_token"): + add_span_event(span, commit_retry_event_name) + nth_request = database._next_nth_request + commit_response_pb = api.commit( + request=CommitRequest( + precommit_token=commit_response_pb.precommit_token, + **common_commit_request_args, + ), + metadata=database.metadata_with_request_id( + nth_request, + 1, + metadata, + span, + ), + ) + + add_span_event(span, "Commit Done") + + self.committed = commit_response_pb.commit_timestamp + if return_commit_stats: + self.commit_stats = commit_response_pb.commit_stats + return self.committed @staticmethod @@ -153,20 +376,25 @@ def _make_params_pb(params, param_types): :raises ValueError: If ``params`` is None but ``param_types`` is not None. """ - if params is not None: - if param_types is None: - raise ValueError("Specify 'param_types' when passing 'params'.") + if params: return Struct( fields={key: _make_value_pb(value) for key, value in params.items()} ) - else: - if param_types is not None: - raise ValueError("Specify 'params' when passing 'param_types'.") - return None + return {} def execute_update( - self, dml, params=None, param_types=None, query_mode=None, query_options=None + self, + dml, + params=None, + param_types=None, + query_mode=None, + query_options=None, + request_options=None, + last_statement=False, + *, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, ): """Perform an ``ExecuteSql`` API request with DML. @@ -183,28 +411,61 @@ def execute_update( required if parameters are passed. :type query_mode: - :class:`~google.cloud.spanner_v1.proto.ExecuteSqlRequest.QueryMode` + :class:`~google.cloud.spanner_v1.types.ExecuteSqlRequest.QueryMode` :param query_mode: Mode governing return of results / query plan. See: `QueryMode `_. :type query_options: - :class:`~google.cloud.spanner_v1.proto.ExecuteSqlRequest.QueryOptions` + :class:`~google.cloud.spanner_v1.types.ExecuteSqlRequest.QueryOptions` or :class:`dict` :param query_options: (Optional) Options that are provided for query plan stability. + :type request_options: + :class:`google.cloud.spanner_v1.types.RequestOptions` + :param request_options: + (Optional) Common options for this request. + If a dict is provided, it must be of the same form as the protobuf + message :class:`~google.cloud.spanner_v1.types.RequestOptions`. + + :type last_statement: bool + :param last_statement: + If set to true, this option marks the end of the transaction. The + transaction should be committed or aborted after this statement + executes, and attempts to execute any other requests against this + transaction (including reads and queries) will be rejected. Mixing + mutations with statements that are marked as the last statement is + not allowed. + For DML statements, setting this option may cause some error + reporting to be deferred until commit time (e.g. validation of + unique constraints). Given this, successful execution of a DML + statement should not be assumed until the transaction commits. + + :type retry: :class:`~google.api_core.retry.Retry` + :param retry: (Optional) The retry settings for this request. + + :type timeout: float + :param timeout: (Optional) The timeout for this request. + :rtype: int :returns: Count of rows affected by the DML statement. 
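Typical usage binds values through ``params`` and ``param_types`` rather than interpolating them into the SQL string. A minimal sketch (the table name is hypothetical):

.. code-block:: python

    from google.cloud import spanner
    from google.cloud.spanner_v1 import param_types

    client = spanner.Client()
    database = client.instance("my-instance").database("my-database")

    def unit_of_work(transaction):
        row_count = transaction.execute_update(
            "UPDATE players SET name = @name WHERE player_id = @id",
            params={"name": "Grace", "id": 1},
            param_types={"name": param_types.STRING, "id": param_types.INT64},
        )

    database.run_in_transaction(unit_of_work)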
""" + + session = self._session + database = session._database + api = database.spanner_api + params_pb = self._make_params_pb(params, param_types) - database = self._session._database + metadata = _metadata_with_prefix(database.name) - transaction = self._make_txn_selector() - api = database.spanner_api + if database._route_to_leader_enabled: + metadata.append( + _metadata_with_leader_aware_routing(database._route_to_leader_enabled) + ) - seqno, self._execute_sql_count = ( - self._execute_sql_count, - self._execute_sql_count + 1, + seqno, self._execute_sql_request_count = ( + self._execute_sql_request_count, + self._execute_sql_request_count + 1, ) # Query-level options have higher precedence than client-level and @@ -212,20 +473,78 @@ def execute_update( default_query_options = database._instance._client._query_options query_options = _merge_query_options(default_query_options, query_options) - response = api.execute_sql( - self._session.name, - dml, - transaction=transaction, + if request_options is None: + request_options = RequestOptions() + elif type(request_options) is dict: + request_options = RequestOptions(request_options) + request_options.transaction_tag = self.transaction_tag + + trace_attributes = {"db.statement": dml} + + # If this request begins the transaction, we need to lock + # the transaction until the transaction ID is updated. + is_inline_begin = False + + if self._transaction_id is None: + is_inline_begin = True + self._lock.acquire() + + execute_sql_request = ExecuteSqlRequest( + session=session.name, + transaction=self._build_transaction_selector_pb(), + sql=dml, params=params_pb, param_types=param_types, query_mode=query_mode, query_options=query_options, seqno=seqno, - metadata=metadata, + request_options=request_options, + last_statement=last_statement, + ) + + nth_request = database._next_nth_request + attempt = AtomicCounter(0) + + def wrapped_method(*args, **kwargs): + attempt.increment() + execute_sql_method = functools.partial( + api.execute_sql, + request=execute_sql_request, + metadata=database.metadata_with_request_id( + nth_request, attempt.value, metadata + ), + retry=retry, + timeout=timeout, + ) + return execute_sql_method(*args, **kwargs) + + result_set_pb: ResultSet = self._execute_request( + wrapped_method, + execute_sql_request, + metadata, + f"CloudSpanner.{type(self).__name__}.execute_update", + trace_attributes, ) - return response.stats.row_count_exact - def batch_update(self, statements): + self._update_for_result_set_pb(result_set_pb) + + if is_inline_begin: + self._lock.release() + + if result_set_pb._pb.HasField("precommit_token"): + self._update_for_precommit_token_pb(result_set_pb.precommit_token) + + return result_set_pb.stats.row_count_exact + + def batch_update( + self, + statements, + request_options=None, + last_statement=False, + *, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + ): """Perform a batch of DML statements via an ``ExecuteBatchDml`` request. :type statements: @@ -239,50 +558,216 @@ def batch_update(self, statements): must also be passed, as a dict mapping names to the type of value passed in 'params'. + :type request_options: + :class:`google.cloud.spanner_v1.types.RequestOptions` + :param request_options: + (Optional) Common options for this request. + If a dict is provided, it must be of the same form as the protobuf + message :class:`~google.cloud.spanner_v1.types.RequestOptions`. 
+ + :type last_statement: bool + :param last_statement: + If set to true, this option marks the end of the transaction. The + transaction should be committed or aborted after this statement + executes, and attempts to execute any other requests against this + transaction (including reads and queries) will be rejected. Mixing + mutations with statements that are marked as the last statement is + not allowed. + For DML statements, setting this option may cause some error + reporting to be deferred until commit time (e.g. validation of + unique constraints). Given this, successful execution of a DML + statement should not be assumed until the transaction commits. + + :type retry: :class:`~google.api_core.retry.Retry` + :param retry: (Optional) The retry settings for this request. + + :type timeout: float + :param timeout: (Optional) The timeout for this request. + :rtype: Tuple(status, Sequence[int]) :returns: Status code, plus counts of rows affected by each completed DML - statement. Note that if the staus code is not ``OK``, the + statement. Note that if the status code is not ``OK``, the statement triggering the error will not have an entry in the list, nor will any statements following that one. """ + + session = self._session + database = session._database + api = database.spanner_api + parsed = [] for statement in statements: if isinstance(statement, str): - parsed.append({"sql": statement}) + parsed.append(ExecuteBatchDmlRequest.Statement(sql=statement)) else: dml, params, param_types = statement params_pb = self._make_params_pb(params, param_types) parsed.append( - {"sql": dml, "params": params_pb, "param_types": param_types} + ExecuteBatchDmlRequest.Statement( + sql=dml, params=params_pb, param_types=param_types + ) ) - database = self._session._database metadata = _metadata_with_prefix(database.name) - transaction = self._make_txn_selector() - api = database.spanner_api + if database._route_to_leader_enabled: + metadata.append( + _metadata_with_leader_aware_routing(database._route_to_leader_enabled) + ) - seqno, self._execute_sql_count = ( - self._execute_sql_count, - self._execute_sql_count + 1, + seqno, self._execute_sql_request_count = ( + self._execute_sql_request_count, + self._execute_sql_request_count + 1, ) - response = api.execute_batch_dml( - session=self._session.name, - transaction=transaction, + if request_options is None: + request_options = RequestOptions() + elif type(request_options) is dict: + request_options = RequestOptions(request_options) + request_options.transaction_tag = self.transaction_tag + + trace_attributes = { + # Get just the queries from the DML statement batch + "db.statement": ";".join([statement.sql for statement in parsed]) + } + + # If this request begins the transaction, we need to lock + # the transaction until the transaction ID is updated. 
+ is_inline_begin = False + + if self._transaction_id is None: + is_inline_begin = True + self._lock.acquire() + + execute_batch_dml_request = ExecuteBatchDmlRequest( + session=session.name, + transaction=self._build_transaction_selector_pb(), statements=parsed, seqno=seqno, - metadata=metadata, + request_options=request_options, + last_statements=last_statement, + ) + + nth_request = database._next_nth_request + attempt = AtomicCounter(0) + + def wrapped_method(*args, **kwargs): + attempt.increment() + execute_batch_dml_method = functools.partial( + api.execute_batch_dml, + request=execute_batch_dml_request, + metadata=database.metadata_with_request_id( + nth_request, attempt.value, metadata + ), + retry=retry, + timeout=timeout, + ) + return execute_batch_dml_method(*args, **kwargs) + + response_pb: ExecuteBatchDmlResponse = self._execute_request( + wrapped_method, + execute_batch_dml_request, + metadata, + "CloudSpanner.DMLTransaction", + trace_attributes, ) + + self._update_for_execute_batch_dml_response_pb(response_pb) + + if is_inline_begin: + self._lock.release() + + if ( + len(response_pb.result_sets) > 0 + and response_pb.result_sets[0].precommit_token + ): + self._update_for_precommit_token_pb( + response_pb.result_sets[0].precommit_token + ) + row_counts = [ - result_set.stats.row_count_exact for result_set in response.result_sets + result_set.stats.row_count_exact for result_set in response_pb.result_sets ] - return response.status, row_counts + + return response_pb.status, row_counts + + def _begin_transaction(self, mutation: Mutation = None) -> bytes: + """Begins a transaction on the database. + + :type mutation: :class:`~google.cloud.spanner_v1.mutation.Mutation` + :param mutation: (Optional) Mutation to include in the begin transaction + request. Required for mutation-only transactions with multiplexed sessions. + + :rtype: bytes + :returns: identifier for the transaction. + + :raises ValueError: if the transaction has already begun or is single-use. + """ + + if self.committed is not None: + raise ValueError("Transaction is already committed") + if self.rolled_back: + raise ValueError("Transaction is already rolled back") + + return super(Transaction, self)._begin_transaction(mutation=mutation) + + def _begin_mutations_only_transaction(self) -> None: + """Begins a mutations-only transaction on the database.""" + + mutation = self._get_mutation_for_begin_mutations_only_transaction() + self._begin_transaction(mutation=mutation) + + def _get_mutation_for_begin_mutations_only_transaction(self) -> Optional[Mutation]: + """Returns a mutation to use for beginning a mutations-only transaction. + Returns None if a mutation does not need to be included. + + :rtype: :class:`~google.cloud.spanner_v1.types.Mutation` + :returns: A mutation to use for beginning a mutations-only transaction. + """ + + # A mutation only needs to be included + # for transaction with multiplexed sessions. + if not self._session.is_multiplexed: + return None + + mutations: list[Mutation] = self._mutations + + # If there are multiple mutations, select the mutation as follows: + # 1. Choose a delete, update, or replace mutation instead + # of an insert mutation (since inserts could involve an auto- + # generated column and the client doesn't have that information). + # 2. If there are no delete, update, or replace mutations, choose + # the insert mutation that includes the largest number of values. 
+ + insert_mutation: Mutation = None + max_insert_values: int = -1 + + for mut in mutations: + if mut.insert: + num_values = len(mut.insert.values) + if num_values > max_insert_values: + insert_mutation = mut + max_insert_values = num_values + else: + return mut + + return insert_mutation + + def _update_for_execute_batch_dml_response_pb( + self, response_pb: ExecuteBatchDmlResponse + ) -> None: + """Update the transaction for the given execute batch DML response. + + :type response_pb: :class:`~google.cloud.spanner_v1.types.ExecuteBatchDmlResponse` + :param response_pb: The execute batch DML response to update the transaction with. + """ + # Only the first result set contains the result set metadata. + if len(response_pb.result_sets) > 0: + self._update_for_result_set_pb(response_pb.result_sets[0]) def __enter__(self): """Begin ``with`` block.""" - self.begin() return self def __exit__(self, exc_type, exc_val, exc_tb): @@ -291,3 +776,35 @@ def __exit__(self, exc_type, exc_val, exc_tb): self.commit() else: self.rollback() + + +@dataclass +class BatchTransactionId: + transaction_id: str + session_id: str + read_timestamp: Any + + +@dataclass +class DefaultTransactionOptions: + isolation_level: str = TransactionOptions.IsolationLevel.ISOLATION_LEVEL_UNSPECIFIED + read_lock_mode: str = ( + TransactionOptions.ReadWrite.ReadLockMode.READ_LOCK_MODE_UNSPECIFIED + ) + _defaultReadWriteTransactionOptions: Optional[TransactionOptions] = field( + init=False, repr=False + ) + + def __post_init__(self): + """Initialize _defaultReadWriteTransactionOptions automatically""" + self._defaultReadWriteTransactionOptions = TransactionOptions( + read_write=TransactionOptions.ReadWrite( + read_lock_mode=self.read_lock_mode, + ), + isolation_level=self.isolation_level, + ) + + @property + def default_read_write_transaction_options(self) -> TransactionOptions: + """Public accessor for _defaultReadWriteTransactionOptions""" + return self._defaultReadWriteTransactionOptions diff --git a/google/cloud/spanner_v1/types.py b/google/cloud/spanner_v1/types.py deleted file mode 100644 index 07c94ba871..0000000000 --- a/google/cloud/spanner_v1/types.py +++ /dev/null @@ -1,67 +0,0 @@ -# Copyright 2017 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -from __future__ import absolute_import -import sys - -from google.api import http_pb2 -from google.protobuf import descriptor_pb2 -from google.protobuf import duration_pb2 -from google.protobuf import empty_pb2 -from google.protobuf import struct_pb2 -from google.protobuf import timestamp_pb2 - -from google.api_core.protobuf_helpers import get_messages -from google.cloud.spanner_v1.proto import keys_pb2 -from google.cloud.spanner_v1.proto import mutation_pb2 -from google.cloud.spanner_v1.proto import query_plan_pb2 -from google.cloud.spanner_v1.proto import result_set_pb2 -from google.cloud.spanner_v1.proto import spanner_pb2 -from google.cloud.spanner_v1.proto import transaction_pb2 -from google.cloud.spanner_v1.proto import type_pb2 - - -_shared_modules = [ - http_pb2, - descriptor_pb2, - duration_pb2, - empty_pb2, - struct_pb2, - timestamp_pb2, -] - -_local_modules = [ - keys_pb2, - mutation_pb2, - query_plan_pb2, - result_set_pb2, - spanner_pb2, - transaction_pb2, - type_pb2, -] - -names = [] - -for module in _shared_modules: - for name, message in get_messages(module).items(): - setattr(sys.modules[__name__], name, message) - names.append(name) - -for module in _local_modules: - for name, message in get_messages(module).items(): - message.__module__ = "google.cloud.spanner_v1.types" - setattr(sys.modules[__name__], name, message) - names.append(name) - -__all__ = tuple(sorted(names)) diff --git a/google/cloud/spanner_v1/types/__init__.py b/google/cloud/spanner_v1/types/__init__.py new file mode 100644 index 0000000000..e2f87d65da --- /dev/null +++ b/google/cloud/spanner_v1/types/__init__.py @@ -0,0 +1,122 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +from .change_stream import ( + ChangeStreamRecord, +) +from .commit_response import ( + CommitResponse, +) +from .keys import ( + KeyRange, + KeySet, +) +from .mutation import ( + Mutation, +) +from .query_plan import ( + PlanNode, + QueryPlan, +) +from .result_set import ( + PartialResultSet, + ResultSet, + ResultSetMetadata, + ResultSetStats, +) +from .spanner import ( + BatchCreateSessionsRequest, + BatchCreateSessionsResponse, + BatchWriteRequest, + BatchWriteResponse, + BeginTransactionRequest, + CommitRequest, + CreateSessionRequest, + DeleteSessionRequest, + DirectedReadOptions, + ExecuteBatchDmlRequest, + ExecuteBatchDmlResponse, + ExecuteSqlRequest, + GetSessionRequest, + ListSessionsRequest, + ListSessionsResponse, + Partition, + PartitionOptions, + PartitionQueryRequest, + PartitionReadRequest, + PartitionResponse, + ReadRequest, + RequestOptions, + RollbackRequest, + Session, +) +from .transaction import ( + MultiplexedSessionPrecommitToken, + Transaction, + TransactionOptions, + TransactionSelector, +) +from .type import ( + StructType, + Type, + TypeAnnotationCode, + TypeCode, +) + +__all__ = ( + "ChangeStreamRecord", + "CommitResponse", + "KeyRange", + "KeySet", + "Mutation", + "PlanNode", + "QueryPlan", + "PartialResultSet", + "ResultSet", + "ResultSetMetadata", + "ResultSetStats", + "BatchCreateSessionsRequest", + "BatchCreateSessionsResponse", + "BatchWriteRequest", + "BatchWriteResponse", + "BeginTransactionRequest", + "CommitRequest", + "CreateSessionRequest", + "DeleteSessionRequest", + "DirectedReadOptions", + "ExecuteBatchDmlRequest", + "ExecuteBatchDmlResponse", + "ExecuteSqlRequest", + "GetSessionRequest", + "ListSessionsRequest", + "ListSessionsResponse", + "Partition", + "PartitionOptions", + "PartitionQueryRequest", + "PartitionReadRequest", + "PartitionResponse", + "ReadRequest", + "RequestOptions", + "RollbackRequest", + "Session", + "MultiplexedSessionPrecommitToken", + "Transaction", + "TransactionOptions", + "TransactionSelector", + "StructType", + "Type", + "TypeAnnotationCode", + "TypeCode", +) diff --git a/google/cloud/spanner_v1/types/change_stream.py b/google/cloud/spanner_v1/types/change_stream.py new file mode 100644 index 0000000000..762fc6a5d5 --- /dev/null +++ b/google/cloud/spanner_v1/types/change_stream.py @@ -0,0 +1,700 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +from __future__ import annotations + +from typing import MutableMapping, MutableSequence + +import proto # type: ignore + +from google.cloud.spanner_v1.types import type as gs_type +from google.protobuf import struct_pb2 # type: ignore +from google.protobuf import timestamp_pb2 # type: ignore + + +__protobuf__ = proto.module( + package="google.spanner.v1", + manifest={ + "ChangeStreamRecord", + }, +) + + +class ChangeStreamRecord(proto.Message): + r"""Spanner Change Streams enable customers to capture and stream out + changes to their Spanner databases in real-time. 
A change stream can
+    be created with option partition_mode='IMMUTABLE_KEY_RANGE' or
+    partition_mode='MUTABLE_KEY_RANGE'.
+
+    This message is only used in Change Streams created with the option
+    partition_mode='MUTABLE_KEY_RANGE'. Spanner automatically creates a
+    special Table-Valued Function (TVF) along with each Change Stream.
+    The function provides access to the change stream's records. The
+    function is named READ\_<change_stream_name> (where
+    <change_stream_name> is the name of the change stream), and it
+    returns a table with only one column called ChangeRecord.
+
+    This message has `oneof`_ fields (mutually exclusive fields).
+    For each oneof, at most one member field can be set at the same time.
+    Setting any member of the oneof automatically clears all other
+    members.
+
+    .. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
+
+    Attributes:
+        data_change_record (google.cloud.spanner_v1.types.ChangeStreamRecord.DataChangeRecord):
+            Data change record describing a data change
+            for a change stream partition.
+
+            This field is a member of `oneof`_ ``record``.
+        heartbeat_record (google.cloud.spanner_v1.types.ChangeStreamRecord.HeartbeatRecord):
+            Heartbeat record describing a heartbeat for a
+            change stream partition.
+
+            This field is a member of `oneof`_ ``record``.
+        partition_start_record (google.cloud.spanner_v1.types.ChangeStreamRecord.PartitionStartRecord):
+            Partition start record describing a new
+            change stream partition.
+
+            This field is a member of `oneof`_ ``record``.
+        partition_end_record (google.cloud.spanner_v1.types.ChangeStreamRecord.PartitionEndRecord):
+            Partition end record describing a terminated
+            change stream partition.
+
+            This field is a member of `oneof`_ ``record``.
+        partition_event_record (google.cloud.spanner_v1.types.ChangeStreamRecord.PartitionEventRecord):
+            Partition event record describing key range
+            changes for a change stream partition.
+
+            This field is a member of `oneof`_ ``record``.
+    """
+
+    class DataChangeRecord(proto.Message):
+        r"""A data change record contains a set of changes to a table
+        with the same modification type (insert, update, or delete)
+        committed at the same commit timestamp in one change stream
+        partition for the same transaction. Multiple data change records
+        can be returned for the same transaction across multiple change
+        stream partitions.
+
+        Attributes:
+            commit_timestamp (google.protobuf.timestamp_pb2.Timestamp):
+                Indicates the timestamp in which the change was committed.
+                DataChangeRecord.commit_timestamps,
+                PartitionStartRecord.start_timestamps,
+                PartitionEventRecord.commit_timestamps, and
+                PartitionEndRecord.end_timestamps can have the same value in
+                the same partition.
+            record_sequence (str):
+                Record sequence numbers are unique and monotonically
+                increasing (but not necessarily contiguous) for a specific
+                timestamp across record types in the same partition. To
+                guarantee ordered processing, the reader should process
+                records (of potentially different types) in record_sequence
+                order for a specific timestamp in the same partition.
+
+                The record sequence number ordering across partitions is
+                only meaningful in the context of a specific transaction.
+                Record sequence numbers are unique across partitions for a
+                specific transaction.
Sort the DataChangeRecords for the + same + [server_transaction_id][google.spanner.v1.ChangeStreamRecord.DataChangeRecord.server_transaction_id] + by + [record_sequence][google.spanner.v1.ChangeStreamRecord.DataChangeRecord.record_sequence] + to reconstruct the ordering of the changes within the + transaction. + server_transaction_id (str): + Provides a globally unique string that represents the + transaction in which the change was committed. Multiple + transactions can have the same commit timestamp, but each + transaction has a unique server_transaction_id. + is_last_record_in_transaction_in_partition (bool): + Indicates whether this is the last record for + a transaction in the current partition. Clients + can use this field to determine when all + records for a transaction in the current + partition have been received. + table (str): + Name of the table affected by the change. + column_metadata (MutableSequence[google.cloud.spanner_v1.types.ChangeStreamRecord.DataChangeRecord.ColumnMetadata]): + Provides metadata describing the columns associated with the + [mods][google.spanner.v1.ChangeStreamRecord.DataChangeRecord.mods] + listed below. + mods (MutableSequence[google.cloud.spanner_v1.types.ChangeStreamRecord.DataChangeRecord.Mod]): + Describes the changes that were made. + mod_type (google.cloud.spanner_v1.types.ChangeStreamRecord.DataChangeRecord.ModType): + Describes the type of change. + value_capture_type (google.cloud.spanner_v1.types.ChangeStreamRecord.DataChangeRecord.ValueCaptureType): + Describes the value capture type that was + specified in the change stream configuration + when this change was captured. + number_of_records_in_transaction (int): + Indicates the number of data change records + that are part of this transaction across all + change stream partitions. This value can be used + to assemble all the records associated with a + particular transaction. + number_of_partitions_in_transaction (int): + Indicates the number of partitions that + return data change records for this transaction. + This value can be helpful in assembling all + records associated with a particular + transaction. + transaction_tag (str): + Indicates the transaction tag associated with + this transaction. + is_system_transaction (bool): + Indicates whether the transaction is a system + transaction. System transactions include those + issued by time-to-live (TTL), column backfill, + etc. + """ + + class ModType(proto.Enum): + r"""Mod type describes the type of change Spanner applied to the data. + For example, if the client submits an INSERT_OR_UPDATE request, + Spanner will perform an insert if there is no existing row and + return ModType INSERT. Alternatively, if there is an existing row, + Spanner will perform an update and return ModType UPDATE. + + Values: + MOD_TYPE_UNSPECIFIED (0): + Not specified. + INSERT (10): + Indicates data was inserted. + UPDATE (20): + Indicates existing data was updated. + DELETE (30): + Indicates existing data was deleted. + """ + MOD_TYPE_UNSPECIFIED = 0 + INSERT = 10 + UPDATE = 20 + DELETE = 30 + + class ValueCaptureType(proto.Enum): + r"""Value capture type describes which values are recorded in the + data change record. + + Values: + VALUE_CAPTURE_TYPE_UNSPECIFIED (0): + Not specified. + OLD_AND_NEW_VALUES (10): + Records both old and new values of the + modified watched columns. + NEW_VALUES (20): + Records only new values of the modified + watched columns. 
+
+            NEW_ROW (30):
+                Records new values of all watched columns,
+                including modified and unmodified columns.
+            NEW_ROW_AND_OLD_VALUES (40):
+                Records the new values of all watched
+                columns, including modified and unmodified
+                columns. Also records the old values of the
+                modified columns.
+            """
+            VALUE_CAPTURE_TYPE_UNSPECIFIED = 0
+            OLD_AND_NEW_VALUES = 10
+            NEW_VALUES = 20
+            NEW_ROW = 30
+            NEW_ROW_AND_OLD_VALUES = 40
+
+        class ColumnMetadata(proto.Message):
+            r"""Metadata for a column.
+
+            Attributes:
+                name (str):
+                    Name of the column.
+                type_ (google.cloud.spanner_v1.types.Type):
+                    Type of the column.
+                is_primary_key (bool):
+                    Indicates whether the column is a primary key
+                    column.
+                ordinal_position (int):
+                    Ordinal position of the column based on the
+                    original table definition in the schema, starting
+                    with a value of 1.
+            """
+
+            name: str = proto.Field(
+                proto.STRING,
+                number=1,
+            )
+            type_: gs_type.Type = proto.Field(
+                proto.MESSAGE,
+                number=2,
+                message=gs_type.Type,
+            )
+            is_primary_key: bool = proto.Field(
+                proto.BOOL,
+                number=3,
+            )
+            ordinal_position: int = proto.Field(
+                proto.INT64,
+                number=4,
+            )
+
+        class ModValue(proto.Message):
+            r"""Returns the value and associated metadata for a particular field of
+            the
+            [Mod][google.spanner.v1.ChangeStreamRecord.DataChangeRecord.Mod].
+
+            Attributes:
+                column_metadata_index (int):
+                    Index within the repeated
+                    [column_metadata][google.spanner.v1.ChangeStreamRecord.DataChangeRecord.column_metadata]
+                    field, to obtain the column metadata for the column that was
+                    modified.
+                value (google.protobuf.struct_pb2.Value):
+                    The value of the column.
+            """
+
+            column_metadata_index: int = proto.Field(
+                proto.INT32,
+                number=1,
+            )
+            value: struct_pb2.Value = proto.Field(
+                proto.MESSAGE,
+                number=2,
+                message=struct_pb2.Value,
+            )
+
+        class Mod(proto.Message):
+            r"""A mod describes all data changes in a watched table row.
+
+            Attributes:
+                keys (MutableSequence[google.cloud.spanner_v1.types.ChangeStreamRecord.DataChangeRecord.ModValue]):
+                    Returns the value of the primary key of the
+                    modified row.
+                old_values (MutableSequence[google.cloud.spanner_v1.types.ChangeStreamRecord.DataChangeRecord.ModValue]):
+                    Returns the old values before the change for the modified
+                    columns. Always empty for
+                    [INSERT][google.spanner.v1.ChangeStreamRecord.DataChangeRecord.ModType.INSERT],
+                    or if old values are not being captured, as specified by
+                    [value_capture_type][google.spanner.v1.ChangeStreamRecord.DataChangeRecord.ValueCaptureType].
+                new_values (MutableSequence[google.cloud.spanner_v1.types.ChangeStreamRecord.DataChangeRecord.ModValue]):
+                    Returns the new values after the change for the modified
+                    columns. Always empty for
+                    [DELETE][google.spanner.v1.ChangeStreamRecord.DataChangeRecord.ModType.DELETE].
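+
+            A minimal reader-side sketch (the helper name is assumed, not
+            part of this API) of resolving a mod's new values to column
+            names via the parent record's ``column_metadata``:
+
+            ::
+
+                def new_values_by_name(data_change_record, mod):
+                    # Each ModValue points into the record's shared
+                    # column_metadata sequence by index.
+                    return {
+                        data_change_record.column_metadata[
+                            v.column_metadata_index
+                        ].name: v.value
+                        for v in mod.new_values
+                    }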
+ """ + + keys: MutableSequence[ + "ChangeStreamRecord.DataChangeRecord.ModValue" + ] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message="ChangeStreamRecord.DataChangeRecord.ModValue", + ) + old_values: MutableSequence[ + "ChangeStreamRecord.DataChangeRecord.ModValue" + ] = proto.RepeatedField( + proto.MESSAGE, + number=2, + message="ChangeStreamRecord.DataChangeRecord.ModValue", + ) + new_values: MutableSequence[ + "ChangeStreamRecord.DataChangeRecord.ModValue" + ] = proto.RepeatedField( + proto.MESSAGE, + number=3, + message="ChangeStreamRecord.DataChangeRecord.ModValue", + ) + + commit_timestamp: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=1, + message=timestamp_pb2.Timestamp, + ) + record_sequence: str = proto.Field( + proto.STRING, + number=2, + ) + server_transaction_id: str = proto.Field( + proto.STRING, + number=3, + ) + is_last_record_in_transaction_in_partition: bool = proto.Field( + proto.BOOL, + number=4, + ) + table: str = proto.Field( + proto.STRING, + number=5, + ) + column_metadata: MutableSequence[ + "ChangeStreamRecord.DataChangeRecord.ColumnMetadata" + ] = proto.RepeatedField( + proto.MESSAGE, + number=6, + message="ChangeStreamRecord.DataChangeRecord.ColumnMetadata", + ) + mods: MutableSequence[ + "ChangeStreamRecord.DataChangeRecord.Mod" + ] = proto.RepeatedField( + proto.MESSAGE, + number=7, + message="ChangeStreamRecord.DataChangeRecord.Mod", + ) + mod_type: "ChangeStreamRecord.DataChangeRecord.ModType" = proto.Field( + proto.ENUM, + number=8, + enum="ChangeStreamRecord.DataChangeRecord.ModType", + ) + value_capture_type: "ChangeStreamRecord.DataChangeRecord.ValueCaptureType" = ( + proto.Field( + proto.ENUM, + number=9, + enum="ChangeStreamRecord.DataChangeRecord.ValueCaptureType", + ) + ) + number_of_records_in_transaction: int = proto.Field( + proto.INT32, + number=10, + ) + number_of_partitions_in_transaction: int = proto.Field( + proto.INT32, + number=11, + ) + transaction_tag: str = proto.Field( + proto.STRING, + number=12, + ) + is_system_transaction: bool = proto.Field( + proto.BOOL, + number=13, + ) + + class HeartbeatRecord(proto.Message): + r"""A heartbeat record is returned as a progress indicator, when + there are no data changes or any other partition record types in + the change stream partition. + + Attributes: + timestamp (google.protobuf.timestamp_pb2.Timestamp): + Indicates the timestamp at which the query + has returned all the records in the change + stream partition with timestamp <= heartbeat + timestamp. The heartbeat timestamp will not be + the same as the timestamps of other record types + in the same partition. + """ + + timestamp: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=1, + message=timestamp_pb2.Timestamp, + ) + + class PartitionStartRecord(proto.Message): + r"""A partition start record serves as a notification that the + client should schedule the partitions to be queried. + PartitionStartRecord returns information about one or more + partitions. + + Attributes: + start_timestamp (google.protobuf.timestamp_pb2.Timestamp): + Start timestamp at which the partitions should be queried to + return change stream records with timestamps >= + start_timestamp. DataChangeRecord.commit_timestamps, + PartitionStartRecord.start_timestamps, + PartitionEventRecord.commit_timestamps, and + PartitionEndRecord.end_timestamps can have the same value in + the same partition. 
+ record_sequence (str): + Record sequence numbers are unique and monotonically + increasing (but not necessarily contiguous) for a specific + timestamp across record types in the same partition. To + guarantee ordered processing, the reader should process + records (of potentially different types) in record_sequence + order for a specific timestamp in the same partition. + partition_tokens (MutableSequence[str]): + Unique partition identifiers to be used in + queries. + """ + + start_timestamp: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=1, + message=timestamp_pb2.Timestamp, + ) + record_sequence: str = proto.Field( + proto.STRING, + number=2, + ) + partition_tokens: MutableSequence[str] = proto.RepeatedField( + proto.STRING, + number=3, + ) + + class PartitionEndRecord(proto.Message): + r"""A partition end record serves as a notification that the + client should stop reading the partition. No further records are + expected to be retrieved on it. + + Attributes: + end_timestamp (google.protobuf.timestamp_pb2.Timestamp): + End timestamp at which the change stream partition is + terminated. All changes generated by this partition will + have timestamps <= end_timestamp. + DataChangeRecord.commit_timestamps, + PartitionStartRecord.start_timestamps, + PartitionEventRecord.commit_timestamps, and + PartitionEndRecord.end_timestamps can have the same value in + the same partition. PartitionEndRecord is the last record + returned for a partition. + record_sequence (str): + Record sequence numbers are unique and monotonically + increasing (but not necessarily contiguous) for a specific + timestamp across record types in the same partition. To + guarantee ordered processing, the reader should process + records (of potentially different types) in record_sequence + order for a specific timestamp in the same partition. + partition_token (str): + Unique partition identifier describing the terminated change + stream partition. + [partition_token][google.spanner.v1.ChangeStreamRecord.PartitionEndRecord.partition_token] + is equal to the partition token of the change stream + partition currently queried to return this + PartitionEndRecord. + """ + + end_timestamp: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=1, + message=timestamp_pb2.Timestamp, + ) + record_sequence: str = proto.Field( + proto.STRING, + number=2, + ) + partition_token: str = proto.Field( + proto.STRING, + number=3, + ) + + class PartitionEventRecord(proto.Message): + r"""A partition event record describes key range changes for a change + stream partition. The changes to a row defined by its primary key + can be captured in one change stream partition for a specific time + range, and then be captured in a different change stream partition + for a different time range. This movement of key ranges across + change stream partitions is a reflection of activities, such as + Spanner's dynamic splitting and load balancing, etc. Processing this + event is needed if users want to guarantee processing of the changes + for any key in timestamp order. If time ordered processing of + changes for a primary key is not needed, this event can be ignored. + To guarantee time ordered processing for each primary key, if the + event describes move-ins, the reader of this partition needs to wait + until the readers of the source partitions have processed all + records with timestamps <= this + PartitionEventRecord.commit_timestamp, before advancing beyond this + PartitionEventRecord. 
If the event describes move-outs, the reader + can notify the readers of the destination partitions that they can + continue processing. + + Attributes: + commit_timestamp (google.protobuf.timestamp_pb2.Timestamp): + Indicates the commit timestamp at which the key range change + occurred. DataChangeRecord.commit_timestamps, + PartitionStartRecord.start_timestamps, + PartitionEventRecord.commit_timestamps, and + PartitionEndRecord.end_timestamps can have the same value in + the same partition. + record_sequence (str): + Record sequence numbers are unique and monotonically + increasing (but not necessarily contiguous) for a specific + timestamp across record types in the same partition. To + guarantee ordered processing, the reader should process + records (of potentially different types) in record_sequence + order for a specific timestamp in the same partition. + partition_token (str): + Unique partition identifier describing the partition this + event occurred on. + [partition_token][google.spanner.v1.ChangeStreamRecord.PartitionEventRecord.partition_token] + is equal to the partition token of the change stream + partition currently queried to return this + PartitionEventRecord. + move_in_events (MutableSequence[google.cloud.spanner_v1.types.ChangeStreamRecord.PartitionEventRecord.MoveInEvent]): + Set when one or more key ranges are moved into the change + stream partition identified by + [partition_token][google.spanner.v1.ChangeStreamRecord.PartitionEventRecord.partition_token]. + + Example: Two key ranges are moved into partition (P1) from + partition (P2) and partition (P3) in a single transaction at + timestamp T. + + The PartitionEventRecord returned in P1 will reflect the + move as: + + PartitionEventRecord { commit_timestamp: T partition_token: + "P1" move_in_events { source_partition_token: "P2" } + move_in_events { source_partition_token: "P3" } } + + The PartitionEventRecord returned in P2 will reflect the + move as: + + PartitionEventRecord { commit_timestamp: T partition_token: + "P2" move_out_events { destination_partition_token: "P1" } } + + The PartitionEventRecord returned in P3 will reflect the + move as: + + PartitionEventRecord { commit_timestamp: T partition_token: + "P3" move_out_events { destination_partition_token: "P1" } } + move_out_events (MutableSequence[google.cloud.spanner_v1.types.ChangeStreamRecord.PartitionEventRecord.MoveOutEvent]): + Set when one or more key ranges are moved out of the change + stream partition identified by + [partition_token][google.spanner.v1.ChangeStreamRecord.PartitionEventRecord.partition_token]. + + Example: Two key ranges are moved out of partition (P1) to + partition (P2) and partition (P3) in a single transaction at + timestamp T. 
+
+                The PartitionEventRecord returned in P1 will reflect the
+                move as:
+
+                PartitionEventRecord { commit_timestamp: T partition_token:
+                "P1" move_out_events { destination_partition_token: "P2" }
+                move_out_events { destination_partition_token: "P3" } }
+
+                The PartitionEventRecord returned in P2 will reflect the
+                move as:
+
+                PartitionEventRecord { commit_timestamp: T partition_token:
+                "P2" move_in_events { source_partition_token: "P1" } }
+
+                The PartitionEventRecord returned in P3 will reflect the
+                move as:
+
+                PartitionEventRecord { commit_timestamp: T partition_token:
+                "P3" move_in_events { source_partition_token: "P1" } }
+        """
+
+        class MoveInEvent(proto.Message):
+            r"""Describes move-in of the key ranges into the change stream partition
+            identified by
+            [partition_token][google.spanner.v1.ChangeStreamRecord.PartitionEventRecord.partition_token].
+
+            To keep processing the changes for a particular key in timestamp
+            order, the query processing the change stream partition identified
+            by
+            [partition_token][google.spanner.v1.ChangeStreamRecord.PartitionEventRecord.partition_token]
+            should not advance beyond the partition event record commit
+            timestamp until the queries processing the source change stream
+            partitions have processed all change stream records with timestamps
+            <= the partition event record commit timestamp.
+
+            Attributes:
+                source_partition_token (str):
+                    A unique partition identifier describing the
+                    source change stream partition that recorded
+                    changes for the key range that is moving into
+                    this partition.
+            """
+
+            source_partition_token: str = proto.Field(
+                proto.STRING,
+                number=1,
+            )
+
+        class MoveOutEvent(proto.Message):
+            r"""Describes move-out of the key ranges out of the change stream
+            partition identified by
+            [partition_token][google.spanner.v1.ChangeStreamRecord.PartitionEventRecord.partition_token].
+
+            To keep processing the changes for a particular key in timestamp
+            order, the query processing the
+            [MoveOutEvent][google.spanner.v1.ChangeStreamRecord.PartitionEventRecord.MoveOutEvent]
+            in the partition identified by
+            [partition_token][google.spanner.v1.ChangeStreamRecord.PartitionEventRecord.partition_token]
+            should inform the queries processing the destination partitions that
+            they can unblock and proceed processing records past the
+            [commit_timestamp][google.spanner.v1.ChangeStreamRecord.PartitionEventRecord.commit_timestamp].
+
+            Attributes:
+                destination_partition_token (str):
+                    A unique partition identifier describing the
+                    destination change stream partition that will
+                    record changes for the key range that is moving
+                    out of this partition.
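+
+            A hedged sketch (the ``notify_destination`` helper is assumed,
+            not part of this API) of the hand-off described above:
+
+            ::
+
+                for event in partition_event_record.move_out_events:
+                    # Tell the reader of each destination partition that it
+                    # may proceed past this event's commit_timestamp.
+                    notify_destination(
+                        event.destination_partition_token,
+                        partition_event_record.commit_timestamp,
+                    )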
+ """ + + destination_partition_token: str = proto.Field( + proto.STRING, + number=1, + ) + + commit_timestamp: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=1, + message=timestamp_pb2.Timestamp, + ) + record_sequence: str = proto.Field( + proto.STRING, + number=2, + ) + partition_token: str = proto.Field( + proto.STRING, + number=3, + ) + move_in_events: MutableSequence[ + "ChangeStreamRecord.PartitionEventRecord.MoveInEvent" + ] = proto.RepeatedField( + proto.MESSAGE, + number=4, + message="ChangeStreamRecord.PartitionEventRecord.MoveInEvent", + ) + move_out_events: MutableSequence[ + "ChangeStreamRecord.PartitionEventRecord.MoveOutEvent" + ] = proto.RepeatedField( + proto.MESSAGE, + number=5, + message="ChangeStreamRecord.PartitionEventRecord.MoveOutEvent", + ) + + data_change_record: DataChangeRecord = proto.Field( + proto.MESSAGE, + number=1, + oneof="record", + message=DataChangeRecord, + ) + heartbeat_record: HeartbeatRecord = proto.Field( + proto.MESSAGE, + number=2, + oneof="record", + message=HeartbeatRecord, + ) + partition_start_record: PartitionStartRecord = proto.Field( + proto.MESSAGE, + number=3, + oneof="record", + message=PartitionStartRecord, + ) + partition_end_record: PartitionEndRecord = proto.Field( + proto.MESSAGE, + number=4, + oneof="record", + message=PartitionEndRecord, + ) + partition_event_record: PartitionEventRecord = proto.Field( + proto.MESSAGE, + number=5, + oneof="record", + message=PartitionEventRecord, + ) + + +__all__ = tuple(sorted(__protobuf__.manifest)) diff --git a/google/cloud/spanner_v1/types/commit_response.py b/google/cloud/spanner_v1/types/commit_response.py new file mode 100644 index 0000000000..8214973e5a --- /dev/null +++ b/google/cloud/spanner_v1/types/commit_response.py @@ -0,0 +1,104 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +from __future__ import annotations + +from typing import MutableMapping, MutableSequence + +import proto # type: ignore + +from google.cloud.spanner_v1.types import transaction +from google.protobuf import timestamp_pb2 # type: ignore + + +__protobuf__ = proto.module( + package="google.spanner.v1", + manifest={ + "CommitResponse", + }, +) + + +class CommitResponse(proto.Message): + r"""The response for [Commit][google.spanner.v1.Spanner.Commit]. + + .. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields + + Attributes: + commit_timestamp (google.protobuf.timestamp_pb2.Timestamp): + The Cloud Spanner timestamp at which the + transaction committed. + commit_stats (google.cloud.spanner_v1.types.CommitResponse.CommitStats): + The statistics about this ``Commit``. Not returned by + default. For more information, see + [CommitRequest.return_commit_stats][google.spanner.v1.CommitRequest.return_commit_stats]. + precommit_token (google.cloud.spanner_v1.types.MultiplexedSessionPrecommitToken): + If specified, transaction has not committed + yet. 
You must retry the commit with the new
+            precommit token.
+
+            This field is a member of `oneof`_ ``MultiplexedSessionRetry``.
+        snapshot_timestamp (google.protobuf.timestamp_pb2.Timestamp):
+            If ``TransactionOptions.isolation_level`` is set to
+            ``IsolationLevel.REPEATABLE_READ``, then the snapshot
+            timestamp is the timestamp at which all reads in the
+            transaction ran. This timestamp is never returned.
+    """
+
+    class CommitStats(proto.Message):
+        r"""Additional statistics about a commit.
+
+        Attributes:
+            mutation_count (int):
+                The total number of mutations for the transaction. Knowing
+                the ``mutation_count`` value can help you maximize the
+                number of mutations in a transaction and minimize the number
+                of API round trips. You can also monitor this value to
+                prevent transactions from exceeding the system
+                `limit <https://cloud.google.com/spanner/quotas#limits_for_creating_reading_updating_and_deleting_data>`__.
+                If the number of mutations exceeds the limit, the server
+                returns
+                `INVALID_ARGUMENT <https://cloud.google.com/spanner/docs/reference/rest/v1/Code#ENUM_VALUES.INVALID_ARGUMENT>`__.
+        """
+
+        mutation_count: int = proto.Field(
+            proto.INT64,
+            number=1,
+        )
+
+    commit_timestamp: timestamp_pb2.Timestamp = proto.Field(
+        proto.MESSAGE,
+        number=1,
+        message=timestamp_pb2.Timestamp,
+    )
+    commit_stats: CommitStats = proto.Field(
+        proto.MESSAGE,
+        number=2,
+        message=CommitStats,
+    )
+    precommit_token: transaction.MultiplexedSessionPrecommitToken = proto.Field(
+        proto.MESSAGE,
+        number=4,
+        oneof="MultiplexedSessionRetry",
+        message=transaction.MultiplexedSessionPrecommitToken,
+    )
+    snapshot_timestamp: timestamp_pb2.Timestamp = proto.Field(
+        proto.MESSAGE,
+        number=5,
+        message=timestamp_pb2.Timestamp,
+    )
+
+
+__all__ = tuple(sorted(__protobuf__.manifest))
diff --git a/google/cloud/spanner_v1/types/keys.py b/google/cloud/spanner_v1/types/keys.py
new file mode 100644
index 0000000000..15272ab689
--- /dev/null
+++ b/google/cloud/spanner_v1/types/keys.py
@@ -0,0 +1,248 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+from __future__ import annotations
+
+from typing import MutableMapping, MutableSequence
+
+import proto  # type: ignore
+
+from google.protobuf import struct_pb2  # type: ignore
+
+
+__protobuf__ = proto.module(
+    package="google.spanner.v1",
+    manifest={
+        "KeyRange",
+        "KeySet",
+    },
+)
+
+
+class KeyRange(proto.Message):
+    r"""KeyRange represents a range of rows in a table or index.
+
+    A range has a start key and an end key. These keys can be open or
+    closed, indicating if the range includes rows with that key.
+
+    Keys are represented by lists, where the ith value in the list
+    corresponds to the ith component of the table or index primary key.
+    Individual values are encoded as described
+    [here][google.spanner.v1.TypeCode].
+
+    For example, consider the following table definition:
+
+    ::
+
+        CREATE TABLE UserEvents (
+          UserName STRING(MAX),
+          EventDate STRING(10)
+        ) PRIMARY KEY(UserName, EventDate);
+
+    The following keys name rows in this table:
+
+    ::
+
+        ["Bob", "2014-09-23"]
+        ["Alfred", "2015-06-12"]
+
+    Since the ``UserEvents`` table's ``PRIMARY KEY`` clause names two
+    columns, each ``UserEvents`` key has two elements; the first is the
+    ``UserName``, and the second is the ``EventDate``.
+
+    Key ranges with multiple components are interpreted
+    lexicographically by component using the table or index key's
+    declared sort order. For example, the following range returns all
+    events for user ``"Bob"`` that occurred in the year 2015:
+
+    ::
+
+        "start_closed": ["Bob", "2015-01-01"]
+        "end_closed": ["Bob", "2015-12-31"]
+
+    Start and end keys can omit trailing key components. This affects
+    the inclusion and exclusion of rows that exactly match the provided
+    key components: if the key is closed, then rows that exactly match
+    the provided components are included; if the key is open, then rows
+    that exactly match are not included.
+
+    For example, the following range includes all events for ``"Bob"``
+    that occurred during and after the year 2000:
+
+    ::
+
+        "start_closed": ["Bob", "2000-01-01"]
+        "end_closed": ["Bob"]
+
+    The next example retrieves all events for ``"Bob"``:
+
+    ::
+
+        "start_closed": ["Bob"]
+        "end_closed": ["Bob"]
+
+    To retrieve events before the year 2000:
+
+    ::
+
+        "start_closed": ["Bob"]
+        "end_open": ["Bob", "2000-01-01"]
+
+    The following range includes all rows in the table:
+
+    ::
+
+        "start_closed": []
+        "end_closed": []
+
+    This range returns all users whose ``UserName`` begins with any
+    character from A to C:
+
+    ::
+
+        "start_closed": ["A"]
+        "end_open": ["D"]
+
+    This range returns all users whose ``UserName`` begins with B:
+
+    ::
+
+        "start_closed": ["B"]
+        "end_open": ["C"]
+
+    Key ranges honor column sort order. For example, suppose a table is
+    defined as follows:
+
+    ::
+
+        CREATE TABLE DescendingSortedTable (
+          Key INT64,
+          ...
+        ) PRIMARY KEY(Key DESC);
+
+    The following range retrieves all rows with key values between 1 and
+    100 inclusive:
+
+    ::
+
+        "start_closed": ["100"]
+        "end_closed": ["1"]
+
+    Note that 100 is passed as the start, and 1 is passed as the end,
+    because ``Key`` is a descending column in the schema.
+
+    This message has `oneof`_ fields (mutually exclusive fields).
+    For each oneof, at most one member field can be set at the same time.
+    Setting any member of the oneof automatically clears all other
+    members.
+
+    .. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
+
+    Attributes:
+        start_closed (google.protobuf.struct_pb2.ListValue):
+            If the start is closed, then the range includes all rows
+            whose first ``len(start_closed)`` key columns exactly match
+            ``start_closed``.
+
+            This field is a member of `oneof`_ ``start_key_type``.
+        start_open (google.protobuf.struct_pb2.ListValue):
+            If the start is open, then the range excludes rows whose
+            first ``len(start_open)`` key columns exactly match
+            ``start_open``.
+
+            This field is a member of `oneof`_ ``start_key_type``.
+        end_closed (google.protobuf.struct_pb2.ListValue):
+            If the end is closed, then the range includes all rows whose
+            first ``len(end_closed)`` key columns exactly match
+            ``end_closed``.
+
+            This field is a member of `oneof`_ ``end_key_type``.
+ end_open (google.protobuf.struct_pb2.ListValue): + If the end is open, then the range excludes rows whose first + ``len(end_open)`` key columns exactly match ``end_open``. + + This field is a member of `oneof`_ ``end_key_type``. + """ + + start_closed: struct_pb2.ListValue = proto.Field( + proto.MESSAGE, + number=1, + oneof="start_key_type", + message=struct_pb2.ListValue, + ) + start_open: struct_pb2.ListValue = proto.Field( + proto.MESSAGE, + number=2, + oneof="start_key_type", + message=struct_pb2.ListValue, + ) + end_closed: struct_pb2.ListValue = proto.Field( + proto.MESSAGE, + number=3, + oneof="end_key_type", + message=struct_pb2.ListValue, + ) + end_open: struct_pb2.ListValue = proto.Field( + proto.MESSAGE, + number=4, + oneof="end_key_type", + message=struct_pb2.ListValue, + ) + + +class KeySet(proto.Message): + r"""``KeySet`` defines a collection of Cloud Spanner keys and/or key + ranges. All the keys are expected to be in the same table or index. + The keys need not be sorted in any particular way. + + If the same key is specified multiple times in the set (for example + if two ranges, two keys, or a key and a range overlap), Cloud + Spanner behaves as if the key were only specified once. + + Attributes: + keys (MutableSequence[google.protobuf.struct_pb2.ListValue]): + A list of specific keys. Entries in ``keys`` should have + exactly as many elements as there are columns in the primary + or index key with which this ``KeySet`` is used. Individual + key values are encoded as described + [here][google.spanner.v1.TypeCode]. + ranges (MutableSequence[google.cloud.spanner_v1.types.KeyRange]): + A list of key ranges. See + [KeyRange][google.spanner.v1.KeyRange] for more information + about key range specifications. + all_ (bool): + For convenience ``all`` can be set to ``true`` to indicate + that this ``KeySet`` matches all keys in the table or index. + Note that any keys specified in ``keys`` or ``ranges`` are + only yielded once. + """ + + keys: MutableSequence[struct_pb2.ListValue] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message=struct_pb2.ListValue, + ) + ranges: MutableSequence["KeyRange"] = proto.RepeatedField( + proto.MESSAGE, + number=2, + message="KeyRange", + ) + all_: bool = proto.Field( + proto.BOOL, + number=3, + ) + + +__all__ = tuple(sorted(__protobuf__.manifest)) diff --git a/google/cloud/spanner_v1/types/mutation.py b/google/cloud/spanner_v1/types/mutation.py new file mode 100644 index 0000000000..8389910fc0 --- /dev/null +++ b/google/cloud/spanner_v1/types/mutation.py @@ -0,0 +1,201 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +from __future__ import annotations + +from typing import MutableMapping, MutableSequence + +import proto # type: ignore + +from google.cloud.spanner_v1.types import keys +from google.protobuf import struct_pb2 # type: ignore + + +__protobuf__ = proto.module( + package="google.spanner.v1", + manifest={ + "Mutation", + }, +) + + +class Mutation(proto.Message): + r"""A modification to one or more Cloud Spanner rows. Mutations can be + applied to a Cloud Spanner database by sending them in a + [Commit][google.spanner.v1.Spanner.Commit] call. + + This message has `oneof`_ fields (mutually exclusive fields). + For each oneof, at most one member field can be set at the same time. + Setting any member of the oneof automatically clears all other + members. + + .. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields + + Attributes: + insert (google.cloud.spanner_v1.types.Mutation.Write): + Insert new rows in a table. If any of the rows already + exist, the write or transaction fails with error + ``ALREADY_EXISTS``. + + This field is a member of `oneof`_ ``operation``. + update (google.cloud.spanner_v1.types.Mutation.Write): + Update existing rows in a table. If any of the rows does not + already exist, the transaction fails with error + ``NOT_FOUND``. + + This field is a member of `oneof`_ ``operation``. + insert_or_update (google.cloud.spanner_v1.types.Mutation.Write): + Like [insert][google.spanner.v1.Mutation.insert], except + that if the row already exists, then its column values are + overwritten with the ones provided. Any column values not + explicitly written are preserved. + + When using + [insert_or_update][google.spanner.v1.Mutation.insert_or_update], + just as when using + [insert][google.spanner.v1.Mutation.insert], all + ``NOT NULL`` columns in the table must be given a value. + This holds true even when the row already exists and will + therefore actually be updated. + + This field is a member of `oneof`_ ``operation``. + replace (google.cloud.spanner_v1.types.Mutation.Write): + Like [insert][google.spanner.v1.Mutation.insert], except + that if the row already exists, it is deleted, and the + column values provided are inserted instead. Unlike + [insert_or_update][google.spanner.v1.Mutation.insert_or_update], + this means any values not explicitly written become + ``NULL``. + + In an interleaved table, if you create the child table with + the ``ON DELETE CASCADE`` annotation, then replacing a + parent row also deletes the child rows. Otherwise, you must + delete the child rows before you replace the parent row. + + This field is a member of `oneof`_ ``operation``. + delete (google.cloud.spanner_v1.types.Mutation.Delete): + Delete rows from a table. Succeeds whether or + not the named rows were present. + + This field is a member of `oneof`_ ``operation``. + """ + + class Write(proto.Message): + r"""Arguments to [insert][google.spanner.v1.Mutation.insert], + [update][google.spanner.v1.Mutation.update], + [insert_or_update][google.spanner.v1.Mutation.insert_or_update], and + [replace][google.spanner.v1.Mutation.replace] operations. + + Attributes: + table (str): + Required. The table whose rows will be + written. + columns (MutableSequence[str]): + The names of the columns in + [table][google.spanner.v1.Mutation.Write.table] to be + written. + + The list of columns must contain enough columns to allow + Cloud Spanner to derive values for all primary key columns + in the row(s) to be modified. 
+ values (MutableSequence[google.protobuf.struct_pb2.ListValue]): + The values to be written. ``values`` can contain more than + one list of values. If it does, then multiple rows are + written, one for each entry in ``values``. Each list in + ``values`` must have exactly as many entries as there are + entries in + [columns][google.spanner.v1.Mutation.Write.columns] above. + Sending multiple lists is equivalent to sending multiple + ``Mutation``\ s, each containing one ``values`` entry and + repeating [table][google.spanner.v1.Mutation.Write.table] + and [columns][google.spanner.v1.Mutation.Write.columns]. + Individual values in each list are encoded as described + [here][google.spanner.v1.TypeCode]. + """ + + table: str = proto.Field( + proto.STRING, + number=1, + ) + columns: MutableSequence[str] = proto.RepeatedField( + proto.STRING, + number=2, + ) + values: MutableSequence[struct_pb2.ListValue] = proto.RepeatedField( + proto.MESSAGE, + number=3, + message=struct_pb2.ListValue, + ) + + class Delete(proto.Message): + r"""Arguments to [delete][google.spanner.v1.Mutation.delete] operations. + + Attributes: + table (str): + Required. The table whose rows will be + deleted. + key_set (google.cloud.spanner_v1.types.KeySet): + Required. The primary keys of the rows within + [table][google.spanner.v1.Mutation.Delete.table] to delete. + The primary keys must be specified in the order in which + they appear in the ``PRIMARY KEY()`` clause of the table's + equivalent DDL statement (the DDL statement used to create + the table). Delete is idempotent. The transaction will + succeed even if some or all rows do not exist. + """ + + table: str = proto.Field( + proto.STRING, + number=1, + ) + key_set: keys.KeySet = proto.Field( + proto.MESSAGE, + number=2, + message=keys.KeySet, + ) + + insert: Write = proto.Field( + proto.MESSAGE, + number=1, + oneof="operation", + message=Write, + ) + update: Write = proto.Field( + proto.MESSAGE, + number=2, + oneof="operation", + message=Write, + ) + insert_or_update: Write = proto.Field( + proto.MESSAGE, + number=3, + oneof="operation", + message=Write, + ) + replace: Write = proto.Field( + proto.MESSAGE, + number=4, + oneof="operation", + message=Write, + ) + delete: Delete = proto.Field( + proto.MESSAGE, + number=5, + oneof="operation", + message=Delete, + ) + + +__all__ = tuple(sorted(__protobuf__.manifest)) diff --git a/google/cloud/spanner_v1/types/query_plan.py b/google/cloud/spanner_v1/types/query_plan.py new file mode 100644 index 0000000000..d361911f1d --- /dev/null +++ b/google/cloud/spanner_v1/types/query_plan.py @@ -0,0 +1,220 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+#
+from __future__ import annotations
+
+from typing import MutableMapping, MutableSequence
+
+import proto  # type: ignore
+
+from google.protobuf import struct_pb2  # type: ignore
+
+
+__protobuf__ = proto.module(
+    package="google.spanner.v1",
+    manifest={
+        "PlanNode",
+        "QueryPlan",
+    },
+)
+
+
+class PlanNode(proto.Message):
+    r"""Node information for nodes appearing in a
+    [QueryPlan.plan_nodes][google.spanner.v1.QueryPlan.plan_nodes].
+
+    Attributes:
+        index (int):
+            The ``PlanNode``'s index in [node
+            list][google.spanner.v1.QueryPlan.plan_nodes].
+        kind (google.cloud.spanner_v1.types.PlanNode.Kind):
+            Used to determine the type of node. May be needed for
+            visualizing different kinds of nodes differently. For
+            example, if the node is a
+            [SCALAR][google.spanner.v1.PlanNode.Kind.SCALAR] node, it
+            will have a condensed representation which can be used to
+            directly embed a description of the node in its parent.
+        display_name (str):
+            The display name for the node.
+        child_links (MutableSequence[google.cloud.spanner_v1.types.PlanNode.ChildLink]):
+            List of child node ``index``\ es and their relationship to
+            this parent.
+        short_representation (google.cloud.spanner_v1.types.PlanNode.ShortRepresentation):
+            Condensed representation for
+            [SCALAR][google.spanner.v1.PlanNode.Kind.SCALAR] nodes.
+        metadata (google.protobuf.struct_pb2.Struct):
+            Attributes relevant to the node contained in a group of
+            key-value pairs. For example, a Parameter Reference node
+            could have the following information in its metadata:
+
+            ::
+
+               {
+                 "parameter_reference": "param1",
+                 "parameter_type": "array"
+               }
+        execution_stats (google.protobuf.struct_pb2.Struct):
+            The execution statistics associated with the
+            node, contained in a group of key-value pairs.
+            Only present if the plan was returned as a
+            result of a profile query. For example, number
+            of executions, number of rows/time per execution,
+            etc.
+    """
+
+    class Kind(proto.Enum):
+        r"""The kind of [PlanNode][google.spanner.v1.PlanNode]. Distinguishes
+        between the two different kinds of nodes that can appear in a query
+        plan.
+
+        Values:
+            KIND_UNSPECIFIED (0):
+                Not specified.
+            RELATIONAL (1):
+                Denotes a Relational operator node in the expression tree.
+                Relational operators represent iterative processing of rows
+                during query execution. For example, a ``TableScan``
+                operation that reads rows from a table.
+            SCALAR (2):
+                Denotes a Scalar node in the expression tree.
+                Scalar nodes represent non-iterable entities in
+                the query plan. For example, constants or
+                arithmetic operators appearing inside predicate
+                expressions or references to column names.
+        """
+        KIND_UNSPECIFIED = 0
+        RELATIONAL = 1
+        SCALAR = 2
+
+    class ChildLink(proto.Message):
+        r"""Metadata associated with a parent-child relationship appearing in a
+        [PlanNode][google.spanner.v1.PlanNode].
+
+        Attributes:
+            child_index (int):
+                The node to which the link points.
+            type_ (str):
+                The type of the link. For example, in Hash
+                Joins this could be used to distinguish between
+                the build child and the probe child, or in the
+                case of the child being an output variable, to
+                represent the tag associated with the output
+                variable.
+            variable (str):
+                Only present if the child node is
+                [SCALAR][google.spanner.v1.PlanNode.Kind.SCALAR] and
+                corresponds to an output variable of the parent node. The
+                field carries the name of the output variable.
For example, + a ``TableScan`` operator that reads rows from a table will + have child links to the ``SCALAR`` nodes representing the + output variables created for each column that is read by the + operator. The corresponding ``variable`` fields will be set + to the variable names assigned to the columns. + """ + + child_index: int = proto.Field( + proto.INT32, + number=1, + ) + type_: str = proto.Field( + proto.STRING, + number=2, + ) + variable: str = proto.Field( + proto.STRING, + number=3, + ) + + class ShortRepresentation(proto.Message): + r"""Condensed representation of a node and its subtree. Only present for + ``SCALAR`` [PlanNode(s)][google.spanner.v1.PlanNode]. + + Attributes: + description (str): + A string representation of the expression + subtree rooted at this node. + subqueries (MutableMapping[str, int]): + A mapping of (subquery variable name) -> (subquery node id) + for cases where the ``description`` string of this node + references a ``SCALAR`` subquery contained in the expression + subtree rooted at this node. The referenced ``SCALAR`` + subquery may not necessarily be a direct child of this node. + """ + + description: str = proto.Field( + proto.STRING, + number=1, + ) + subqueries: MutableMapping[str, int] = proto.MapField( + proto.STRING, + proto.INT32, + number=2, + ) + + index: int = proto.Field( + proto.INT32, + number=1, + ) + kind: Kind = proto.Field( + proto.ENUM, + number=2, + enum=Kind, + ) + display_name: str = proto.Field( + proto.STRING, + number=3, + ) + child_links: MutableSequence[ChildLink] = proto.RepeatedField( + proto.MESSAGE, + number=4, + message=ChildLink, + ) + short_representation: ShortRepresentation = proto.Field( + proto.MESSAGE, + number=5, + message=ShortRepresentation, + ) + metadata: struct_pb2.Struct = proto.Field( + proto.MESSAGE, + number=6, + message=struct_pb2.Struct, + ) + execution_stats: struct_pb2.Struct = proto.Field( + proto.MESSAGE, + number=7, + message=struct_pb2.Struct, + ) + + +class QueryPlan(proto.Message): + r"""Contains an ordered list of nodes appearing in the query + plan. + + Attributes: + plan_nodes (MutableSequence[google.cloud.spanner_v1.types.PlanNode]): + The nodes in the query plan. Plan nodes are returned in + pre-order starting with the plan root. Each + [PlanNode][google.spanner.v1.PlanNode]'s ``id`` corresponds + to its index in ``plan_nodes``. + """ + + plan_nodes: MutableSequence["PlanNode"] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message="PlanNode", + ) + + +__all__ = tuple(sorted(__protobuf__.manifest)) diff --git a/google/cloud/spanner_v1/types/result_set.py b/google/cloud/spanner_v1/types/result_set.py new file mode 100644 index 0000000000..697d0fd33b --- /dev/null +++ b/google/cloud/spanner_v1/types/result_set.py @@ -0,0 +1,379 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +from __future__ import annotations + +from typing import MutableMapping, MutableSequence + +import proto # type: ignore + +from google.cloud.spanner_v1.types import query_plan as gs_query_plan +from google.cloud.spanner_v1.types import transaction as gs_transaction +from google.cloud.spanner_v1.types import type as gs_type +from google.protobuf import struct_pb2 # type: ignore + + +__protobuf__ = proto.module( + package="google.spanner.v1", + manifest={ + "ResultSet", + "PartialResultSet", + "ResultSetMetadata", + "ResultSetStats", + }, +) + + +class ResultSet(proto.Message): + r"""Results from [Read][google.spanner.v1.Spanner.Read] or + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql]. + + Attributes: + metadata (google.cloud.spanner_v1.types.ResultSetMetadata): + Metadata about the result set, such as row + type information. + rows (MutableSequence[google.protobuf.struct_pb2.ListValue]): + Each element in ``rows`` is a row whose format is defined by + [metadata.row_type][google.spanner.v1.ResultSetMetadata.row_type]. + The ith element in each row matches the ith field in + [metadata.row_type][google.spanner.v1.ResultSetMetadata.row_type]. + Elements are encoded based on type as described + [here][google.spanner.v1.TypeCode]. + stats (google.cloud.spanner_v1.types.ResultSetStats): + Query plan and execution statistics for the SQL statement + that produced this result set. These can be requested by + setting + [ExecuteSqlRequest.query_mode][google.spanner.v1.ExecuteSqlRequest.query_mode]. + DML statements always produce stats containing the number of + rows modified, unless executed with + [ExecuteSqlRequest.query_mode][google.spanner.v1.ExecuteSqlRequest.query_mode] + set to + [ExecuteSqlRequest.QueryMode.PLAN][google.spanner.v1.ExecuteSqlRequest.QueryMode.PLAN]. + Other fields might or might not be populated, based on the + [ExecuteSqlRequest.query_mode][google.spanner.v1.ExecuteSqlRequest.query_mode]. + precommit_token (google.cloud.spanner_v1.types.MultiplexedSessionPrecommitToken): + Optional. A precommit token is included if the read-write + transaction is on a multiplexed session. Pass the precommit + token with the highest sequence number from this transaction + attempt to the [Commit][google.spanner.v1.Spanner.Commit] + request for this transaction. + """ + + metadata: "ResultSetMetadata" = proto.Field( + proto.MESSAGE, + number=1, + message="ResultSetMetadata", + ) + rows: MutableSequence[struct_pb2.ListValue] = proto.RepeatedField( + proto.MESSAGE, + number=2, + message=struct_pb2.ListValue, + ) + stats: "ResultSetStats" = proto.Field( + proto.MESSAGE, + number=3, + message="ResultSetStats", + ) + precommit_token: gs_transaction.MultiplexedSessionPrecommitToken = proto.Field( + proto.MESSAGE, + number=5, + message=gs_transaction.MultiplexedSessionPrecommitToken, + ) + + +class PartialResultSet(proto.Message): + r"""Partial results from a streaming read or SQL query. Streaming + reads and SQL queries better tolerate large result sets, large + rows, and large values, but are a little trickier to consume. + + Attributes: + metadata (google.cloud.spanner_v1.types.ResultSetMetadata): + Metadata about the result set, such as row + type information. Only present in the first + response. + values (MutableSequence[google.protobuf.struct_pb2.Value]): + A streamed result set consists of a stream of values, which + might be split into many ``PartialResultSet`` messages to + accommodate large rows and/or large values.
Every N complete + values defines a row, where N is equal to the number of + entries in + [metadata.row_type.fields][google.spanner.v1.StructType.fields]. + + Most values are encoded based on type as described + [here][google.spanner.v1.TypeCode]. + + It's possible that the last value in ``values`` is "chunked", + meaning that the rest of the value is sent in subsequent + ``PartialResultSet``\ (s). This is denoted by the + [chunked_value][google.spanner.v1.PartialResultSet.chunked_value] + field. Two or more chunked values can be merged to form a + complete value as follows: + + - ``bool/number/null``: can't be chunked + - ``string``: concatenate the strings + - ``list``: concatenate the lists. If the last element in a + list is a ``string``, ``list``, or ``object``, merge it + with the first element in the next list by applying these + rules recursively. + - ``object``: concatenate the (field name, field value) + pairs. If a field name is duplicated, then apply these + rules recursively to merge the field values. + + Some examples of merging: + + :: + + Strings are concatenated. + "foo", "bar" => "foobar" + + Lists of non-strings are concatenated. + [2, 3], [4] => [2, 3, 4] + + Lists are concatenated, but the last and first elements are merged + because they are strings. + ["a", "b"], ["c", "d"] => ["a", "bc", "d"] + + Lists are concatenated, but the last and first elements are merged + because they are lists. Recursively, the last and first elements + of the inner lists are merged because they are strings. + ["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"] + + Non-overlapping object fields are combined. + {"a": "1"}, {"b": "2"} => {"a": "1", "b": "2"} + + Overlapping object fields are merged. + {"a": "1"}, {"a": "2"} => {"a": "12"} + + Examples of merging objects containing lists of strings. + {"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]} + + For a more complete example, suppose a streaming SQL query + is yielding a result set whose rows contain a single string + field. The following ``PartialResultSet``\ s might be + yielded: + + :: + + { + "metadata": { ... } + "values": ["Hello", "W"] + "chunked_value": true + "resume_token": "Af65..." + } + { + "values": ["orl"] + "chunked_value": true + } + { + "values": ["d"] + "resume_token": "Zx1B..." + } + + This sequence of ``PartialResultSet``\ s encodes two rows, + one containing the field value ``"Hello"``, and a second + containing the field value ``"World" = "W" + "orl" + "d"``. + + Not all ``PartialResultSet``\ s contain a ``resume_token``. + Execution can only be resumed from a previously yielded + ``resume_token``. For the above sequence of + ``PartialResultSet``\ s, resuming the query with + ``"resume_token": "Af65..."`` yields results from the + ``PartialResultSet`` with value "orl". + chunked_value (bool): + If true, then the final value in + [values][google.spanner.v1.PartialResultSet.values] is + chunked, and must be combined with more values from + subsequent ``PartialResultSet``\ s to obtain a complete + field value. + resume_token (bytes): + Streaming calls might be interrupted for a variety of + reasons, such as TCP connection loss. If this occurs, the + stream of results can be resumed by re-sending the original + request and including ``resume_token``. Note that executing + any other transaction in the same session invalidates the + token. + stats (google.cloud.spanner_v1.types.ResultSetStats): + Query plan and execution statistics for the statement that + produced this streaming result set.
These can be requested + by setting + [ExecuteSqlRequest.query_mode][google.spanner.v1.ExecuteSqlRequest.query_mode] + and are sent only once with the last response in the stream. + This field is also present in the last response for DML + statements. + precommit_token (google.cloud.spanner_v1.types.MultiplexedSessionPrecommitToken): + Optional. A precommit token is included if the read-write + transaction is on a multiplexed session. Pass the + precommit token with the highest sequence number from this + transaction attempt to the + [Commit][google.spanner.v1.Spanner.Commit] request for this + transaction. + last (bool): + Optional. Indicates whether this is the last + ``PartialResultSet`` in the stream. The server might + optionally set this field. Clients shouldn't rely on this + field being set in all cases. + """ + + metadata: "ResultSetMetadata" = proto.Field( + proto.MESSAGE, + number=1, + message="ResultSetMetadata", + ) + values: MutableSequence[struct_pb2.Value] = proto.RepeatedField( + proto.MESSAGE, + number=2, + message=struct_pb2.Value, + ) + chunked_value: bool = proto.Field( + proto.BOOL, + number=3, + ) + resume_token: bytes = proto.Field( + proto.BYTES, + number=4, + ) + stats: "ResultSetStats" = proto.Field( + proto.MESSAGE, + number=5, + message="ResultSetStats", + ) + precommit_token: gs_transaction.MultiplexedSessionPrecommitToken = proto.Field( + proto.MESSAGE, + number=8, + message=gs_transaction.MultiplexedSessionPrecommitToken, + ) + last: bool = proto.Field( + proto.BOOL, + number=9, + ) + + +class ResultSetMetadata(proto.Message): + r"""Metadata about a [ResultSet][google.spanner.v1.ResultSet] or + [PartialResultSet][google.spanner.v1.PartialResultSet]. + + Attributes: + row_type (google.cloud.spanner_v1.types.StructType): + Indicates the field names and types for the rows in the + result set. For example, a SQL query like + ``"SELECT UserId, UserName FROM Users"`` could return a + ``row_type`` value like: + + :: + + "fields": [ + { "name": "UserId", "type": { "code": "INT64" } }, + { "name": "UserName", "type": { "code": "STRING" } }, + ] + transaction (google.cloud.spanner_v1.types.Transaction): + If the read or SQL query began a transaction + as a side-effect, the information about the new + transaction is yielded here. + undeclared_parameters (google.cloud.spanner_v1.types.StructType): + A SQL query can be parameterized. In PLAN mode, these + parameters can be undeclared. This indicates the field names + and types for those undeclared parameters in the SQL query. + For example, a SQL query like + ``"SELECT * FROM Users where UserId = @userId and UserName = @userName "`` + could return an ``undeclared_parameters`` value like: + + :: + + "fields": [ + { "name": "UserId", "type": { "code": "INT64" } }, + { "name": "UserName", "type": { "code": "STRING" } }, + ] + """ + + row_type: gs_type.StructType = proto.Field( + proto.MESSAGE, + number=1, + message=gs_type.StructType, + ) + transaction: gs_transaction.Transaction = proto.Field( + proto.MESSAGE, + number=2, + message=gs_transaction.Transaction, + ) + undeclared_parameters: gs_type.StructType = proto.Field( + proto.MESSAGE, + number=3, + message=gs_type.StructType, + ) + + +class ResultSetStats(proto.Message): + r"""Additional statistics about a + [ResultSet][google.spanner.v1.ResultSet] or + [PartialResultSet][google.spanner.v1.PartialResultSet]. + + This message has `oneof`_ fields (mutually exclusive fields). + For each oneof, at most one member field can be set at the same time.
+ Setting any member of the oneof automatically clears all other + members. + + .. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields + + Attributes: + query_plan (google.cloud.spanner_v1.types.QueryPlan): + [QueryPlan][google.spanner.v1.QueryPlan] for the query + associated with this result. + query_stats (google.protobuf.struct_pb2.Struct): + Aggregated statistics from the execution of the query. Only + present when the query is profiled. For example, a query + could return the statistics as follows: + + :: + + { + "rows_returned": "3", + "elapsed_time": "1.22 secs", + "cpu_time": "1.19 secs" + } + row_count_exact (int): + Standard DML returns an exact count of rows + that were modified. + + This field is a member of `oneof`_ ``row_count``. + row_count_lower_bound (int): + Partitioned DML doesn't offer exactly-once + semantics, so it returns a lower bound of the + rows modified. + + This field is a member of `oneof`_ ``row_count``. + """ + + query_plan: gs_query_plan.QueryPlan = proto.Field( + proto.MESSAGE, + number=1, + message=gs_query_plan.QueryPlan, + ) + query_stats: struct_pb2.Struct = proto.Field( + proto.MESSAGE, + number=2, + message=struct_pb2.Struct, + ) + row_count_exact: int = proto.Field( + proto.INT64, + number=3, + oneof="row_count", + ) + row_count_lower_bound: int = proto.Field( + proto.INT64, + number=4, + oneof="row_count", + ) + + +__all__ = tuple(sorted(__protobuf__.manifest)) diff --git a/google/cloud/spanner_v1/types/spanner.py b/google/cloud/spanner_v1/types/spanner.py new file mode 100644 index 0000000000..9e7a477b46 --- /dev/null +++ b/google/cloud/spanner_v1/types/spanner.py @@ -0,0 +1,1790 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +from __future__ import annotations + +from typing import MutableMapping, MutableSequence + +import proto # type: ignore + +from google.cloud.spanner_v1.types import keys +from google.cloud.spanner_v1.types import mutation +from google.cloud.spanner_v1.types import result_set +from google.cloud.spanner_v1.types import transaction as gs_transaction +from google.cloud.spanner_v1.types import type as gs_type +from google.protobuf import duration_pb2 # type: ignore +from google.protobuf import struct_pb2 # type: ignore +from google.protobuf import timestamp_pb2 # type: ignore +from google.rpc import status_pb2 # type: ignore + + +__protobuf__ = proto.module( + package="google.spanner.v1", + manifest={ + "CreateSessionRequest", + "BatchCreateSessionsRequest", + "BatchCreateSessionsResponse", + "Session", + "GetSessionRequest", + "ListSessionsRequest", + "ListSessionsResponse", + "DeleteSessionRequest", + "RequestOptions", + "DirectedReadOptions", + "ExecuteSqlRequest", + "ExecuteBatchDmlRequest", + "ExecuteBatchDmlResponse", + "PartitionOptions", + "PartitionQueryRequest", + "PartitionReadRequest", + "Partition", + "PartitionResponse", + "ReadRequest", + "BeginTransactionRequest", + "CommitRequest", + "RollbackRequest", + "BatchWriteRequest", + "BatchWriteResponse", + }, +) + + +class CreateSessionRequest(proto.Message): + r"""The request for + [CreateSession][google.spanner.v1.Spanner.CreateSession]. + + Attributes: + database (str): + Required. The database in which the new + session is created. + session (google.cloud.spanner_v1.types.Session): + Required. The session to create. + """ + + database: str = proto.Field( + proto.STRING, + number=1, + ) + session: "Session" = proto.Field( + proto.MESSAGE, + number=2, + message="Session", + ) + + +class BatchCreateSessionsRequest(proto.Message): + r"""The request for + [BatchCreateSessions][google.spanner.v1.Spanner.BatchCreateSessions]. + + Attributes: + database (str): + Required. The database in which the new + sessions are created. + session_template (google.cloud.spanner_v1.types.Session): + Parameters to apply to each created session. + session_count (int): + Required. The number of sessions to be created in this batch + call. The API can return fewer than the requested number of + sessions. If a specific number of sessions are desired, the + client can make additional calls to ``BatchCreateSessions`` + (adjusting + [session_count][google.spanner.v1.BatchCreateSessionsRequest.session_count] + as necessary). + """ + + database: str = proto.Field( + proto.STRING, + number=1, + ) + session_template: "Session" = proto.Field( + proto.MESSAGE, + number=2, + message="Session", + ) + session_count: int = proto.Field( + proto.INT32, + number=3, + ) + + +class BatchCreateSessionsResponse(proto.Message): + r"""The response for + [BatchCreateSessions][google.spanner.v1.Spanner.BatchCreateSessions]. + + Attributes: + session (MutableSequence[google.cloud.spanner_v1.types.Session]): + The freshly created sessions. + """ + + session: MutableSequence["Session"] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message="Session", + ) + + +class Session(proto.Message): + r"""A session in the Cloud Spanner API. + + Attributes: + name (str): + Output only. The name of the session. This is + always system-assigned. + labels (MutableMapping[str, str]): + The labels for the session. + + - Label keys must be between 1 and 63 characters long and + must conform to the following regular expression: + ``[a-z]([-a-z0-9]*[a-z0-9])?``. 
+ - Label values must be between 0 and 63 characters long and + must conform to the regular expression + ``([a-z]([-a-z0-9]*[a-z0-9])?)?``. + - No more than 64 labels can be associated with a given + session. + + See https://goo.gl/xmQnxf for more information on and + examples of labels. + create_time (google.protobuf.timestamp_pb2.Timestamp): + Output only. The timestamp when the session + is created. + approximate_last_use_time (google.protobuf.timestamp_pb2.Timestamp): + Output only. The approximate timestamp when + the session is last used. It's typically earlier + than the actual last use time. + creator_role (str): + The database role which created this session. + multiplexed (bool): + Optional. If ``true``, specifies a multiplexed session. Use + a multiplexed session for multiple, concurrent read-only + operations. Don't use them for read-write transactions, + partitioned reads, or partitioned queries. Use + [``sessions.create``][google.spanner.v1.Spanner.CreateSession] + to create multiplexed sessions. Don't use + [BatchCreateSessions][google.spanner.v1.Spanner.BatchCreateSessions] + to create a multiplexed session. You can't delete or list + multiplexed sessions. + """ + + name: str = proto.Field( + proto.STRING, + number=1, + ) + labels: MutableMapping[str, str] = proto.MapField( + proto.STRING, + proto.STRING, + number=2, + ) + create_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=3, + message=timestamp_pb2.Timestamp, + ) + approximate_last_use_time: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=4, + message=timestamp_pb2.Timestamp, + ) + creator_role: str = proto.Field( + proto.STRING, + number=5, + ) + multiplexed: bool = proto.Field( + proto.BOOL, + number=6, + ) + + +class GetSessionRequest(proto.Message): + r"""The request for [GetSession][google.spanner.v1.Spanner.GetSession]. + + Attributes: + name (str): + Required. The name of the session to + retrieve. + """ + + name: str = proto.Field( + proto.STRING, + number=1, + ) + + +class ListSessionsRequest(proto.Message): + r"""The request for + [ListSessions][google.spanner.v1.Spanner.ListSessions]. + + Attributes: + database (str): + Required. The database in which to list + sessions. + page_size (int): + Number of sessions to be returned in the + response. If 0 or less, defaults to the server's + maximum allowed page size. + page_token (str): + If non-empty, ``page_token`` should contain a + [next_page_token][google.spanner.v1.ListSessionsResponse.next_page_token] + from a previous + [ListSessionsResponse][google.spanner.v1.ListSessionsResponse]. + filter (str): + An expression for filtering the results of the request. + Filter rules are case insensitive. The fields eligible for + filtering are: + + - ``labels.key`` where key is the name of a label + + Some examples of using filters are: + + - ``labels.env:*`` --> The session has the label "env". + - ``labels.env:dev`` --> The session has the label "env" and + the value of the label contains the string "dev". + """ + + database: str = proto.Field( + proto.STRING, + number=1, + ) + page_size: int = proto.Field( + proto.INT32, + number=2, + ) + page_token: str = proto.Field( + proto.STRING, + number=3, + ) + filter: str = proto.Field( + proto.STRING, + number=4, + ) + + +class ListSessionsResponse(proto.Message): + r"""The response for + [ListSessions][google.spanner.v1.Spanner.ListSessions]. + + Attributes: + sessions (MutableSequence[google.cloud.spanner_v1.types.Session]): + The list of requested sessions. 
+ next_page_token (str): + ``next_page_token`` can be sent in a subsequent + [ListSessions][google.spanner.v1.Spanner.ListSessions] call + to fetch more of the matching sessions. + """ + + @property + def raw_page(self): + return self + + sessions: MutableSequence["Session"] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message="Session", + ) + next_page_token: str = proto.Field( + proto.STRING, + number=2, + ) + + +class DeleteSessionRequest(proto.Message): + r"""The request for + [DeleteSession][google.spanner.v1.Spanner.DeleteSession]. + + Attributes: + name (str): + Required. The name of the session to delete. + """ + + name: str = proto.Field( + proto.STRING, + number=1, + ) + + +class RequestOptions(proto.Message): + r"""Common request options for various APIs. + + Attributes: + priority (google.cloud.spanner_v1.types.RequestOptions.Priority): + Priority for the request. + request_tag (str): + A per-request tag which can be applied to queries or reads, + used for statistics collection. Both ``request_tag`` and + ``transaction_tag`` can be specified for a read or query + that belongs to a transaction. This field is ignored for + requests where it's not applicable (for example, + ``CommitRequest``). Legal characters for ``request_tag`` + values are all printable characters (ASCII 32 - 126) and the + length of a request_tag is limited to 50 characters. Values + that exceed this limit are truncated. Any leading underscore + (\_) characters are removed from the string. + transaction_tag (str): + A tag used for statistics collection about this transaction. + Both ``request_tag`` and ``transaction_tag`` can be + specified for a read or query that belongs to a transaction. + The value of transaction_tag should be the same for all + requests belonging to the same transaction. If this request + doesn't belong to any transaction, ``transaction_tag`` is + ignored. Legal characters for ``transaction_tag`` values are + all printable characters (ASCII 32 - 126) and the length of + a ``transaction_tag`` is limited to 50 characters. Values + that exceed this limit are truncated. Any leading underscore + (\_) characters are removed from the string. + """ + + class Priority(proto.Enum): + r"""The relative priority for requests. Note that priority isn't + applicable for + [BeginTransaction][google.spanner.v1.Spanner.BeginTransaction]. + + The priority acts as a hint to the Cloud Spanner scheduler and + doesn't guarantee priority or order of execution. For example: + + - Some parts of a write operation always execute at + ``PRIORITY_HIGH``, regardless of the specified priority. This can + cause you to see an increase in high priority workload even when + executing a low priority request. This can also potentially cause + a priority inversion where a lower priority request is fulfilled + ahead of a higher priority request. + - If a transaction contains multiple operations with different + priorities, Cloud Spanner doesn't guarantee to process the higher + priority operations first. There might be other constraints to + satisfy, such as the order of operations. + + Values: + PRIORITY_UNSPECIFIED (0): + ``PRIORITY_UNSPECIFIED`` is equivalent to ``PRIORITY_HIGH``. + PRIORITY_LOW (1): + This specifies that the request is low + priority. + PRIORITY_MEDIUM (2): + This specifies that the request is medium + priority. + PRIORITY_HIGH (3): + This specifies that the request is high + priority. 
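+ + For illustration, a low-priority request carrying a request tag could be built like this (a minimal sketch, assuming ``spanner_v1`` is the imported ``google.cloud.spanner_v1`` package; the tag value is a placeholder): + + :: + + options = spanner_v1.RequestOptions( + priority=spanner_v1.RequestOptions.Priority.PRIORITY_LOW, + request_tag="app=concert,env=dev", # placeholder tag + )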
+ """ + PRIORITY_UNSPECIFIED = 0 + PRIORITY_LOW = 1 + PRIORITY_MEDIUM = 2 + PRIORITY_HIGH = 3 + + priority: Priority = proto.Field( + proto.ENUM, + number=1, + enum=Priority, + ) + request_tag: str = proto.Field( + proto.STRING, + number=2, + ) + transaction_tag: str = proto.Field( + proto.STRING, + number=3, + ) + + +class DirectedReadOptions(proto.Message): + r"""The ``DirectedReadOptions`` can be used to indicate which replicas + or regions should be used for non-transactional reads or queries. + + ``DirectedReadOptions`` can only be specified for a read-only + transaction, otherwise the API returns an ``INVALID_ARGUMENT`` + error. + + This message has `oneof`_ fields (mutually exclusive fields). + For each oneof, at most one member field can be set at the same time. + Setting any member of the oneof automatically clears all other + members. + + .. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields + + Attributes: + include_replicas (google.cloud.spanner_v1.types.DirectedReadOptions.IncludeReplicas): + ``Include_replicas`` indicates the order of replicas (as + they appear in this list) to process the request. If + ``auto_failover_disabled`` is set to ``true`` and all + replicas are exhausted without finding a healthy replica, + Spanner waits for a replica in the list to become available, + requests might fail due to ``DEADLINE_EXCEEDED`` errors. + + This field is a member of `oneof`_ ``replicas``. + exclude_replicas (google.cloud.spanner_v1.types.DirectedReadOptions.ExcludeReplicas): + ``Exclude_replicas`` indicates that specified replicas + should be excluded from serving requests. Spanner doesn't + route requests to the replicas in this list. + + This field is a member of `oneof`_ ``replicas``. + """ + + class ReplicaSelection(proto.Message): + r"""The directed read replica selector. Callers must provide one or more + of the following fields for replica selection: + + - ``location`` - The location must be one of the regions within the + multi-region configuration of your database. + - ``type`` - The type of the replica. + + Some examples of using replica_selectors are: + + - ``location:us-east1`` --> The "us-east1" replica(s) of any + available type is used to process the request. + - ``type:READ_ONLY`` --> The "READ_ONLY" type replica(s) in the + nearest available location are used to process the request. + - ``location:us-east1 type:READ_ONLY`` --> The "READ_ONLY" type + replica(s) in location "us-east1" is used to process the request. + + Attributes: + location (str): + The location or region of the serving + requests, for example, "us-east1". + type_ (google.cloud.spanner_v1.types.DirectedReadOptions.ReplicaSelection.Type): + The type of replica. + """ + + class Type(proto.Enum): + r"""Indicates the type of replica. + + Values: + TYPE_UNSPECIFIED (0): + Not specified. + READ_WRITE (1): + Read-write replicas support both reads and + writes. + READ_ONLY (2): + Read-only replicas only support reads (not + writes). + """ + TYPE_UNSPECIFIED = 0 + READ_WRITE = 1 + READ_ONLY = 2 + + location: str = proto.Field( + proto.STRING, + number=1, + ) + type_: "DirectedReadOptions.ReplicaSelection.Type" = proto.Field( + proto.ENUM, + number=2, + enum="DirectedReadOptions.ReplicaSelection.Type", + ) + + class IncludeReplicas(proto.Message): + r"""An ``IncludeReplicas`` contains a repeated set of + ``ReplicaSelection`` which indicates the order in which replicas + should be considered. 
+ + Attributes: + replica_selections (MutableSequence[google.cloud.spanner_v1.types.DirectedReadOptions.ReplicaSelection]): + The directed read replica selector. + auto_failover_disabled (bool): + If ``true``, Spanner doesn't route requests to a replica + outside the ``include_replicas`` list when all of the + specified replicas are unavailable or unhealthy. Default + value is ``false``. + """ + + replica_selections: MutableSequence[ + "DirectedReadOptions.ReplicaSelection" + ] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message="DirectedReadOptions.ReplicaSelection", + ) + auto_failover_disabled: bool = proto.Field( + proto.BOOL, + number=2, + ) + + class ExcludeReplicas(proto.Message): + r"""An ``ExcludeReplicas`` contains a repeated set of + ``ReplicaSelection`` that should be excluded from serving requests. + + Attributes: + replica_selections (MutableSequence[google.cloud.spanner_v1.types.DirectedReadOptions.ReplicaSelection]): + The directed read replica selector. + """ + + replica_selections: MutableSequence[ + "DirectedReadOptions.ReplicaSelection" + ] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message="DirectedReadOptions.ReplicaSelection", + ) + + include_replicas: IncludeReplicas = proto.Field( + proto.MESSAGE, + number=1, + oneof="replicas", + message=IncludeReplicas, + ) + exclude_replicas: ExcludeReplicas = proto.Field( + proto.MESSAGE, + number=2, + oneof="replicas", + message=ExcludeReplicas, + ) + + +class ExecuteSqlRequest(proto.Message): + r"""The request for [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] + and + [ExecuteStreamingSql][google.spanner.v1.Spanner.ExecuteStreamingSql]. + + Attributes: + session (str): + Required. The session in which the SQL query + should be performed. + transaction (google.cloud.spanner_v1.types.TransactionSelector): + The transaction to use. + + For queries, if none is provided, the default is + a temporary read-only transaction with strong + concurrency. + + Standard DML statements require a read-write + transaction. To protect against replays, + single-use transactions are not supported. The + caller must either supply an existing + transaction ID or begin a new transaction. + + Partitioned DML requires an existing Partitioned + DML transaction ID. + sql (str): + Required. The SQL string. + params (google.protobuf.struct_pb2.Struct): + Parameter names and values that bind to placeholders in the + SQL string. + + A parameter placeholder consists of the ``@`` character + followed by the parameter name (for example, + ``@firstName``). Parameter names must conform to the naming + requirements of identifiers as specified at + https://cloud.google.com/spanner/docs/lexical#identifiers. + + Parameters can appear anywhere that a literal value is + expected. The same parameter name can be used more than + once, for example: + + ``"WHERE id > @msg_id AND id < @msg_id + 100"`` + + It's an error to execute a SQL statement with unbound + parameters. + param_types (MutableMapping[str, google.cloud.spanner_v1.types.Type]): + It isn't always possible for Cloud Spanner to infer the + right SQL type from a JSON value. For example, values of + type ``BYTES`` and values of type ``STRING`` both appear in + [params][google.spanner.v1.ExecuteSqlRequest.params] as JSON + strings. + + In these cases, you can use ``param_types`` to specify the + exact SQL type for some or all of the SQL statement + parameters. See the definition of + [Type][google.spanner.v1.Type] for more information about + SQL types.
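+ + For illustration, a request binding a ``BYTES`` parameter explicitly could be built like this (a minimal sketch, assuming ``spanner_v1`` is the imported ``google.cloud.spanner_v1`` package and ``session_name`` holds a valid session resource name; the table, parameter name, and value are placeholders): + + :: + + from google.protobuf import struct_pb2 + + # Placeholder parameter value; BYTES values travel as base64 strings. + params = struct_pb2.Struct( + fields={"data": struct_pb2.Value(string_value="aGVsbG8=")} + ) + request = spanner_v1.ExecuteSqlRequest( + session=session_name, + sql="SELECT * FROM Messages WHERE payload = @data", + params=params, + param_types={"data": spanner_v1.Type(code=spanner_v1.TypeCode.BYTES)}, + )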
+ resume_token (bytes): + If this request is resuming a previously interrupted SQL + statement execution, ``resume_token`` should be copied from + the last + [PartialResultSet][google.spanner.v1.PartialResultSet] + yielded before the interruption. Doing this enables the new + SQL statement execution to resume where the last one left + off. The rest of the request parameters must exactly match + the request that yielded this token. + query_mode (google.cloud.spanner_v1.types.ExecuteSqlRequest.QueryMode): + Used to control the amount of debugging information returned + in [ResultSetStats][google.spanner.v1.ResultSetStats]. If + [partition_token][google.spanner.v1.ExecuteSqlRequest.partition_token] + is set, + [query_mode][google.spanner.v1.ExecuteSqlRequest.query_mode] + can only be set to + [QueryMode.NORMAL][google.spanner.v1.ExecuteSqlRequest.QueryMode.NORMAL]. + partition_token (bytes): + If present, results are restricted to the specified + partition previously created using ``PartitionQuery``. There + must be an exact match for the values of fields common to + this message and the ``PartitionQueryRequest`` message used + to create this ``partition_token``. + seqno (int): + A per-transaction sequence number used to + identify this request. This field makes each + request idempotent such that if the request is + received multiple times, at most one succeeds. + + The sequence number must be monotonically + increasing within the transaction. If a request + arrives for the first time with an out-of-order + sequence number, the transaction can be aborted. + Replays of previously handled requests yield the + same response as the first execution. + + Required for DML statements. Ignored for + queries. + query_options (google.cloud.spanner_v1.types.ExecuteSqlRequest.QueryOptions): + Query optimizer configuration to use for the + given query. + request_options (google.cloud.spanner_v1.types.RequestOptions): + Common options for this request. + directed_read_options (google.cloud.spanner_v1.types.DirectedReadOptions): + Directed read options for this request. + data_boost_enabled (bool): + If this is for a partitioned query and this field is set to + ``true``, the request is executed with Spanner Data Boost + independent compute resources. + + If the field is set to ``true`` but the request doesn't set + ``partition_token``, the API returns an ``INVALID_ARGUMENT`` + error. + last_statement (bool): + Optional. If set to ``true``, this statement marks the end + of the transaction. After this statement executes, you must + commit or abort the transaction. Attempts to execute any + other requests against this transaction (including reads and + queries) are rejected. + + For DML statements, setting this option might cause some + error reporting to be deferred until commit time (for + example, validation of unique constraints). Given this, + successful execution of a DML statement shouldn't be assumed + until a subsequent ``Commit`` call completes successfully. + """ + + class QueryMode(proto.Enum): + r"""Mode in which the statement must be processed. + + Values: + NORMAL (0): + The default mode. Only the statement results + are returned. + PLAN (1): + This mode returns only the query plan, + without any results or execution statistics + information. + PROFILE (2): + This mode returns the query plan, overall + execution statistics, operator level execution + statistics along with the results. This has a + performance overhead compared to the other + modes. 
It isn't recommended to use this mode for + production traffic. + WITH_STATS (3): + This mode returns the overall (but not + operator-level) execution statistics along with + the results. + WITH_PLAN_AND_STATS (4): + This mode returns the query plan, overall + (but not operator-level) execution statistics + along with the results. + """ + NORMAL = 0 + PLAN = 1 + PROFILE = 2 + WITH_STATS = 3 + WITH_PLAN_AND_STATS = 4 + + class QueryOptions(proto.Message): + r"""Query optimizer configuration. + + Attributes: + optimizer_version (str): + An option to control the selection of optimizer version. + + This parameter allows individual queries to pick different + query optimizer versions. + + Specifying ``latest`` as a value instructs Cloud Spanner to + use the latest supported query optimizer version. If not + specified, Cloud Spanner uses the optimizer version set at + the database level options. Any other positive integer (from + the list of supported optimizer versions) overrides the + default optimizer version for query execution. + + The list of supported optimizer versions can be queried from + ``SPANNER_SYS.SUPPORTED_OPTIMIZER_VERSIONS``. + + Executing a SQL statement with an invalid optimizer version + fails with an ``INVALID_ARGUMENT`` error. + + See + https://cloud.google.com/spanner/docs/query-optimizer/manage-query-optimizer + for more information on managing the query optimizer. + + The ``optimizer_version`` statement hint has precedence over + this setting. + optimizer_statistics_package (str): + An option to control the selection of optimizer statistics + package. + + This parameter allows individual queries to use a different + query optimizer statistics package. + + Specifying ``latest`` as a value instructs Cloud Spanner to + use the latest generated statistics package. If not + specified, Cloud Spanner uses the statistics package set at + the database level options, or the latest package if the + database option isn't set. + + The statistics package requested by the query has to be + exempt from garbage collection. This can be achieved with + the following DDL statement: + + .. code:: sql + + ALTER STATISTICS SET OPTIONS (allow_gc=false) + + The list of available statistics packages can be queried + from ``INFORMATION_SCHEMA.SPANNER_STATISTICS``. + + Executing a SQL statement with an invalid optimizer + statistics package or with a statistics package that allows + garbage collection fails with an ``INVALID_ARGUMENT`` error. 
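+ + For illustration, both options could be pinned like this (a minimal sketch, assuming ``spanner_v1`` is the imported ``google.cloud.spanner_v1`` package; the version and statistics package names are placeholders): + + :: + + query_options = spanner_v1.ExecuteSqlRequest.QueryOptions( + optimizer_version="7", # a supported optimizer version + optimizer_statistics_package="auto_20250101_00_00_00UTC", # placeholder + )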
+ """ + + optimizer_version: str = proto.Field( + proto.STRING, + number=1, + ) + optimizer_statistics_package: str = proto.Field( + proto.STRING, + number=2, + ) + + session: str = proto.Field( + proto.STRING, + number=1, + ) + transaction: gs_transaction.TransactionSelector = proto.Field( + proto.MESSAGE, + number=2, + message=gs_transaction.TransactionSelector, + ) + sql: str = proto.Field( + proto.STRING, + number=3, + ) + params: struct_pb2.Struct = proto.Field( + proto.MESSAGE, + number=4, + message=struct_pb2.Struct, + ) + param_types: MutableMapping[str, gs_type.Type] = proto.MapField( + proto.STRING, + proto.MESSAGE, + number=5, + message=gs_type.Type, + ) + resume_token: bytes = proto.Field( + proto.BYTES, + number=6, + ) + query_mode: QueryMode = proto.Field( + proto.ENUM, + number=7, + enum=QueryMode, + ) + partition_token: bytes = proto.Field( + proto.BYTES, + number=8, + ) + seqno: int = proto.Field( + proto.INT64, + number=9, + ) + query_options: QueryOptions = proto.Field( + proto.MESSAGE, + number=10, + message=QueryOptions, + ) + request_options: "RequestOptions" = proto.Field( + proto.MESSAGE, + number=11, + message="RequestOptions", + ) + directed_read_options: "DirectedReadOptions" = proto.Field( + proto.MESSAGE, + number=15, + message="DirectedReadOptions", + ) + data_boost_enabled: bool = proto.Field( + proto.BOOL, + number=16, + ) + last_statement: bool = proto.Field( + proto.BOOL, + number=17, + ) + + +class ExecuteBatchDmlRequest(proto.Message): + r"""The request for + [ExecuteBatchDml][google.spanner.v1.Spanner.ExecuteBatchDml]. + + Attributes: + session (str): + Required. The session in which the DML + statements should be performed. + transaction (google.cloud.spanner_v1.types.TransactionSelector): + Required. The transaction to use. Must be a + read-write transaction. + To protect against replays, single-use + transactions are not supported. The caller must + either supply an existing transaction ID or + begin a new transaction. + statements (MutableSequence[google.cloud.spanner_v1.types.ExecuteBatchDmlRequest.Statement]): + Required. The list of statements to execute in this batch. + Statements are executed serially, such that the effects of + statement ``i`` are visible to statement ``i+1``. Each + statement must be a DML statement. Execution stops at the + first failed statement; the remaining statements are not + executed. + + Callers must provide at least one statement. + seqno (int): + Required. A per-transaction sequence number + used to identify this request. This field makes + each request idempotent such that if the request + is received multiple times, at most one + succeeds. + + The sequence number must be monotonically + increasing within the transaction. If a request + arrives for the first time with an out-of-order + sequence number, the transaction might be + aborted. Replays of previously handled requests + yield the same response as the first execution. + request_options (google.cloud.spanner_v1.types.RequestOptions): + Common options for this request. + last_statements (bool): + Optional. If set to ``true``, this request marks the end of + the transaction. After these statements execute, you must + commit or abort the transaction. Attempts to execute any + other requests against this transaction (including reads and + queries) are rejected. + + Setting this option might cause some error reporting to be + deferred until commit time (for example, validation of + unique constraints). 
Given this, successful execution of + statements shouldn't be assumed until a subsequent + ``Commit`` call completes successfully. + """ + + class Statement(proto.Message): + r"""A single DML statement. + + Attributes: + sql (str): + Required. The DML string. + params (google.protobuf.struct_pb2.Struct): + Parameter names and values that bind to placeholders in the + DML string. + + A parameter placeholder consists of the ``@`` character + followed by the parameter name (for example, + ``@firstName``). Parameter names can contain letters, + numbers, and underscores. + + Parameters can appear anywhere that a literal value is + expected. The same parameter name can be used more than + once, for example: + + ``"WHERE id > @msg_id AND id < @msg_id + 100"`` + + It's an error to execute a SQL statement with unbound + parameters. + param_types (MutableMapping[str, google.cloud.spanner_v1.types.Type]): + It isn't always possible for Cloud Spanner to infer the + right SQL type from a JSON value. For example, values of + type ``BYTES`` and values of type ``STRING`` both appear in + [params][google.spanner.v1.ExecuteBatchDmlRequest.Statement.params] + as JSON strings. + + In these cases, ``param_types`` can be used to specify the + exact SQL type for some or all of the SQL statement + parameters. See the definition of + [Type][google.spanner.v1.Type] for more information about + SQL types. + """ + + sql: str = proto.Field( + proto.STRING, + number=1, + ) + params: struct_pb2.Struct = proto.Field( + proto.MESSAGE, + number=2, + message=struct_pb2.Struct, + ) + param_types: MutableMapping[str, gs_type.Type] = proto.MapField( + proto.STRING, + proto.MESSAGE, + number=3, + message=gs_type.Type, + ) + + session: str = proto.Field( + proto.STRING, + number=1, + ) + transaction: gs_transaction.TransactionSelector = proto.Field( + proto.MESSAGE, + number=2, + message=gs_transaction.TransactionSelector, + ) + statements: MutableSequence[Statement] = proto.RepeatedField( + proto.MESSAGE, + number=3, + message=Statement, + ) + seqno: int = proto.Field( + proto.INT64, + number=4, + ) + request_options: "RequestOptions" = proto.Field( + proto.MESSAGE, + number=5, + message="RequestOptions", + ) + last_statements: bool = proto.Field( + proto.BOOL, + number=6, + ) + + +class ExecuteBatchDmlResponse(proto.Message): + r"""The response for + [ExecuteBatchDml][google.spanner.v1.Spanner.ExecuteBatchDml]. + Contains a list of [ResultSet][google.spanner.v1.ResultSet] + messages, one for each DML statement that has successfully executed, + in the same order as the statements in the request. If a statement + fails, the status in the response body identifies the cause of the + failure. + + To check for DML statements that failed, use the following approach: + + 1. Check the status in the response message. The + [google.rpc.Code][google.rpc.Code] enum value ``OK`` indicates + that all statements were executed successfully. + 2. If the status was not ``OK``, check the number of result sets in + the response. If the response contains ``N`` + [ResultSet][google.spanner.v1.ResultSet] messages, then statement + ``N+1`` in the request failed. + + Example 1: + + - Request: 5 DML statements, all executed successfully. + - Response: 5 [ResultSet][google.spanner.v1.ResultSet] messages, + with the status ``OK``. + + Example 2: + + - Request: 5 DML statements. The third statement has a syntax error. + - Response: 2 [ResultSet][google.spanner.v1.ResultSet] messages, and + a syntax error (``INVALID_ARGUMENT``) status. 
The number of + [ResultSet][google.spanner.v1.ResultSet] messages indicates that + the third statement failed, and the fourth and fifth statements + were not executed. + + Attributes: + result_sets (MutableSequence[google.cloud.spanner_v1.types.ResultSet]): + One [ResultSet][google.spanner.v1.ResultSet] for each + statement in the request that ran successfully, in the same + order as the statements in the request. Each + [ResultSet][google.spanner.v1.ResultSet] does not contain + any rows. The + [ResultSetStats][google.spanner.v1.ResultSetStats] in each + [ResultSet][google.spanner.v1.ResultSet] contains the number + of rows modified by the statement. + + Only the first [ResultSet][google.spanner.v1.ResultSet] in + the response contains valid + [ResultSetMetadata][google.spanner.v1.ResultSetMetadata]. + status (google.rpc.status_pb2.Status): + If all DML statements are executed successfully, the status + is ``OK``. Otherwise, the error status of the first failed + statement. + precommit_token (google.cloud.spanner_v1.types.MultiplexedSessionPrecommitToken): + Optional. A precommit token is included if the read-write + transaction is on a multiplexed session. Pass the precommit + token with the highest sequence number from this transaction + attempt to the + [Commit][google.spanner.v1.Spanner.Commit] request for this + transaction. + """ + + result_sets: MutableSequence[result_set.ResultSet] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message=result_set.ResultSet, + ) + status: status_pb2.Status = proto.Field( + proto.MESSAGE, + number=2, + message=status_pb2.Status, + ) + precommit_token: gs_transaction.MultiplexedSessionPrecommitToken = proto.Field( + proto.MESSAGE, + number=3, + message=gs_transaction.MultiplexedSessionPrecommitToken, + ) + + +class PartitionOptions(proto.Message): + r"""Options for a ``PartitionQueryRequest`` and + ``PartitionReadRequest``. + + Attributes: + partition_size_bytes (int): + **Note:** This hint is currently ignored by + ``PartitionQuery`` and ``PartitionRead`` requests. + + The desired data size for each partition generated. The + default for this option is currently 1 GiB. This is only a + hint. The actual size of each partition can be smaller or + larger than this size request. + max_partitions (int): + **Note:** This hint is currently ignored by + ``PartitionQuery`` and ``PartitionRead`` requests. + + The desired maximum number of partitions to return. For + example, this might be set to the number of workers + available. The default for this option is currently 10,000. + The maximum value is currently 200,000. This is only a hint. + The actual number of partitions returned can be smaller or + larger than this maximum count request. + """ + + partition_size_bytes: int = proto.Field( + proto.INT64, + number=1, + ) + max_partitions: int = proto.Field( + proto.INT64, + number=2, + ) + + +class PartitionQueryRequest(proto.Message): + r"""The request for + [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] + + Attributes: + session (str): + Required. The session used to create the + partitions. + transaction (google.cloud.spanner_v1.types.TransactionSelector): + Read-only snapshot transactions are + supported; read-write and single-use + transactions are not. + sql (str): + Required. The query request to generate partitions for. The + request fails if the query isn't root partitionable. For a + query to be root partitionable, it needs to satisfy a few + conditions.
For example, if the query execution plan + contains a distributed union operator, then it must be the + first operator in the plan. For more information about other + conditions, see `Read data in + parallel <https://cloud.google.com/spanner/docs/reads#read_data_in_parallel>`__. + + The query request must not contain DML commands, such as + ``INSERT``, ``UPDATE``, or ``DELETE``. Use + [``ExecuteStreamingSql``][google.spanner.v1.Spanner.ExecuteStreamingSql] + with a ``PartitionedDml`` transaction for large, + partition-friendly DML operations. + params (google.protobuf.struct_pb2.Struct): + Parameter names and values that bind to placeholders in the + SQL string. + + A parameter placeholder consists of the ``@`` character + followed by the parameter name (for example, + ``@firstName``). Parameter names can contain letters, + numbers, and underscores. + + Parameters can appear anywhere that a literal value is + expected. The same parameter name can be used more than + once, for example: + + ``"WHERE id > @msg_id AND id < @msg_id + 100"`` + + It's an error to execute a SQL statement with unbound + parameters. + param_types (MutableMapping[str, google.cloud.spanner_v1.types.Type]): + It isn't always possible for Cloud Spanner to infer the + right SQL type from a JSON value. For example, values of + type ``BYTES`` and values of type ``STRING`` both appear in + [params][google.spanner.v1.PartitionQueryRequest.params] as + JSON strings. + + In these cases, ``param_types`` can be used to specify the + exact SQL type for some or all of the SQL query parameters. + See the definition of [Type][google.spanner.v1.Type] for + more information about SQL types. + partition_options (google.cloud.spanner_v1.types.PartitionOptions): + Additional options that affect how many + partitions are created. + """ + + session: str = proto.Field( + proto.STRING, + number=1, + ) + transaction: gs_transaction.TransactionSelector = proto.Field( + proto.MESSAGE, + number=2, + message=gs_transaction.TransactionSelector, + ) + sql: str = proto.Field( + proto.STRING, + number=3, + ) + params: struct_pb2.Struct = proto.Field( + proto.MESSAGE, + number=4, + message=struct_pb2.Struct, + ) + param_types: MutableMapping[str, gs_type.Type] = proto.MapField( + proto.STRING, + proto.MESSAGE, + number=5, + message=gs_type.Type, + ) + partition_options: "PartitionOptions" = proto.Field( + proto.MESSAGE, + number=6, + message="PartitionOptions", + ) + + +class PartitionReadRequest(proto.Message): + r"""The request for + [PartitionRead][google.spanner.v1.Spanner.PartitionRead] + + Attributes: + session (str): + Required. The session used to create the + partitions. + transaction (google.cloud.spanner_v1.types.TransactionSelector): + Read-only snapshot transactions are + supported; read-write and single-use + transactions are not. + table (str): + Required. The name of the table in the + database to be read. + index (str): + If non-empty, the name of an index on + [table][google.spanner.v1.PartitionReadRequest.table]. This + index is used instead of the table primary key when + interpreting + [key_set][google.spanner.v1.PartitionReadRequest.key_set] + and sorting result rows. See + [key_set][google.spanner.v1.PartitionReadRequest.key_set] + for further information. + columns (MutableSequence[str]): + The columns of + [table][google.spanner.v1.PartitionReadRequest.table] to be + returned for each row matching this request. + key_set (google.cloud.spanner_v1.types.KeySet): + Required. ``key_set`` identifies the rows to be yielded.
+ ``key_set`` names the primary keys of the rows in + [table][google.spanner.v1.PartitionReadRequest.table] to be + yielded, unless + [index][google.spanner.v1.PartitionReadRequest.index] is + present. If + [index][google.spanner.v1.PartitionReadRequest.index] is + present, then + [key_set][google.spanner.v1.PartitionReadRequest.key_set] + instead names index keys in + [index][google.spanner.v1.PartitionReadRequest.index]. + + It isn't an error for the ``key_set`` to name rows that + don't exist in the database. Read yields nothing for + nonexistent rows. + partition_options (google.cloud.spanner_v1.types.PartitionOptions): + Additional options that affect how many + partitions are created. + """ + + session: str = proto.Field( + proto.STRING, + number=1, + ) + transaction: gs_transaction.TransactionSelector = proto.Field( + proto.MESSAGE, + number=2, + message=gs_transaction.TransactionSelector, + ) + table: str = proto.Field( + proto.STRING, + number=3, + ) + index: str = proto.Field( + proto.STRING, + number=4, + ) + columns: MutableSequence[str] = proto.RepeatedField( + proto.STRING, + number=5, + ) + key_set: keys.KeySet = proto.Field( + proto.MESSAGE, + number=6, + message=keys.KeySet, + ) + partition_options: "PartitionOptions" = proto.Field( + proto.MESSAGE, + number=9, + message="PartitionOptions", + ) + + +class Partition(proto.Message): + r"""Information returned for each partition returned in a + PartitionResponse. + + Attributes: + partition_token (bytes): + This token can be passed to ``Read``, ``StreamingRead``, + ``ExecuteSql``, or ``ExecuteStreamingSql`` requests to + restrict the results to those identified by this partition + token. + """ + + partition_token: bytes = proto.Field( + proto.BYTES, + number=1, + ) + + +class PartitionResponse(proto.Message): + r"""The response for + [PartitionQuery][google.spanner.v1.Spanner.PartitionQuery] or + [PartitionRead][google.spanner.v1.Spanner.PartitionRead] + + Attributes: + partitions (MutableSequence[google.cloud.spanner_v1.types.Partition]): + Partitions created by this request. + transaction (google.cloud.spanner_v1.types.Transaction): + Transaction created by this request. + """ + + partitions: MutableSequence["Partition"] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message="Partition", + ) + transaction: gs_transaction.Transaction = proto.Field( + proto.MESSAGE, + number=2, + message=gs_transaction.Transaction, + ) + + +class ReadRequest(proto.Message): + r"""The request for [Read][google.spanner.v1.Spanner.Read] and + [StreamingRead][google.spanner.v1.Spanner.StreamingRead]. + + Attributes: + session (str): + Required. The session in which the read + should be performed. + transaction (google.cloud.spanner_v1.types.TransactionSelector): + The transaction to use. If none is provided, + the default is a temporary read-only transaction + with strong concurrency. + table (str): + Required. The name of the table in the + database to be read. + index (str): + If non-empty, the name of an index on + [table][google.spanner.v1.ReadRequest.table]. This index is + used instead of the table primary key when interpreting + [key_set][google.spanner.v1.ReadRequest.key_set] and sorting + result rows. See + [key_set][google.spanner.v1.ReadRequest.key_set] for further + information. + columns (MutableSequence[str]): + Required. The columns of + [table][google.spanner.v1.ReadRequest.table] to be returned + for each row matching this request. + key_set (google.cloud.spanner_v1.types.KeySet): + Required. 
``key_set`` identifies the rows to be yielded. + ``key_set`` names the primary keys of the rows in + [table][google.spanner.v1.ReadRequest.table] to be yielded, + unless [index][google.spanner.v1.ReadRequest.index] is + present. If [index][google.spanner.v1.ReadRequest.index] is + present, then + [key_set][google.spanner.v1.ReadRequest.key_set] instead + names index keys in + [index][google.spanner.v1.ReadRequest.index]. + + If the + [partition_token][google.spanner.v1.ReadRequest.partition_token] + field is empty, rows are yielded in table primary key order + (if [index][google.spanner.v1.ReadRequest.index] is empty) + or index key order (if + [index][google.spanner.v1.ReadRequest.index] is non-empty). + If the + [partition_token][google.spanner.v1.ReadRequest.partition_token] + field isn't empty, rows are yielded in an unspecified order. + + It isn't an error for the ``key_set`` to name rows that + don't exist in the database. Read yields nothing for + nonexistent rows. + limit (int): + If greater than zero, only the first ``limit`` rows are + yielded. If ``limit`` is zero, the default is no limit. A + limit can't be specified if ``partition_token`` is set. + resume_token (bytes): + If this request is resuming a previously interrupted read, + ``resume_token`` should be copied from the last + [PartialResultSet][google.spanner.v1.PartialResultSet] + yielded before the interruption. Doing this enables the new + read to resume where the last read left off. The rest of the + request parameters must exactly match the request that + yielded this token. + partition_token (bytes): + If present, results are restricted to the specified + partition previously created using ``PartitionRead``. There + must be an exact match for the values of fields common to + this message and the ``PartitionReadRequest`` message used to + create this ``partition_token``. + request_options (google.cloud.spanner_v1.types.RequestOptions): + Common options for this request. + directed_read_options (google.cloud.spanner_v1.types.DirectedReadOptions): + Directed read options for this request. + data_boost_enabled (bool): + If this is for a partitioned read and this field is set to + ``true``, the request is executed with Spanner Data Boost + independent compute resources. + + If the field is set to ``true`` but the request doesn't set + ``partition_token``, the API returns an ``INVALID_ARGUMENT`` + error. + order_by (google.cloud.spanner_v1.types.ReadRequest.OrderBy): + Optional. Order for the returned rows. + + By default, Spanner returns result rows in primary key order + except for PartitionRead requests. For applications that + don't require rows to be returned in primary key + (``ORDER_BY_PRIMARY_KEY``) order, setting the + ``ORDER_BY_NO_ORDER`` option allows Spanner to optimize row + retrieval, resulting in lower latencies in certain cases + (for example, bulk point lookups). + lock_hint (google.cloud.spanner_v1.types.ReadRequest.LockHint): + Optional. Lock hint for the request; it can + only be used with read-write transactions. + """ + + class OrderBy(proto.Enum): + r"""An option to control the order in which rows are returned + from a read. + + Values: + ORDER_BY_UNSPECIFIED (0): + Default value. + + ``ORDER_BY_UNSPECIFIED`` is equivalent to + ``ORDER_BY_PRIMARY_KEY``. + ORDER_BY_PRIMARY_KEY (1): + Read rows are returned in primary key order. + + In the event that this option is used in conjunction with + the ``partition_token`` field, the API returns an + ``INVALID_ARGUMENT`` error.
+ ORDER_BY_NO_ORDER (2): + Read rows are returned in any order. + """ + ORDER_BY_UNSPECIFIED = 0 + ORDER_BY_PRIMARY_KEY = 1 + ORDER_BY_NO_ORDER = 2 + + class LockHint(proto.Enum): + r"""A lock hint mechanism for reads done within a transaction. + + Values: + LOCK_HINT_UNSPECIFIED (0): + Default value. + + ``LOCK_HINT_UNSPECIFIED`` is equivalent to + ``LOCK_HINT_SHARED``. + LOCK_HINT_SHARED (1): + Acquire shared locks. + + By default when you perform a read as part of a read-write + transaction, Spanner acquires shared read locks, which + allows other reads to still access the data until your + transaction is ready to commit. When your transaction is + committing and writes are being applied, the transaction + attempts to upgrade to an exclusive lock for any data you + are writing. For more information about locks, see `Lock + modes `__. + LOCK_HINT_EXCLUSIVE (2): + Acquire exclusive locks. + + Requesting exclusive locks is beneficial if you observe high + write contention, which means you notice that multiple + transactions are concurrently trying to read and write to + the same data, resulting in a large number of aborts. This + problem occurs when two transactions initially acquire + shared locks and then both try to upgrade to exclusive locks + at the same time. In this situation both transactions are + waiting for the other to give up their lock, resulting in a + deadlocked situation. Spanner is able to detect this + occurring and force one of the transactions to abort. + However, this is a slow and expensive operation and results + in lower performance. In this case it makes sense to acquire + exclusive locks at the start of the transaction because then + when multiple transactions try to act on the same data, they + automatically get serialized. Each transaction waits its + turn to acquire the lock and avoids getting into deadlock + situations. + + Because the exclusive lock hint is just a hint, it shouldn't + be considered equivalent to a mutex. In other words, you + shouldn't use Spanner exclusive locks as a mutual exclusion + mechanism for the execution of code outside of Spanner. + + **Note:** Request exclusive locks judiciously because they + block others from reading that data for the entire + transaction, rather than just when the writes are being + performed. Unless you observe high write contention, you + should use the default of shared read locks so you don't + prematurely block other clients from reading the data that + you're writing to. 
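# A minimal usage sketch (illustrative only, not part of this diff) for the
# two read options documented above. The table and column names are
# hypothetical placeholders; the nested enums are referenced the way
# proto-plus exposes them on the generated message class.
from google.cloud.spanner_v1.types import ReadRequest

example_read_request = ReadRequest(
    table="Singers",
    columns=["SingerId", "FirstName"],
    order_by=ReadRequest.OrderBy.ORDER_BY_NO_ORDER,
    lock_hint=ReadRequest.LockHint.LOCK_HINT_SHARED,
)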
+ """ + LOCK_HINT_UNSPECIFIED = 0 + LOCK_HINT_SHARED = 1 + LOCK_HINT_EXCLUSIVE = 2 + + session: str = proto.Field( + proto.STRING, + number=1, + ) + transaction: gs_transaction.TransactionSelector = proto.Field( + proto.MESSAGE, + number=2, + message=gs_transaction.TransactionSelector, + ) + table: str = proto.Field( + proto.STRING, + number=3, + ) + index: str = proto.Field( + proto.STRING, + number=4, + ) + columns: MutableSequence[str] = proto.RepeatedField( + proto.STRING, + number=5, + ) + key_set: keys.KeySet = proto.Field( + proto.MESSAGE, + number=6, + message=keys.KeySet, + ) + limit: int = proto.Field( + proto.INT64, + number=8, + ) + resume_token: bytes = proto.Field( + proto.BYTES, + number=9, + ) + partition_token: bytes = proto.Field( + proto.BYTES, + number=10, + ) + request_options: "RequestOptions" = proto.Field( + proto.MESSAGE, + number=11, + message="RequestOptions", + ) + directed_read_options: "DirectedReadOptions" = proto.Field( + proto.MESSAGE, + number=14, + message="DirectedReadOptions", + ) + data_boost_enabled: bool = proto.Field( + proto.BOOL, + number=15, + ) + order_by: OrderBy = proto.Field( + proto.ENUM, + number=16, + enum=OrderBy, + ) + lock_hint: LockHint = proto.Field( + proto.ENUM, + number=17, + enum=LockHint, + ) + + +class BeginTransactionRequest(proto.Message): + r"""The request for + [BeginTransaction][google.spanner.v1.Spanner.BeginTransaction]. + + Attributes: + session (str): + Required. The session in which the + transaction runs. + options (google.cloud.spanner_v1.types.TransactionOptions): + Required. Options for the new transaction. + request_options (google.cloud.spanner_v1.types.RequestOptions): + Common options for this request. Priority is ignored for + this request. Setting the priority in this + ``request_options`` struct doesn't do anything. To set the + priority for a transaction, set it on the reads and writes + that are part of this transaction instead. + mutation_key (google.cloud.spanner_v1.types.Mutation): + Optional. Required for read-write + transactions on a multiplexed session that + commit mutations but don't perform any reads or + queries. You must randomly select one of the + mutations from the mutation set and send it as a + part of this request. + """ + + session: str = proto.Field( + proto.STRING, + number=1, + ) + options: gs_transaction.TransactionOptions = proto.Field( + proto.MESSAGE, + number=2, + message=gs_transaction.TransactionOptions, + ) + request_options: "RequestOptions" = proto.Field( + proto.MESSAGE, + number=3, + message="RequestOptions", + ) + mutation_key: mutation.Mutation = proto.Field( + proto.MESSAGE, + number=4, + message=mutation.Mutation, + ) + + +class CommitRequest(proto.Message): + r"""The request for [Commit][google.spanner.v1.Spanner.Commit]. + + This message has `oneof`_ fields (mutually exclusive fields). + For each oneof, at most one member field can be set at the same time. + Setting any member of the oneof automatically clears all other + members. + + .. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields + + Attributes: + session (str): + Required. The session in which the + transaction to be committed is running. + transaction_id (bytes): + Commit a previously-started transaction. + + This field is a member of `oneof`_ ``transaction``. + single_use_transaction (google.cloud.spanner_v1.types.TransactionOptions): + Execute mutations in a temporary transaction. 
Note that + unlike commit of a previously-started transaction, commit + with a temporary transaction is non-idempotent. That is, if + the ``CommitRequest`` is sent to Cloud Spanner more than + once (for instance, due to retries in the application, or in + the transport library), it's possible that the mutations are + executed more than once. If this is undesirable, use + [BeginTransaction][google.spanner.v1.Spanner.BeginTransaction] + and [Commit][google.spanner.v1.Spanner.Commit] instead. + + This field is a member of `oneof`_ ``transaction``. + mutations (MutableSequence[google.cloud.spanner_v1.types.Mutation]): + The mutations to be executed when this + transaction commits. All mutations are applied + atomically, in the order they appear in this + list. + return_commit_stats (bool): + If ``true``, then statistics related to the transaction is + included in the + [CommitResponse][google.spanner.v1.CommitResponse.commit_stats]. + Default value is ``false``. + max_commit_delay (google.protobuf.duration_pb2.Duration): + Optional. The amount of latency this request + is configured to incur in order to improve + throughput. If this field isn't set, Spanner + assumes requests are relatively latency + sensitive and automatically determines an + appropriate delay time. You can specify a commit + delay value between 0 and 500 ms. + request_options (google.cloud.spanner_v1.types.RequestOptions): + Common options for this request. + precommit_token (google.cloud.spanner_v1.types.MultiplexedSessionPrecommitToken): + Optional. If the read-write transaction was executed on a + multiplexed session, then you must include the precommit + token with the highest sequence number received in this + transaction attempt. Failing to do so results in a + ``FailedPrecondition`` error. + """ + + session: str = proto.Field( + proto.STRING, + number=1, + ) + transaction_id: bytes = proto.Field( + proto.BYTES, + number=2, + oneof="transaction", + ) + single_use_transaction: gs_transaction.TransactionOptions = proto.Field( + proto.MESSAGE, + number=3, + oneof="transaction", + message=gs_transaction.TransactionOptions, + ) + mutations: MutableSequence[mutation.Mutation] = proto.RepeatedField( + proto.MESSAGE, + number=4, + message=mutation.Mutation, + ) + return_commit_stats: bool = proto.Field( + proto.BOOL, + number=5, + ) + max_commit_delay: duration_pb2.Duration = proto.Field( + proto.MESSAGE, + number=8, + message=duration_pb2.Duration, + ) + request_options: "RequestOptions" = proto.Field( + proto.MESSAGE, + number=6, + message="RequestOptions", + ) + precommit_token: gs_transaction.MultiplexedSessionPrecommitToken = proto.Field( + proto.MESSAGE, + number=9, + message=gs_transaction.MultiplexedSessionPrecommitToken, + ) + + +class RollbackRequest(proto.Message): + r"""The request for [Rollback][google.spanner.v1.Spanner.Rollback]. + + Attributes: + session (str): + Required. The session in which the + transaction to roll back is running. + transaction_id (bytes): + Required. The transaction to roll back. + """ + + session: str = proto.Field( + proto.STRING, + number=1, + ) + transaction_id: bytes = proto.Field( + proto.BYTES, + number=2, + ) + + +class BatchWriteRequest(proto.Message): + r"""The request for [BatchWrite][google.spanner.v1.Spanner.BatchWrite]. + + Attributes: + session (str): + Required. The session in which the batch + request is to be run. + request_options (google.cloud.spanner_v1.types.RequestOptions): + Common options for this request. 
+ mutation_groups (MutableSequence[google.cloud.spanner_v1.types.BatchWriteRequest.MutationGroup]): + Required. The groups of mutations to be + applied. + exclude_txn_from_change_streams (bool): + Optional. If you don't set the + ``exclude_txn_from_change_streams`` option or if it's set to + ``false``, then any change streams monitoring columns + modified by transactions will capture the updates made + within that transaction. + """ + + class MutationGroup(proto.Message): + r"""A group of mutations to be committed together. Related + mutations should be placed in a group. For example, two + mutations inserting rows with the same primary key prefix in + both parent and child tables are related. + + Attributes: + mutations (MutableSequence[google.cloud.spanner_v1.types.Mutation]): + Required. The mutations in this group. + """ + + mutations: MutableSequence[mutation.Mutation] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message=mutation.Mutation, + ) + + session: str = proto.Field( + proto.STRING, + number=1, + ) + request_options: "RequestOptions" = proto.Field( + proto.MESSAGE, + number=3, + message="RequestOptions", + ) + mutation_groups: MutableSequence[MutationGroup] = proto.RepeatedField( + proto.MESSAGE, + number=4, + message=MutationGroup, + ) + exclude_txn_from_change_streams: bool = proto.Field( + proto.BOOL, + number=5, + ) + + +class BatchWriteResponse(proto.Message): + r"""The result of applying a batch of mutations. + + Attributes: + indexes (MutableSequence[int]): + The mutation groups applied in this batch. The values index + into the ``mutation_groups`` field in the corresponding + ``BatchWriteRequest``. + status (google.rpc.status_pb2.Status): + An ``OK`` status indicates success. Any other status + indicates a failure. + commit_timestamp (google.protobuf.timestamp_pb2.Timestamp): + The commit timestamp of the transaction that applied this + batch. Present if ``status`` is ``OK``, absent otherwise. + """ + + indexes: MutableSequence[int] = proto.RepeatedField( + proto.INT32, + number=1, + ) + status: status_pb2.Status = proto.Field( + proto.MESSAGE, + number=2, + message=status_pb2.Status, + ) + commit_timestamp: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=3, + message=timestamp_pb2.Timestamp, + ) + + +__all__ = tuple(sorted(__protobuf__.manifest)) diff --git a/google/cloud/spanner_v1/types/transaction.py b/google/cloud/spanner_v1/types/transaction.py new file mode 100644 index 0000000000..447c310548 --- /dev/null +++ b/google/cloud/spanner_v1/types/transaction.py @@ -0,0 +1,492 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
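# A minimal sketch (illustrative only) of assembling the
# BatchWriteRequest.MutationGroup message defined above. The table, columns,
# and values are hypothetical placeholders, and it assumes proto-plus coerces
# the plain Python lists passed to ``values`` into ListValue messages via its
# well-known-type marshaling.
from google.cloud.spanner_v1.types import BatchWriteRequest, Mutation

example_group = BatchWriteRequest.MutationGroup(
    mutations=[
        Mutation(
            insert=Mutation.Write(
                table="Singers",
                columns=["SingerId", "FirstName"],
                values=[["1", "Marc"]],
            )
        )
    ]
)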
+# +from __future__ import annotations + +from typing import MutableMapping, MutableSequence + +import proto # type: ignore + +from google.protobuf import duration_pb2 # type: ignore +from google.protobuf import timestamp_pb2 # type: ignore + + +__protobuf__ = proto.module( + package="google.spanner.v1", + manifest={ + "TransactionOptions", + "Transaction", + "TransactionSelector", + "MultiplexedSessionPrecommitToken", + }, +) + + +class TransactionOptions(proto.Message): + r"""Options to use for transactions. + + This message has `oneof`_ fields (mutually exclusive fields). + For each oneof, at most one member field can be set at the same time. + Setting any member of the oneof automatically clears all other + members. + + .. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields + + Attributes: + read_write (google.cloud.spanner_v1.types.TransactionOptions.ReadWrite): + Transaction may write. + + Authorization to begin a read-write transaction requires + ``spanner.databases.beginOrRollbackReadWriteTransaction`` + permission on the ``session`` resource. + + This field is a member of `oneof`_ ``mode``. + partitioned_dml (google.cloud.spanner_v1.types.TransactionOptions.PartitionedDml): + Partitioned DML transaction. + + Authorization to begin a Partitioned DML transaction + requires + ``spanner.databases.beginPartitionedDmlTransaction`` + permission on the ``session`` resource. + + This field is a member of `oneof`_ ``mode``. + read_only (google.cloud.spanner_v1.types.TransactionOptions.ReadOnly): + Transaction does not write. + + Authorization to begin a read-only transaction requires + ``spanner.databases.beginReadOnlyTransaction`` permission on + the ``session`` resource. + + This field is a member of `oneof`_ ``mode``. + exclude_txn_from_change_streams (bool): + When ``exclude_txn_from_change_streams`` is set to ``true``, + it prevents read or write transactions from being tracked in + change streams. + + - If the DDL option ``allow_txn_exclusion`` is set to + ``true``, then the updates made within this transaction + aren't recorded in the change stream. + + - If you don't set the DDL option ``allow_txn_exclusion`` or + if it's set to ``false``, then the updates made within + this transaction are recorded in the change stream. + + When ``exclude_txn_from_change_streams`` is set to ``false`` + or not set, modifications from this transaction are recorded + in all change streams that are tracking columns modified by + these transactions. + + The ``exclude_txn_from_change_streams`` option can only be + specified for read-write or partitioned DML transactions, + otherwise the API returns an ``INVALID_ARGUMENT`` error. + isolation_level (google.cloud.spanner_v1.types.TransactionOptions.IsolationLevel): + Isolation level for the transaction. + """ + + class IsolationLevel(proto.Enum): + r"""``IsolationLevel`` is used when setting ``isolation_level`` for a + transaction. + + Values: + ISOLATION_LEVEL_UNSPECIFIED (0): + Default value. + + If the value is not specified, the ``SERIALIZABLE`` + isolation level is used. + SERIALIZABLE (1): + All transactions appear as if they executed in a serial + order, even if some of the reads, writes, and other + operations of distinct transactions actually occurred in + parallel. Spanner assigns commit timestamps that reflect the + order of committed transactions to implement this property. + Spanner offers a stronger guarantee than serializability + called external consistency. 
For more information, see + `TrueTime and external + consistency `__. + REPEATABLE_READ (2): + All reads performed during the transaction observe a + consistent snapshot of the database, and the transaction is + only successfully committed in the absence of conflicts + between its updates and any concurrent updates that have + occurred since that snapshot. Consequently, in contrast to + ``SERIALIZABLE`` transactions, only write-write conflicts + are detected in snapshot transactions. + + This isolation level does not support Read-only and + Partitioned DML transactions. + + When ``REPEATABLE_READ`` is specified on a read-write + transaction, the locking semantics default to + ``OPTIMISTIC``. + """ + ISOLATION_LEVEL_UNSPECIFIED = 0 + SERIALIZABLE = 1 + REPEATABLE_READ = 2 + + class ReadWrite(proto.Message): + r"""Message type to initiate a read-write transaction. Currently + this transaction type has no options. + + Attributes: + read_lock_mode (google.cloud.spanner_v1.types.TransactionOptions.ReadWrite.ReadLockMode): + Read lock mode for the transaction. + multiplexed_session_previous_transaction_id (bytes): + Optional. Clients should pass the transaction + ID of the previous transaction attempt that was + aborted if this transaction is being executed on + a multiplexed session. + """ + + class ReadLockMode(proto.Enum): + r"""``ReadLockMode`` is used to set the read lock mode for read-write + transactions. + + Values: + READ_LOCK_MODE_UNSPECIFIED (0): + Default value. + + - If isolation level is + [REPEATABLE_READ][google.spanner.v1.TransactionOptions.IsolationLevel.REPEATABLE_READ], + then it is an error to specify ``read_lock_mode``. Locking + semantics default to ``OPTIMISTIC``. No validation checks + are done for reads, except to validate that the data that + was served at the snapshot time is unchanged at commit + time in the following cases: + + 1. reads done as part of queries that use + ``SELECT FOR UPDATE`` + 2. reads done as part of statements with a + ``LOCK_SCANNED_RANGES`` hint + 3. reads done as part of DML statements + + - At all other isolation levels, if ``read_lock_mode`` is + the default value, then pessimistic read locks are used. + PESSIMISTIC (1): + Pessimistic lock mode. + + Read locks are acquired immediately on read. Semantics + described only applies to + [SERIALIZABLE][google.spanner.v1.TransactionOptions.IsolationLevel.SERIALIZABLE] + isolation. + OPTIMISTIC (2): + Optimistic lock mode. + + Locks for reads within the transaction are not acquired on + read. Instead the locks are acquired on a commit to validate + that read/queried data has not changed since the transaction + started. Semantics described only applies to + [SERIALIZABLE][google.spanner.v1.TransactionOptions.IsolationLevel.SERIALIZABLE] + isolation. + """ + READ_LOCK_MODE_UNSPECIFIED = 0 + PESSIMISTIC = 1 + OPTIMISTIC = 2 + + read_lock_mode: "TransactionOptions.ReadWrite.ReadLockMode" = proto.Field( + proto.ENUM, + number=1, + enum="TransactionOptions.ReadWrite.ReadLockMode", + ) + multiplexed_session_previous_transaction_id: bytes = proto.Field( + proto.BYTES, + number=2, + ) + + class PartitionedDml(proto.Message): + r"""Message type to initiate a Partitioned DML transaction.""" + + class ReadOnly(proto.Message): + r"""Message type to initiate a read-only transaction. + + This message has `oneof`_ fields (mutually exclusive fields). + For each oneof, at most one member field can be set at the same time. + Setting any member of the oneof automatically clears all other + members. + + .. 
_oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields + + Attributes: + strong (bool): + Read at a timestamp where all previously + committed transactions are visible. + + This field is a member of `oneof`_ ``timestamp_bound``. + min_read_timestamp (google.protobuf.timestamp_pb2.Timestamp): + Executes all reads at a timestamp >= ``min_read_timestamp``. + + This is useful for requesting fresher data than some + previous read, or data that is fresh enough to observe the + effects of some previously committed transaction whose + timestamp is known. + + Note that this option can only be used in single-use + transactions. + + A timestamp in RFC3339 UTC "Zulu" format, accurate to + nanoseconds. Example: ``"2014-10-02T15:01:23.045123456Z"``. + + This field is a member of `oneof`_ ``timestamp_bound``. + max_staleness (google.protobuf.duration_pb2.Duration): + Read data at a timestamp >= ``NOW - max_staleness`` seconds. + Guarantees that all writes that have committed more than the + specified number of seconds ago are visible. Because Cloud + Spanner chooses the exact timestamp, this mode works even if + the client's local clock is substantially skewed from Cloud + Spanner commit timestamps. + + Useful for reading the freshest data available at a nearby + replica, while bounding the possible staleness if the local + replica has fallen behind. + + Note that this option can only be used in single-use + transactions. + + This field is a member of `oneof`_ ``timestamp_bound``. + read_timestamp (google.protobuf.timestamp_pb2.Timestamp): + Executes all reads at the given timestamp. Unlike other + modes, reads at a specific timestamp are repeatable; the + same read at the same timestamp always returns the same + data. If the timestamp is in the future, the read is blocked + until the specified timestamp, modulo the read's deadline. + + Useful for large scale consistent reads such as mapreduces, + or for coordinating many reads against a consistent snapshot + of the data. + + A timestamp in RFC3339 UTC "Zulu" format, accurate to + nanoseconds. Example: ``"2014-10-02T15:01:23.045123456Z"``. + + This field is a member of `oneof`_ ``timestamp_bound``. + exact_staleness (google.protobuf.duration_pb2.Duration): + Executes all reads at a timestamp that is + ``exact_staleness`` old. The timestamp is chosen soon after + the read is started. + + Guarantees that all writes that have committed more than the + specified number of seconds ago are visible. Because Cloud + Spanner chooses the exact timestamp, this mode works even if + the client's local clock is substantially skewed from Cloud + Spanner commit timestamps. + + Useful for reading at nearby replicas without the + distributed timestamp negotiation overhead of + ``max_staleness``. + + This field is a member of `oneof`_ ``timestamp_bound``. + return_read_timestamp (bool): + If true, the Cloud Spanner-selected read timestamp is + included in the [Transaction][google.spanner.v1.Transaction] + message that describes the transaction. 
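# A minimal sketch (illustrative only) of the timestamp-bound oneof described
# above: exactly one bound may be set per ReadOnly message, while
# ``return_read_timestamp`` sits outside the oneof and can be combined with
# any bound. It assumes proto-plus accepts a datetime.timedelta for the
# Duration-typed ``exact_staleness`` field.
import datetime

from google.cloud.spanner_v1.types import TransactionOptions

example_read_only = TransactionOptions(
    read_only=TransactionOptions.ReadOnly(
        exact_staleness=datetime.timedelta(seconds=15),
        return_read_timestamp=True,
    )
)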
+ """ + + strong: bool = proto.Field( + proto.BOOL, + number=1, + oneof="timestamp_bound", + ) + min_read_timestamp: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=2, + oneof="timestamp_bound", + message=timestamp_pb2.Timestamp, + ) + max_staleness: duration_pb2.Duration = proto.Field( + proto.MESSAGE, + number=3, + oneof="timestamp_bound", + message=duration_pb2.Duration, + ) + read_timestamp: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=4, + oneof="timestamp_bound", + message=timestamp_pb2.Timestamp, + ) + exact_staleness: duration_pb2.Duration = proto.Field( + proto.MESSAGE, + number=5, + oneof="timestamp_bound", + message=duration_pb2.Duration, + ) + return_read_timestamp: bool = proto.Field( + proto.BOOL, + number=6, + ) + + read_write: ReadWrite = proto.Field( + proto.MESSAGE, + number=1, + oneof="mode", + message=ReadWrite, + ) + partitioned_dml: PartitionedDml = proto.Field( + proto.MESSAGE, + number=3, + oneof="mode", + message=PartitionedDml, + ) + read_only: ReadOnly = proto.Field( + proto.MESSAGE, + number=2, + oneof="mode", + message=ReadOnly, + ) + exclude_txn_from_change_streams: bool = proto.Field( + proto.BOOL, + number=5, + ) + isolation_level: IsolationLevel = proto.Field( + proto.ENUM, + number=6, + enum=IsolationLevel, + ) + + +class Transaction(proto.Message): + r"""A transaction. + + Attributes: + id (bytes): + ``id`` may be used to identify the transaction in subsequent + [Read][google.spanner.v1.Spanner.Read], + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql], + [Commit][google.spanner.v1.Spanner.Commit], or + [Rollback][google.spanner.v1.Spanner.Rollback] calls. + + Single-use read-only transactions do not have IDs, because + single-use transactions do not support multiple requests. + read_timestamp (google.protobuf.timestamp_pb2.Timestamp): + For snapshot read-only transactions, the read timestamp + chosen for the transaction. Not returned by default: see + [TransactionOptions.ReadOnly.return_read_timestamp][google.spanner.v1.TransactionOptions.ReadOnly.return_read_timestamp]. + + A timestamp in RFC3339 UTC "Zulu" format, accurate to + nanoseconds. Example: ``"2014-10-02T15:01:23.045123456Z"``. + precommit_token (google.cloud.spanner_v1.types.MultiplexedSessionPrecommitToken): + A precommit token is included in the response of a + BeginTransaction request if the read-write transaction is on + a multiplexed session and a mutation_key was specified in + the + [BeginTransaction][google.spanner.v1.BeginTransactionRequest]. + The precommit token with the highest sequence number from + this transaction attempt should be passed to the + [Commit][google.spanner.v1.Spanner.Commit] request for this + transaction. + """ + + id: bytes = proto.Field( + proto.BYTES, + number=1, + ) + read_timestamp: timestamp_pb2.Timestamp = proto.Field( + proto.MESSAGE, + number=2, + message=timestamp_pb2.Timestamp, + ) + precommit_token: "MultiplexedSessionPrecommitToken" = proto.Field( + proto.MESSAGE, + number=3, + message="MultiplexedSessionPrecommitToken", + ) + + +class TransactionSelector(proto.Message): + r"""This message is used to select the transaction in which a + [Read][google.spanner.v1.Spanner.Read] or + [ExecuteSql][google.spanner.v1.Spanner.ExecuteSql] call runs. + + See [TransactionOptions][google.spanner.v1.TransactionOptions] for + more information about transactions. + + This message has `oneof`_ fields (mutually exclusive fields). + For each oneof, at most one member field can be set at the same time. 
+ Setting any member of the oneof automatically clears all other + members. + + .. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields + + Attributes: + single_use (google.cloud.spanner_v1.types.TransactionOptions): + Execute the read or SQL query in a temporary + transaction. This is the most efficient way to + execute a transaction that consists of a single + SQL query. + + This field is a member of `oneof`_ ``selector``. + id (bytes): + Execute the read or SQL query in a + previously-started transaction. + + This field is a member of `oneof`_ ``selector``. + begin (google.cloud.spanner_v1.types.TransactionOptions): + Begin a new transaction and execute this read or SQL query + in it. The transaction ID of the new transaction is returned + in + [ResultSetMetadata.transaction][google.spanner.v1.ResultSetMetadata.transaction], + which is a [Transaction][google.spanner.v1.Transaction]. + + This field is a member of `oneof`_ ``selector``. + """ + + single_use: "TransactionOptions" = proto.Field( + proto.MESSAGE, + number=1, + oneof="selector", + message="TransactionOptions", + ) + id: bytes = proto.Field( + proto.BYTES, + number=2, + oneof="selector", + ) + begin: "TransactionOptions" = proto.Field( + proto.MESSAGE, + number=3, + oneof="selector", + message="TransactionOptions", + ) + + +class MultiplexedSessionPrecommitToken(proto.Message): + r"""When a read-write transaction is executed on a multiplexed session, + this precommit token is sent back to the client as a part of the + [Transaction][google.spanner.v1.Transaction] message in the + [BeginTransaction][google.spanner.v1.BeginTransactionRequest] + response and also as a part of the + [ResultSet][google.spanner.v1.ResultSet] and + [PartialResultSet][google.spanner.v1.PartialResultSet] responses. + + Attributes: + precommit_token (bytes): + Opaque precommit token. + seq_num (int): + An incrementing seq number is generated on + every precommit token that is returned. Clients + should remember the precommit token with the + highest sequence number from the current + transaction attempt. + """ + + precommit_token: bytes = proto.Field( + proto.BYTES, + number=1, + ) + seq_num: int = proto.Field( + proto.INT32, + number=2, + ) + + +__all__ = tuple(sorted(__protobuf__.manifest)) diff --git a/google/cloud/spanner_v1/types/type.py b/google/cloud/spanner_v1/types/type.py new file mode 100644 index 0000000000..d6d516569e --- /dev/null +++ b/google/cloud/spanner_v1/types/type.py @@ -0,0 +1,288 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
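# A minimal sketch (illustrative only) of the TransactionSelector oneof
# defined above: setting ``begin`` asks Spanner to start a new read-write
# transaction for this request; assigning ``single_use`` or ``id`` instead
# would clear it, since only one selector member can be set at a time.
from google.cloud.spanner_v1.types import TransactionOptions, TransactionSelector

example_selector = TransactionSelector(
    begin=TransactionOptions(
        read_write=TransactionOptions.ReadWrite(),
    )
)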
+# +from __future__ import annotations + +from typing import MutableMapping, MutableSequence + +import proto # type: ignore + + +__protobuf__ = proto.module( + package="google.spanner.v1", + manifest={ + "TypeCode", + "TypeAnnotationCode", + "Type", + "StructType", + }, +) + + +class TypeCode(proto.Enum): + r"""``TypeCode`` is used as part of [Type][google.spanner.v1.Type] to + indicate the type of a Cloud Spanner value. + + Each legal value of a type can be encoded to or decoded from a JSON + value, using the encodings described below. All Cloud Spanner values + can be ``null``, regardless of type; ``null``\ s are always encoded + as a JSON ``null``. + + Values: + TYPE_CODE_UNSPECIFIED (0): + Not specified. + BOOL (1): + Encoded as JSON ``true`` or ``false``. + INT64 (2): + Encoded as ``string``, in decimal format. + FLOAT64 (3): + Encoded as ``number``, or the strings ``"NaN"``, + ``"Infinity"``, or ``"-Infinity"``. + FLOAT32 (15): + Encoded as ``number``, or the strings ``"NaN"``, + ``"Infinity"``, or ``"-Infinity"``. + TIMESTAMP (4): + Encoded as ``string`` in RFC 3339 timestamp format. The time + zone must be present, and must be ``"Z"``. + + If the schema has the column option + ``allow_commit_timestamp=true``, the placeholder string + ``"spanner.commit_timestamp()"`` can be used to instruct the + system to insert the commit timestamp associated with the + transaction commit. + DATE (5): + Encoded as ``string`` in RFC 3339 date format. + STRING (6): + Encoded as ``string``. + BYTES (7): + Encoded as a base64-encoded ``string``, as described in RFC + 4648, section 4. + ARRAY (8): + Encoded as ``list``, where the list elements are represented + according to + [array_element_type][google.spanner.v1.Type.array_element_type]. + STRUCT (9): + Encoded as ``list``, where list element ``i`` is represented + according to + [struct_type.fields[i]][google.spanner.v1.StructType.fields]. + NUMERIC (10): + Encoded as ``string``, in decimal format or scientific + notation format. Decimal format: ``[+-]Digits[.[Digits]]`` + or ``[+-][Digits].Digits`` + + Scientific notation: + ``[+-]Digits[.[Digits]][ExponentIndicator[+-]Digits]`` or + ``[+-][Digits].Digits[ExponentIndicator[+-]Digits]`` + (ExponentIndicator is ``"e"`` or ``"E"``) + JSON (11): + Encoded as a JSON-formatted ``string`` as described in RFC + 7159. The following rules are applied when parsing JSON + input: + + - Whitespace characters are not preserved. + - If a JSON object has duplicate keys, only the first key is + preserved. + - Members of a JSON object are not guaranteed to have their + order preserved. + - JSON array elements will have their order preserved. + PROTO (13): + Encoded as a base64-encoded ``string``, as described in RFC + 4648, section 4. + ENUM (14): + Encoded as ``string``, in decimal format. + INTERVAL (16): + Encoded as ``string``, in ``ISO8601`` duration format - + ``P[n]Y[n]M[n]DT[n]H[n]M[n[.fraction]]S`` where ``n`` is an + integer. For example, ``P1Y2M3DT4H5M6.5S`` represents time + duration of 1 year, 2 months, 3 days, 4 hours, 5 minutes, + and 6.5 seconds. + UUID (17): + Encoded as ``string``, in lower-case hexa-decimal format, as + described in RFC 9562, section 4. 
+ """ + TYPE_CODE_UNSPECIFIED = 0 + BOOL = 1 + INT64 = 2 + FLOAT64 = 3 + FLOAT32 = 15 + TIMESTAMP = 4 + DATE = 5 + STRING = 6 + BYTES = 7 + ARRAY = 8 + STRUCT = 9 + NUMERIC = 10 + JSON = 11 + PROTO = 13 + ENUM = 14 + INTERVAL = 16 + UUID = 17 + + +class TypeAnnotationCode(proto.Enum): + r"""``TypeAnnotationCode`` is used as a part of + [Type][google.spanner.v1.Type] to disambiguate SQL types that should + be used for a given Cloud Spanner value. Disambiguation is needed + because the same Cloud Spanner type can be mapped to different SQL + types depending on SQL dialect. TypeAnnotationCode doesn't affect + the way value is serialized. + + Values: + TYPE_ANNOTATION_CODE_UNSPECIFIED (0): + Not specified. + PG_NUMERIC (2): + PostgreSQL compatible NUMERIC type. This annotation needs to + be applied to [Type][google.spanner.v1.Type] instances + having [NUMERIC][google.spanner.v1.TypeCode.NUMERIC] type + code to specify that values of this type should be treated + as PostgreSQL NUMERIC values. Currently this annotation is + always needed for + [NUMERIC][google.spanner.v1.TypeCode.NUMERIC] when a client + interacts with PostgreSQL-enabled Spanner databases. + PG_JSONB (3): + PostgreSQL compatible JSONB type. This annotation needs to + be applied to [Type][google.spanner.v1.Type] instances + having [JSON][google.spanner.v1.TypeCode.JSON] type code to + specify that values of this type should be treated as + PostgreSQL JSONB values. Currently this annotation is always + needed for [JSON][google.spanner.v1.TypeCode.JSON] when a + client interacts with PostgreSQL-enabled Spanner databases. + PG_OID (4): + PostgreSQL compatible OID type. This + annotation can be used by a client interacting + with PostgreSQL-enabled Spanner database to + specify that a value should be treated using the + semantics of the OID type. + """ + TYPE_ANNOTATION_CODE_UNSPECIFIED = 0 + PG_NUMERIC = 2 + PG_JSONB = 3 + PG_OID = 4 + + +class Type(proto.Message): + r"""``Type`` indicates the type of a Cloud Spanner value, as might be + stored in a table cell or returned from an SQL query. + + Attributes: + code (google.cloud.spanner_v1.types.TypeCode): + Required. The [TypeCode][google.spanner.v1.TypeCode] for + this type. + array_element_type (google.cloud.spanner_v1.types.Type): + If [code][google.spanner.v1.Type.code] == + [ARRAY][google.spanner.v1.TypeCode.ARRAY], then + ``array_element_type`` is the type of the array elements. + struct_type (google.cloud.spanner_v1.types.StructType): + If [code][google.spanner.v1.Type.code] == + [STRUCT][google.spanner.v1.TypeCode.STRUCT], then + ``struct_type`` provides type information for the struct's + fields. + type_annotation (google.cloud.spanner_v1.types.TypeAnnotationCode): + The + [TypeAnnotationCode][google.spanner.v1.TypeAnnotationCode] + that disambiguates SQL type that Spanner will use to + represent values of this type during query processing. This + is necessary for some type codes because a single + [TypeCode][google.spanner.v1.TypeCode] can be mapped to + different SQL types depending on the SQL dialect. + [type_annotation][google.spanner.v1.Type.type_annotation] + typically is not needed to process the content of a value + (it doesn't affect serialization) and clients can ignore it + on the read path. 
+ proto_type_fqn (str): + If [code][google.spanner.v1.Type.code] == + [PROTO][google.spanner.v1.TypeCode.PROTO] or + [code][google.spanner.v1.Type.code] == + [ENUM][google.spanner.v1.TypeCode.ENUM], then + ``proto_type_fqn`` is the fully qualified name of the proto + type representing the proto/enum definition. + """ + + code: "TypeCode" = proto.Field( + proto.ENUM, + number=1, + enum="TypeCode", + ) + array_element_type: "Type" = proto.Field( + proto.MESSAGE, + number=2, + message="Type", + ) + struct_type: "StructType" = proto.Field( + proto.MESSAGE, + number=3, + message="StructType", + ) + type_annotation: "TypeAnnotationCode" = proto.Field( + proto.ENUM, + number=4, + enum="TypeAnnotationCode", + ) + proto_type_fqn: str = proto.Field( + proto.STRING, + number=5, + ) + + +class StructType(proto.Message): + r"""``StructType`` defines the fields of a + [STRUCT][google.spanner.v1.TypeCode.STRUCT] type. + + Attributes: + fields (MutableSequence[google.cloud.spanner_v1.types.StructType.Field]): + The list of fields that make up this struct. Order is + significant, because values of this struct type are + represented as lists, where the order of field values + matches the order of fields in the + [StructType][google.spanner.v1.StructType]. In turn, the + order of fields matches the order of columns in a read + request, or the order of fields in the ``SELECT`` clause of + a query. + """ + + class Field(proto.Message): + r"""Message representing a single field of a struct. + + Attributes: + name (str): + The name of the field. For reads, this is the column name. + For SQL queries, it is the column alias (e.g., ``"Word"`` in + the query ``"SELECT 'hello' AS Word"``), or the column name + (e.g., ``"ColName"`` in the query + ``"SELECT ColName FROM Table"``). Some columns might have an + empty name (e.g., ``"SELECT UPPER(ColName)"``). Note that a + query result can contain multiple fields with the same name. + type_ (google.cloud.spanner_v1.types.Type): + The type of the field. + """ + + name: str = proto.Field( + proto.STRING, + number=1, + ) + type_: "Type" = proto.Field( + proto.MESSAGE, + number=2, + message="Type", + ) + + fields: MutableSequence[Field] = proto.RepeatedField( + proto.MESSAGE, + number=1, + message=Field, + ) + + +__all__ = tuple(sorted(__protobuf__.manifest)) diff --git a/noxfile.py b/noxfile.py index 22f328c4af..b101f46b2e 100644 --- a/noxfile.py +++ b/noxfile.py @@ -1,6 +1,6 @@ # -*- coding: utf-8 -*- # -# Copyright 2018 Google LLC +# Copyright 2024 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -17,90 +17,321 @@ # Generated by synthtool. DO NOT EDIT! 
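# A minimal sketch (illustrative only) for the Type/StructType messages
# defined above in google/cloud/spanner_v1/types/type.py: an ARRAY<STRING>
# column type is expressed by nesting ``array_element_type``. The library's
# handwritten ``param_types`` helpers build messages of this shape.
from google.cloud.spanner_v1.types import Type, TypeCode

example_array_type = Type(
    code=TypeCode.ARRAY,
    array_element_type=Type(code=TypeCode.STRING),
)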
from __future__ import absolute_import + import os +import pathlib +import re import shutil +from typing import Dict, List +import warnings import nox - -BLACK_VERSION = "black==19.3b0" -BLACK_PATHS = ["docs", "google", "tests", "noxfile.py", "setup.py"] - -if os.path.exists("samples"): - BLACK_PATHS.append("samples") - - -@nox.session(python="3.7") +FLAKE8_VERSION = "flake8==6.1.0" +BLACK_VERSION = "black[jupyter]==23.7.0" +ISORT_VERSION = "isort==5.11.0" +LINT_PATHS = ["docs", "google", "tests", "noxfile.py", "setup.py"] + +DEFAULT_PYTHON_VERSION = "3.12" + +DEFAULT_MOCK_SERVER_TESTS_PYTHON_VERSION = "3.12" +SYSTEM_TEST_PYTHON_VERSIONS: List[str] = ["3.12"] + +UNIT_TEST_PYTHON_VERSIONS: List[str] = [ + "3.9", + "3.10", + "3.11", + "3.12", + "3.13", +] +UNIT_TEST_STANDARD_DEPENDENCIES = [ + "mock", + "asyncmock", + "pytest", + "pytest-cov", + "pytest-asyncio", +] +MOCK_SERVER_ADDITIONAL_DEPENDENCIES = [ + "google-cloud-testutils", +] +UNIT_TEST_EXTERNAL_DEPENDENCIES: List[str] = [] +UNIT_TEST_LOCAL_DEPENDENCIES: List[str] = [] +UNIT_TEST_DEPENDENCIES: List[str] = [] +UNIT_TEST_EXTRAS: List[str] = [] +UNIT_TEST_EXTRAS_BY_PYTHON: Dict[str, List[str]] = {} + +SYSTEM_TEST_STANDARD_DEPENDENCIES: List[str] = [ + "mock", + "pytest", + "google-cloud-testutils", +] +SYSTEM_TEST_EXTERNAL_DEPENDENCIES: List[str] = [] +SYSTEM_TEST_LOCAL_DEPENDENCIES: List[str] = [] +SYSTEM_TEST_DEPENDENCIES: List[str] = [] +SYSTEM_TEST_EXTRAS: List[str] = [ + "tracing", +] +SYSTEM_TEST_EXTRAS_BY_PYTHON: Dict[str, List[str]] = {} + +CURRENT_DIRECTORY = pathlib.Path(__file__).parent.absolute() + +nox.options.sessions = [ + "unit-3.9", + "unit-3.10", + "unit-3.11", + "unit-3.12", + "unit-3.13", + "system", + "cover", + "lint", + "lint_setup_py", + "blacken", + "docs", + "docfx", + "format", +] + +# Error if a python version is missing +nox.options.error_on_missing_interpreters = True + + +@nox.session(python=DEFAULT_PYTHON_VERSION) def lint(session): """Run linters. Returns a failure if the linters find linting errors or sufficiently serious code quality issues. """ - session.install("flake8", BLACK_VERSION) - session.run("black", "--check", *BLACK_PATHS) + session.install(FLAKE8_VERSION, BLACK_VERSION) + session.run( + "black", + "--check", + *LINT_PATHS, + ) session.run("flake8", "google", "tests") -@nox.session(python="3.6") +# Use a python runtime which is available in the owlbot post processor here +# https://github.com/googleapis/synthtool/blob/master/docker/owlbot/python/Dockerfile +@nox.session(python=["3.10", DEFAULT_PYTHON_VERSION]) def blacken(session): - """Run black. + """Run black. Format code to uniform standard.""" + session.install(BLACK_VERSION) + session.run( + "black", + *LINT_PATHS, + ) - Format code to uniform standard. - This currently uses Python 3.6 due to the automated Kokoro run of synthtool. - That run uses an image that doesn't have 3.6 installed. Before updating this - check the state of the `gcp_ubuntu_config` we use for that Kokoro run. +@nox.session(python=DEFAULT_PYTHON_VERSION) +def format(session): """ - session.install(BLACK_VERSION) - session.run("black", *BLACK_PATHS) + Run isort to sort imports. Then run black + to format code to uniform standard. + """ + session.install(BLACK_VERSION, ISORT_VERSION) + # Use the --fss option to sort imports using strict alphabetical order. 
+ # See https://pycqa.github.io/isort/docs/configuration/options.html#force-sort-within-sections + session.run( + "isort", + "--fss", + *LINT_PATHS, + ) + session.run( + "black", + *LINT_PATHS, + ) -@nox.session(python="3.7") +@nox.session(python=DEFAULT_PYTHON_VERSION) def lint_setup_py(session): """Verify that setup.py is valid (including RST check).""" - session.install("docutils", "pygments") + session.install("docutils", "pygments", "setuptools>=79.0.1") session.run("python", "setup.py", "check", "--restructuredtext", "--strict") -def default(session): +def install_unittest_dependencies(session, *constraints): + standard_deps = UNIT_TEST_STANDARD_DEPENDENCIES + UNIT_TEST_DEPENDENCIES + session.install(*standard_deps, *constraints) + + if UNIT_TEST_EXTERNAL_DEPENDENCIES: + warnings.warn( + "'unit_test_external_dependencies' is deprecated. Instead, please " + "use 'unit_test_dependencies' or 'unit_test_local_dependencies'.", + DeprecationWarning, + ) + session.install(*UNIT_TEST_EXTERNAL_DEPENDENCIES, *constraints) + + if UNIT_TEST_LOCAL_DEPENDENCIES: + session.install(*UNIT_TEST_LOCAL_DEPENDENCIES, *constraints) + + if UNIT_TEST_EXTRAS_BY_PYTHON: + extras = UNIT_TEST_EXTRAS_BY_PYTHON.get(session.python, []) + elif UNIT_TEST_EXTRAS: + extras = UNIT_TEST_EXTRAS + else: + extras = [] + + if extras: + session.install("-e", f".[{','.join(extras)}]", *constraints) + else: + session.install("-e", ".", *constraints) + + # XXX Work around Kokoro image's older pip, which borks the OT install. + session.run("pip", "install", "--upgrade", "pip") + constraints_path = str( + CURRENT_DIRECTORY / "testing" / f"constraints-{session.python}.txt" + ) + session.install("-e", ".[tracing]", "-c", constraints_path) + # XXX: Dump installed versions to debug OT issue + session.run("pip", "list") + + +@nox.session(python=UNIT_TEST_PYTHON_VERSIONS) +@nox.parametrize( + "protobuf_implementation", + ["python", "upb", "cpp"], +) +def unit(session, protobuf_implementation): # Install all test dependencies, then install this package in-place. - session.install("mock", "pytest", "pytest-cov") - session.install("-e", ".") + + if protobuf_implementation == "cpp" and session.python in ("3.11", "3.12", "3.13"): + session.skip("cpp implementation is not supported in python 3.11+") + + constraints_path = str( + CURRENT_DIRECTORY / "testing" / f"constraints-{session.python}.txt" + ) + install_unittest_dependencies(session, "-c", constraints_path) + + # TODO(https://github.com/googleapis/synthtool/issues/1976): + # Remove the 'cpp' implementation once support for Protobuf 3.x is dropped. + # The 'cpp' implementation requires Protobuf<4. + if protobuf_implementation == "cpp": + session.install("protobuf<4") # Run py.test against the unit tests. session.run( "py.test", "--quiet", - "--cov=google.cloud", - "--cov=tests.unit", + f"--junitxml=unit_{session.python}_sponge_log.xml", + "--cov=google", + "--cov=tests/unit", "--cov-append", "--cov-config=.coveragerc", "--cov-report=", "--cov-fail-under=0", os.path.join("tests", "unit"), *session.posargs, + env={ + "PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION": protobuf_implementation, + }, ) -@nox.session(python=["2.7", "3.5", "3.6", "3.7", "3.8"]) -def unit(session): - """Run the unit test suite.""" - default(session) +@nox.session(python=DEFAULT_MOCK_SERVER_TESTS_PYTHON_VERSION) +def mockserver(session): + # Install all test dependencies, then install this package in-place. 
+ + constraints_path = str( + CURRENT_DIRECTORY / "testing" / f"constraints-{session.python}.txt" + ) + standard_deps = ( + UNIT_TEST_STANDARD_DEPENDENCIES + + UNIT_TEST_DEPENDENCIES + + MOCK_SERVER_ADDITIONAL_DEPENDENCIES + ) + session.install(*standard_deps, "-c", constraints_path) + session.install("-e", ".", "-c", constraints_path) + + # Run py.test against the mockserver tests. + session.run( + "py.test", + "--quiet", + f"--junitxml=unit_{session.python}_sponge_log.xml", + "--cov=google", + "--cov=tests/unit", + "--cov-append", + "--cov-config=.coveragerc", + "--cov-report=", + "--cov-fail-under=0", + os.path.join("tests", "mockserver_tests"), + *session.posargs, + ) -@nox.session(python=["2.7", "3.7"]) -def system(session): +def install_systemtest_dependencies(session, *constraints): + # Use pre-release gRPC for system tests. + # Exclude version 1.52.0rc1 which has a known issue. + # See https://github.com/grpc/grpc/issues/32163 + session.install("--pre", "grpcio!=1.52.0rc1") + + session.install(*SYSTEM_TEST_STANDARD_DEPENDENCIES, *constraints) + + if SYSTEM_TEST_EXTERNAL_DEPENDENCIES: + session.install(*SYSTEM_TEST_EXTERNAL_DEPENDENCIES, *constraints) + + if SYSTEM_TEST_LOCAL_DEPENDENCIES: + session.install("-e", *SYSTEM_TEST_LOCAL_DEPENDENCIES, *constraints) + + if SYSTEM_TEST_DEPENDENCIES: + session.install("-e", *SYSTEM_TEST_DEPENDENCIES, *constraints) + + if SYSTEM_TEST_EXTRAS_BY_PYTHON: + extras = SYSTEM_TEST_EXTRAS_BY_PYTHON.get(session.python, []) + elif SYSTEM_TEST_EXTRAS: + extras = SYSTEM_TEST_EXTRAS + else: + extras = [] + + if extras: + session.install("-e", f".[{','.join(extras)}]", *constraints) + else: + session.install("-e", ".", *constraints) + + +@nox.session(python=SYSTEM_TEST_PYTHON_VERSIONS) +@nox.parametrize( + "protobuf_implementation,database_dialect", + [ + ("python", "GOOGLE_STANDARD_SQL"), + ("python", "POSTGRESQL"), + ("upb", "GOOGLE_STANDARD_SQL"), + ("upb", "POSTGRESQL"), + ("cpp", "GOOGLE_STANDARD_SQL"), + ("cpp", "POSTGRESQL"), + ], +) +def system(session, protobuf_implementation, database_dialect): """Run the system test suite.""" + constraints_path = str( + CURRENT_DIRECTORY / "testing" / f"constraints-{session.python}.txt" + ) system_test_path = os.path.join("tests", "system.py") system_test_folder_path = os.path.join("tests", "system") - # Sanity check: Only run tests if either credentials or emulator host is set. + + # Check the value of `RUN_SYSTEM_TESTS` env var. It defaults to true. + if os.environ.get("RUN_SYSTEM_TESTS", "true") == "false": + session.skip("RUN_SYSTEM_TESTS is set to false, skipping") + # Sanity check: Only run tests if the environment variable is set. if not os.environ.get("GOOGLE_APPLICATION_CREDENTIALS", "") and not os.environ.get( "SPANNER_EMULATOR_HOST", "" ): session.skip( "Credentials or emulator host must be set via environment variable" ) + if not ( + os.environ.get("SPANNER_EMULATOR_HOST") or protobuf_implementation == "python" + ): + session.skip( + "Only run system tests on real Spanner with one protobuf implementation to speed up the build" + ) + + if protobuf_implementation == "cpp" and session.python in ("3.11", "3.12", "3.13"): + session.skip("cpp implementation is not supported in python 3.11+") + + # Install pyopenssl for mTLS testing. 
+ if os.environ.get("GOOGLE_API_USE_CLIENT_CERTIFICATE", "false") == "true": + session.install("pyopenssl") system_test_exists = os.path.exists(system_test_path) system_test_folder_exists = os.path.exists(system_test_folder_path) @@ -108,24 +339,44 @@ def system(session): if not system_test_exists and not system_test_folder_exists: session.skip("System tests were not found") - # Use pre-release gRPC for system tests. - session.install("--pre", "grpcio") - - # Install all test dependencies, then install this package into the - # virtualenv's dist-packages. - session.install("mock", "pytest") + install_systemtest_dependencies(session, "-c", constraints_path) - session.install("-e", ".") - session.install("-e", "test_utils/") + # TODO(https://github.com/googleapis/synthtool/issues/1976): + # Remove the 'cpp' implementation once support for Protobuf 3.x is dropped. + # The 'cpp' implementation requires Protobuf<4. + if protobuf_implementation == "cpp": + session.install("protobuf<4") # Run py.test against the system tests. if system_test_exists: - session.run("py.test", "--quiet", system_test_path, *session.posargs) - if system_test_folder_exists: - session.run("py.test", "--quiet", system_test_folder_path, *session.posargs) + session.run( + "py.test", + "--quiet", + f"--junitxml=system_{session.python}_sponge_log.xml", + system_test_path, + *session.posargs, + env={ + "PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION": protobuf_implementation, + "SPANNER_DATABASE_DIALECT": database_dialect, + "SKIP_BACKUP_TESTS": "true", + }, + ) + elif system_test_folder_exists: + session.run( + "py.test", + "--quiet", + f"--junitxml=system_{session.python}_sponge_log.xml", + system_test_folder_path, + *session.posargs, + env={ + "PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION": protobuf_implementation, + "SPANNER_DATABASE_DIALECT": database_dialect, + "SKIP_BACKUP_TESTS": "true", + }, + ) -@nox.session(python="3.7") +@nox.session(python=DEFAULT_PYTHON_VERSION) def cover(session): """Run the final coverage report. @@ -133,17 +384,30 @@ def cover(session): test runs (not system test runs), and then erases coverage data. """ session.install("coverage", "pytest-cov") - session.run("coverage", "report", "--show-missing", "--fail-under=99") + session.run("coverage", "report", "--show-missing", "--fail-under=98") session.run("coverage", "erase") -@nox.session(python="3.7") +@nox.session(python="3.10") def docs(session): """Build the docs for this library.""" - session.install("-e", ".") - session.install("sphinx", "alabaster", "recommonmark") + session.install("-e", ".[tracing]") + session.install( + # We need to pin to specific versions of the `sphinxcontrib-*` packages + # which still support sphinx 4.x. + # See https://github.com/googleapis/sphinx-docfx-yaml/issues/344 + # and https://github.com/googleapis/sphinx-docfx-yaml/issues/345. + "sphinxcontrib-applehelp==1.0.4", + "sphinxcontrib-devhelp==1.0.2", + "sphinxcontrib-htmlhelp==2.0.1", + "sphinxcontrib-qthelp==1.0.3", + "sphinxcontrib-serializinghtml==1.1.5", + "sphinx==4.5.0", + "alabaster", + "recommonmark", + ) shutil.rmtree(os.path.join("docs", "_build"), ignore_errors=True) session.run( @@ -158,3 +422,174 @@ def docs(session): os.path.join("docs", ""), os.path.join("docs", "_build", "html", ""), ) + + +@nox.session(python="3.10") +def docfx(session): + """Build the docfx yaml files for this library.""" + + session.install("-e", ".[tracing]") + session.install( + # We need to pin to specific versions of the `sphinxcontrib-*` packages + # which still support sphinx 4.x. 
+ # See https://github.com/googleapis/sphinx-docfx-yaml/issues/344 + # and https://github.com/googleapis/sphinx-docfx-yaml/issues/345. + "sphinxcontrib-applehelp==1.0.4", + "sphinxcontrib-devhelp==1.0.2", + "sphinxcontrib-htmlhelp==2.0.1", + "sphinxcontrib-qthelp==1.0.3", + "sphinxcontrib-serializinghtml==1.1.5", + "gcp-sphinx-docfx-yaml", + "alabaster", + "recommonmark", + ) + + shutil.rmtree(os.path.join("docs", "_build"), ignore_errors=True) + session.run( + "sphinx-build", + "-T", # show full traceback on exception + "-N", # no colors + "-D", + ( + "extensions=sphinx.ext.autodoc," + "sphinx.ext.autosummary," + "docfx_yaml.extension," + "sphinx.ext.intersphinx," + "sphinx.ext.coverage," + "sphinx.ext.napoleon," + "sphinx.ext.todo," + "sphinx.ext.viewcode," + "recommonmark" + ), + "-b", + "html", + "-d", + os.path.join("docs", "_build", "doctrees", ""), + os.path.join("docs", ""), + os.path.join("docs", "_build", "html", ""), + ) + + +@nox.session(python="3.13") +@nox.parametrize( + "protobuf_implementation,database_dialect", + [ + ("python", "GOOGLE_STANDARD_SQL"), + ("python", "POSTGRESQL"), + ("upb", "GOOGLE_STANDARD_SQL"), + ("upb", "POSTGRESQL"), + ("cpp", "GOOGLE_STANDARD_SQL"), + ("cpp", "POSTGRESQL"), + ], +) +def prerelease_deps(session, protobuf_implementation, database_dialect): + """Run all tests with prerelease versions of dependencies installed.""" + + if protobuf_implementation == "cpp" and session.python in ("3.11", "3.12", "3.13"): + session.skip("cpp implementation is not supported in python 3.11+") + + # Install all dependencies + session.install("-e", ".[all, tests, tracing]") + unit_deps_all = UNIT_TEST_STANDARD_DEPENDENCIES + UNIT_TEST_EXTERNAL_DEPENDENCIES + session.install(*unit_deps_all) + system_deps_all = ( + SYSTEM_TEST_STANDARD_DEPENDENCIES + SYSTEM_TEST_EXTERNAL_DEPENDENCIES + ) + session.install(*system_deps_all) + + # Because we test minimum dependency versions on the minimum Python + # version, the first version we test with in the unit tests sessions has a + # constraints file containing all dependencies and extras. + with open( + CURRENT_DIRECTORY + / "testing" + / f"constraints-{UNIT_TEST_PYTHON_VERSIONS[0]}.txt", + encoding="utf-8", + ) as constraints_file: + constraints_text = constraints_file.read() + + # Ignore leading whitespace and comment lines. 
+ constraints_deps = [ + match.group(1) + for match in re.finditer( + r"^\s*([a-zA-Z0-9._-]+)", constraints_text, flags=re.MULTILINE + ) + ] + + if constraints_deps: + session.install(*constraints_deps) + + prerel_deps = [ + "protobuf", + # dependency of grpc + "six", + "grpc-google-iam-v1", + "googleapis-common-protos", + "grpcio", + "grpcio-status", + "google-api-core", + "google-auth", + "proto-plus", + "google-cloud-testutils", + # dependencies of google-cloud-testutils" + "click", + ] + + for dep in prerel_deps: + session.install("--pre", "--no-deps", "--upgrade", dep) + + # Remaining dependencies + other_deps = [ + "requests", + ] + session.install(*other_deps) + + # Print out prerelease package versions + session.run( + "python", "-c", "import google.protobuf; print(google.protobuf.__version__)" + ) + session.run("python", "-c", "import grpc; print(grpc.__version__)") + session.run("python", "-c", "import google.auth; print(google.auth.__version__)") + + session.run( + "py.test", + "tests/unit", + env={ + "PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION": protobuf_implementation, + "SPANNER_DATABASE_DIALECT": database_dialect, + "SKIP_BACKUP_TESTS": "true", + }, + ) + + system_test_path = os.path.join("tests", "system.py") + system_test_folder_path = os.path.join("tests", "system") + + # Only run system tests for one protobuf implementation on real Spanner to speed up the build. + if os.environ.get("SPANNER_EMULATOR_HOST") or protobuf_implementation == "python": + # Only run system tests if found. + if os.path.exists(system_test_path): + session.run( + "py.test", + "--verbose", + f"--junitxml=system_{session.python}_sponge_log.xml", + system_test_path, + *session.posargs, + env={ + "PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION": protobuf_implementation, + "SPANNER_DATABASE_DIALECT": database_dialect, + "SKIP_BACKUP_TESTS": "true", + }, + ) + elif os.path.exists(system_test_folder_path): + session.run( + "py.test", + "--verbose", + f"--junitxml=system_{session.python}_sponge_log.xml", + system_test_folder_path, + *session.posargs, + env={ + "PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION": protobuf_implementation, + "SPANNER_DATABASE_DIALECT": database_dialect, + "SKIP_BACKUP_TESTS": "true", + }, + ) diff --git a/owlbot.py b/owlbot.py new file mode 100644 index 0000000000..cf460877a3 --- /dev/null +++ b/owlbot.py @@ -0,0 +1,277 @@ +# Copyright 2018 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""This script is used to synthesize generated parts of this library.""" + +from pathlib import Path +import shutil +from typing import List, Optional + +import synthtool as s +from synthtool import gcp +from synthtool.languages import python + +common = gcp.CommonTemplates() + + +def get_staging_dirs( + # This is a customized version of the s.get_staging_dirs() function + # from synthtool to # cater for copying 3 different folders from + # googleapis-gen: + # spanner, spanner/admin/instance and spanner/admin/database. 
+    # Source:
+    # https://github.com/googleapis/synthtool/blob/master/synthtool/transforms.py#L280
+    default_version: Optional[str] = None,
+    sub_directory: Optional[str] = None,
+) -> List[Path]:
+    """Returns the list of directories, one per version, copied from
+    https://github.com/googleapis/googleapis-gen. Will return in lexical sorting
+    order with the exception of the default_version which will be last (if specified).
+
+    Args:
+        default_version (str): the default version of the API. The directory for this version
+            will be the last item in the returned list if specified.
+        sub_directory (str): if a `sub_directory` is provided, only the directories within the
+            specified `sub_directory` will be returned.
+
+    Returns: the empty list if no files were copied.
+    """
+
+    staging = Path("owl-bot-staging")
+
+    if sub_directory:
+        staging /= sub_directory
+
+    if staging.is_dir():
+        # Collect the subdirectories of the staging directory.
+        versions = [v.name for v in staging.iterdir() if v.is_dir()]
+        # Reorder the versions so the default version always comes last.
+        versions = [v for v in versions if v != default_version]
+        versions.sort()
+        if default_version is not None:
+            versions += [default_version]
+        dirs = [staging / v for v in versions]
+        for dir in dirs:
+            s._tracked_paths.add(dir)
+        return dirs
+    else:
+        return []
+
+
+spanner_default_version = "v1"
+spanner_admin_instance_default_version = "v1"
+spanner_admin_database_default_version = "v1"
+
+clean_up_generated_samples = True
+
+for library in get_staging_dirs(spanner_default_version, "spanner"):
+    if clean_up_generated_samples:
+        shutil.rmtree("samples/generated_samples", ignore_errors=True)
+        clean_up_generated_samples = False
+
+    # Customization for MetricsInterceptor
+
+    assert 6 == s.replace(
+        [
+            library / "google/cloud/spanner_v1/services/spanner/transports/*.py",
+            library / "google/cloud/spanner_v1/services/spanner/client.py",
+        ],
+        """from google.cloud.spanner_v1.types import transaction""",
+        """from google.cloud.spanner_v1.types import transaction
+from google.cloud.spanner_v1.metrics.metrics_interceptor import MetricsInterceptor""",
+    )
+
+    assert 1 == s.replace(
+        library / "google/cloud/spanner_v1/services/spanner/transports/*.py",
+        """api_audience: Optional\[str\] = None,
+        \*\*kwargs,
+    \) -> None:
+        \"\"\"Instantiate the transport.""",
+"""api_audience: Optional[str] = None,
+        metrics_interceptor: Optional[MetricsInterceptor] = None,
+        **kwargs,
+    ) -> None:
+        \"\"\"Instantiate the transport."""
+    )
+
+    assert 4 == s.replace(
+        library / "google/cloud/spanner_v1/services/spanner/transports/*.py",
+        """api_audience: Optional\[str\] = None,
+    \) -> None:
+        \"\"\"Instantiate the transport.""",
+"""api_audience: Optional[str] = None,
+        metrics_interceptor: Optional[MetricsInterceptor] = None,
+    ) -> None:
+        \"\"\"Instantiate the transport."""
+    )
+
+    assert 1 == s.replace(
+        library / "google/cloud/spanner_v1/services/spanner/transports/grpc.py",
+        """\)\n\n        self._interceptor = _LoggingClientInterceptor\(\)""",
+        """)
+
+        # Wrap the gRPC channel with the metric interceptor
+        if metrics_interceptor is not None:
+            self._metrics_interceptor = metrics_interceptor
+            self._grpc_channel = grpc.intercept_channel(
+                self._grpc_channel, metrics_interceptor
+            )
+
+        self._interceptor = _LoggingClientInterceptor()"""
+    )
+
+    assert 1 == s.replace(
+        library / "google/cloud/spanner_v1/services/spanner/transports/grpc.py",
+        """self._stubs: Dict\[str, Callable\] = \{\}\n\n        if api_mtls_endpoint:""",
+        """self._stubs: Dict[str,
Callable] = {} + self._metrics_interceptor = None + + if api_mtls_endpoint:""" + ) + + assert 1 == s.replace( + library / "google/cloud/spanner_v1/services/spanner/client.py", + """# initialize with the provided callable or the passed in class + self._transport = transport_init\( + credentials=credentials, + credentials_file=self._client_options.credentials_file, + host=self._api_endpoint, + scopes=self._client_options.scopes, + client_cert_source_for_mtls=self._client_cert_source, + quota_project_id=self._client_options.quota_project_id, + client_info=client_info, + always_use_jwt_access=True, + api_audience=self._client_options.api_audience, + \)""", + """# initialize with the provided callable or the passed in class + self._transport = transport_init( + credentials=credentials, + credentials_file=self._client_options.credentials_file, + host=self._api_endpoint, + scopes=self._client_options.scopes, + client_cert_source_for_mtls=self._client_cert_source, + quota_project_id=self._client_options.quota_project_id, + client_info=client_info, + always_use_jwt_access=True, + api_audience=self._client_options.api_audience, + metrics_interceptor=MetricsInterceptor(), + )""", + ) + + assert 12 == s.replace( + library / "tests/unit/gapic/spanner_v1/test_spanner.py", + """api_audience=None,\n(\s+)\)""", + """api_audience=None, + metrics_interceptor=mock.ANY, + )""" + ) + + assert 1 == s.replace( + library / "tests/unit/gapic/spanner_v1/test_spanner.py", + """api_audience="https://language.googleapis.com"\n(\s+)\)""", + """api_audience="https://language.googleapis.com", + metrics_interceptor=mock.ANY, + )""" + ) + + s.move( + library, + excludes=[ + "google/cloud/spanner/**", + "*.*", + "noxfile.py", + "docs/index.rst", + "google/cloud/spanner_v1/__init__.py", + "**/gapic_version.py", + "testing/constraints-3.7.txt", + ], + ) + +for library in get_staging_dirs( + spanner_admin_instance_default_version, "spanner_admin_instance" +): + s.move( + library, + excludes=["google/cloud/spanner_admin_instance/**", "*.*", "docs/index.rst", "noxfile.py", "**/gapic_version.py", "testing/constraints-3.7.txt",], + ) + +for library in get_staging_dirs( + spanner_admin_database_default_version, "spanner_admin_database" +): + s.move( + library, + excludes=["google/cloud/spanner_admin_database/**", "*.*", "docs/index.rst", "noxfile.py", "**/gapic_version.py", "testing/constraints-3.7.txt",], + ) + +s.remove_staging_dirs() + +# ---------------------------------------------------------------------------- +# Add templated files +# ---------------------------------------------------------------------------- +templated_files = common.py_library( + microgenerator=True, + samples=True, + cov_level=98, + split_system_tests=True, + system_test_extras=["tracing"], + system_test_python_versions=["3.12"] +) +s.move( + templated_files, + excludes=[ + ".coveragerc", + ".github/workflows", # exclude gh actions as credentials are needed for tests + "README.rst", + ".github/release-please.yml", + ".kokoro/test-samples-impl.sh", + ".kokoro/presubmit/presubmit.cfg", + ".kokoro/samples/python3.7/**", + ".kokoro/samples/python3.8/**", + ], +) + +# Ensure CI runs on a new instance each time +s.replace( + ".kokoro/build.sh", + "# Setup project id.", + """\ +# Set up creating a new instance for each system test run +export GOOGLE_CLOUD_TESTS_CREATE_SPANNER_INSTANCE=true + +# Setup project id.""", +) + +# Update samples folder in CONTRIBUTING.rst +s.replace("CONTRIBUTING.rst", "samples/snippets", "samples/samples") + +# 
---------------------------------------------------------------------------- +# Samples templates +# ---------------------------------------------------------------------------- + +python.py_samples() + +s.replace( + "samples/**/noxfile.py", + 'BLACK_VERSION = "black==22.3.0"', + 'BLACK_VERSION = "black==23.7.0"', +) +s.replace( + "samples/**/noxfile.py", + r'ALL_VERSIONS = \["3.7", "3.8", "3.9", "3.10", "3.11", "3.12", "3.13"\]', + 'ALL_VERSIONS = ["3.9", "3.10", "3.11", "3.12", "3.13"]', +) + +# Use a python runtime which is available in the owlbot post processor here +# https://github.com/googleapis/synthtool/blob/master/docker/owlbot/python/Dockerfile +s.shell.run(["nox", "-s", "blacken-3.10"], hide_output=False) diff --git a/pylint.config.py b/pylint.config.py deleted file mode 100644 index f7928f6760..0000000000 --- a/pylint.config.py +++ /dev/null @@ -1,33 +0,0 @@ -# Copyright 2017 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""This module is used to configure gcp-devrel-py-tools run-pylint.""" - -import copy - -from gcp_devrel.tools import pylint - -# Library configuration - -# library_additions = {} -# Ignore generated code -library_replacements = copy.deepcopy(pylint.DEFAULT_LIBRARY_RC_REPLACEMENTS) -library_replacements['MASTER']['ignore'].append('spanner_v1') -library_replacements['MASTER']['ignore'].append('spanner_admin_instance_v1') -library_replacements['MASTER']['ignore'].append('spanner_admin_database_v1') - -# Test configuration - -# test_additions = copy.deepcopy(library_additions) -# test_replacements = copy.deepcopy(library_replacements) diff --git a/release-please-config.json b/release-please-config.json new file mode 100644 index 0000000000..faae5c405c --- /dev/null +++ b/release-please-config.json @@ -0,0 +1,35 @@ +{ + "$schema": "https://raw.githubusercontent.com/googleapis/release-please/main/schemas/config.json", + "packages": { + ".": { + "release-type": "python", + "extra-files": [ + "google/cloud/spanner_admin_instance_v1/gapic_version.py", + "google/cloud/spanner_v1/gapic_version.py", + "google/cloud/spanner_admin_database_v1/gapic_version.py", + { + "type": "json", + "path": "samples/generated_samples/snippet_metadata_google.spanner.v1.json", + "jsonpath": "$.clientLibrary.version" + }, + { + "type": "json", + "path": "samples/generated_samples/snippet_metadata_google.spanner.admin.instance.v1.json", + "jsonpath": "$.clientLibrary.version" + }, + { + "type": "json", + "path": "samples/generated_samples/snippet_metadata_google.spanner.admin.database.v1.json", + "jsonpath": "$.clientLibrary.version" + } + ] + } + }, + "release-type": "python", + "plugins": [ + { + "type": "sentence-case" + } + ], + "initial-version": "0.1.0" +} diff --git a/renovate.json b/renovate.json index 4fa949311b..c7875c469b 100644 --- a/renovate.json +++ b/renovate.json @@ -1,5 +1,12 @@ { "extends": [ - "config:base", ":preserveSemverRanges" - ] + "config:base", + "group:all", + ":preserveSemverRanges", + ":disableDependencyDashboard" + ], + "ignorePaths": 
[".pre-commit-config.yaml", ".kokoro/requirements.txt", "setup.py", ".github/workflows/unittest.yml"], + "pip_requirements": { + "fileMatch": ["requirements-test.txt", "samples/[\\S/]*constraints.txt", "samples/[\\S/]*constraints-test.txt"] + } } diff --git a/samples/AUTHORING_GUIDE.md b/samples/AUTHORING_GUIDE.md new file mode 100644 index 0000000000..8249522ffc --- /dev/null +++ b/samples/AUTHORING_GUIDE.md @@ -0,0 +1 @@ +See https://github.com/GoogleCloudPlatform/python-docs-samples/blob/main/AUTHORING_GUIDE.md \ No newline at end of file diff --git a/samples/CONTRIBUTING.md b/samples/CONTRIBUTING.md new file mode 100644 index 0000000000..f5fe2e6baf --- /dev/null +++ b/samples/CONTRIBUTING.md @@ -0,0 +1 @@ +See https://github.com/GoogleCloudPlatform/python-docs-samples/blob/main/CONTRIBUTING.md \ No newline at end of file diff --git a/samples/generated_samples/snippet_metadata_google.spanner.admin.database.v1.json b/samples/generated_samples/snippet_metadata_google.spanner.admin.database.v1.json new file mode 100644 index 0000000000..d10e70605f --- /dev/null +++ b/samples/generated_samples/snippet_metadata_google.spanner.admin.database.v1.json @@ -0,0 +1,4480 @@ +{ + "clientLibrary": { + "apis": [ + { + "id": "google.spanner.admin.database.v1", + "version": "v1" + } + ], + "language": "PYTHON", + "name": "google-cloud-spanner-admin-database", + "version": "3.58.0" + }, + "snippets": [ + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.add_split_points", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.AddSplitPoints", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "AddSplitPoints" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.AddSplitPointsRequest" + }, + { + "name": "database", + "type": "str" + }, + { + "name": "split_points", + "type": "MutableSequence[google.cloud.spanner_admin_database_v1.types.SplitPoints]" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.types.AddSplitPointsResponse", + "shortName": "add_split_points" + }, + "description": "Sample for AddSplitPoints", + "file": "spanner_v1_generated_database_admin_add_split_points_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_AddSplitPoints_async", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_add_split_points_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": 
"google.cloud.spanner_admin_database_v1.DatabaseAdminClient.add_split_points", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.AddSplitPoints", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "AddSplitPoints" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.AddSplitPointsRequest" + }, + { + "name": "database", + "type": "str" + }, + { + "name": "split_points", + "type": "MutableSequence[google.cloud.spanner_admin_database_v1.types.SplitPoints]" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.types.AddSplitPointsResponse", + "shortName": "add_split_points" + }, + "description": "Sample for AddSplitPoints", + "file": "spanner_v1_generated_database_admin_add_split_points_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_AddSplitPoints_sync", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_add_split_points_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.copy_backup", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.CopyBackup", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "CopyBackup" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.CopyBackupRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "backup_id", + "type": "str" + }, + { + "name": "source_backup", + "type": "str" + }, + { + "name": "expire_time", + "type": "google.protobuf.timestamp_pb2.Timestamp" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation_async.AsyncOperation", + "shortName": "copy_backup" + }, + "description": "Sample for CopyBackup", + "file": "spanner_v1_generated_database_admin_copy_backup_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_CopyBackup_async", + "segments": [ + { + "end": 57, + "start": 27, + "type": "FULL" + }, + { + "end": 57, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 47, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 54, + "start": 48, + "type": "REQUEST_EXECUTION" + }, + { + "end": 58, + "start": 55, + "type": "RESPONSE_HANDLING" + } + ], + "title": 
"spanner_v1_generated_database_admin_copy_backup_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.copy_backup", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.CopyBackup", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "CopyBackup" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.CopyBackupRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "backup_id", + "type": "str" + }, + { + "name": "source_backup", + "type": "str" + }, + { + "name": "expire_time", + "type": "google.protobuf.timestamp_pb2.Timestamp" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation.Operation", + "shortName": "copy_backup" + }, + "description": "Sample for CopyBackup", + "file": "spanner_v1_generated_database_admin_copy_backup_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_CopyBackup_sync", + "segments": [ + { + "end": 57, + "start": 27, + "type": "FULL" + }, + { + "end": 57, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 47, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 54, + "start": 48, + "type": "REQUEST_EXECUTION" + }, + { + "end": 58, + "start": 55, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_copy_backup_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.create_backup_schedule", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.CreateBackupSchedule", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "CreateBackupSchedule" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.CreateBackupScheduleRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "backup_schedule", + "type": "google.cloud.spanner_admin_database_v1.types.BackupSchedule" + }, + { + "name": "backup_schedule_id", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.types.BackupSchedule", + "shortName": "create_backup_schedule" + }, + "description": "Sample for CreateBackupSchedule", + "file": "spanner_v1_generated_database_admin_create_backup_schedule_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_CreateBackupSchedule_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, 
+ { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_create_backup_schedule_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.create_backup_schedule", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.CreateBackupSchedule", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "CreateBackupSchedule" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.CreateBackupScheduleRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "backup_schedule", + "type": "google.cloud.spanner_admin_database_v1.types.BackupSchedule" + }, + { + "name": "backup_schedule_id", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.types.BackupSchedule", + "shortName": "create_backup_schedule" + }, + "description": "Sample for CreateBackupSchedule", + "file": "spanner_v1_generated_database_admin_create_backup_schedule_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_CreateBackupSchedule_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_create_backup_schedule_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.create_backup", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "CreateBackup" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.CreateBackupRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "backup", + "type": "google.cloud.spanner_admin_database_v1.types.Backup" + }, + { + "name": "backup_id", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation_async.AsyncOperation", + "shortName": "create_backup" + }, + "description": "Sample for CreateBackup", + "file": 
"spanner_v1_generated_database_admin_create_backup_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_CreateBackup_async", + "segments": [ + { + "end": 56, + "start": 27, + "type": "FULL" + }, + { + "end": 56, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 53, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 57, + "start": 54, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_create_backup_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.create_backup", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.CreateBackup", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "CreateBackup" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.CreateBackupRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "backup", + "type": "google.cloud.spanner_admin_database_v1.types.Backup" + }, + { + "name": "backup_id", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation.Operation", + "shortName": "create_backup" + }, + "description": "Sample for CreateBackup", + "file": "spanner_v1_generated_database_admin_create_backup_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_CreateBackup_sync", + "segments": [ + { + "end": 56, + "start": 27, + "type": "FULL" + }, + { + "end": 56, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 53, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 57, + "start": 54, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_create_backup_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.create_database", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "CreateDatabase" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.CreateDatabaseRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "create_statement", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation_async.AsyncOperation", + 
"shortName": "create_database" + }, + "description": "Sample for CreateDatabase", + "file": "spanner_v1_generated_database_admin_create_database_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_CreateDatabase_async", + "segments": [ + { + "end": 56, + "start": 27, + "type": "FULL" + }, + { + "end": 56, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 53, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 57, + "start": 54, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_create_database_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.create_database", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "CreateDatabase" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.CreateDatabaseRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "create_statement", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation.Operation", + "shortName": "create_database" + }, + "description": "Sample for CreateDatabase", + "file": "spanner_v1_generated_database_admin_create_database_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_CreateDatabase_sync", + "segments": [ + { + "end": 56, + "start": 27, + "type": "FULL" + }, + { + "end": 56, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 53, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 57, + "start": 54, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_create_database_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.delete_backup_schedule", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.DeleteBackupSchedule", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "DeleteBackupSchedule" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.DeleteBackupScheduleRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "shortName": "delete_backup_schedule" + }, + "description": "Sample for 
DeleteBackupSchedule", + "file": "spanner_v1_generated_database_admin_delete_backup_schedule_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_DeleteBackupSchedule_async", + "segments": [ + { + "end": 49, + "start": 27, + "type": "FULL" + }, + { + "end": 49, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_delete_backup_schedule_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.delete_backup_schedule", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.DeleteBackupSchedule", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "DeleteBackupSchedule" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.DeleteBackupScheduleRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "shortName": "delete_backup_schedule" + }, + "description": "Sample for DeleteBackupSchedule", + "file": "spanner_v1_generated_database_admin_delete_backup_schedule_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_DeleteBackupSchedule_sync", + "segments": [ + { + "end": 49, + "start": 27, + "type": "FULL" + }, + { + "end": 49, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_delete_backup_schedule_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.delete_backup", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.DeleteBackup", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "DeleteBackup" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.DeleteBackupRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "shortName": "delete_backup" + }, + "description": "Sample for DeleteBackup", + "file": "spanner_v1_generated_database_admin_delete_backup_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": 
"spanner_v1_generated_DatabaseAdmin_DeleteBackup_async", + "segments": [ + { + "end": 49, + "start": 27, + "type": "FULL" + }, + { + "end": 49, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_delete_backup_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.delete_backup", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.DeleteBackup", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "DeleteBackup" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.DeleteBackupRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "shortName": "delete_backup" + }, + "description": "Sample for DeleteBackup", + "file": "spanner_v1_generated_database_admin_delete_backup_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_DeleteBackup_sync", + "segments": [ + { + "end": 49, + "start": 27, + "type": "FULL" + }, + { + "end": 49, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_delete_backup_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.drop_database", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.DropDatabase", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "DropDatabase" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.DropDatabaseRequest" + }, + { + "name": "database", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "shortName": "drop_database" + }, + "description": "Sample for DropDatabase", + "file": "spanner_v1_generated_database_admin_drop_database_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_DropDatabase_async", + "segments": [ + { + "end": 49, + "start": 27, + "type": "FULL" + }, + { + "end": 49, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": 
"REQUEST_INITIALIZATION" + }, + { + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_drop_database_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.drop_database", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.DropDatabase", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "DropDatabase" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.DropDatabaseRequest" + }, + { + "name": "database", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "shortName": "drop_database" + }, + "description": "Sample for DropDatabase", + "file": "spanner_v1_generated_database_admin_drop_database_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_DropDatabase_sync", + "segments": [ + { + "end": 49, + "start": 27, + "type": "FULL" + }, + { + "end": 49, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_drop_database_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.get_backup_schedule", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.GetBackupSchedule", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "GetBackupSchedule" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.GetBackupScheduleRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.types.BackupSchedule", + "shortName": "get_backup_schedule" + }, + "description": "Sample for GetBackupSchedule", + "file": "spanner_v1_generated_database_admin_get_backup_schedule_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_GetBackupSchedule_async", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + 
"title": "spanner_v1_generated_database_admin_get_backup_schedule_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.get_backup_schedule", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.GetBackupSchedule", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "GetBackupSchedule" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.GetBackupScheduleRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.types.BackupSchedule", + "shortName": "get_backup_schedule" + }, + "description": "Sample for GetBackupSchedule", + "file": "spanner_v1_generated_database_admin_get_backup_schedule_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_GetBackupSchedule_sync", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_get_backup_schedule_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.get_backup", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.GetBackup", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "GetBackup" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.GetBackupRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.types.Backup", + "shortName": "get_backup" + }, + "description": "Sample for GetBackup", + "file": "spanner_v1_generated_database_admin_get_backup_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_GetBackup_async", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": 
"spanner_v1_generated_database_admin_get_backup_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.get_backup", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.GetBackup", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "GetBackup" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.GetBackupRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.types.Backup", + "shortName": "get_backup" + }, + "description": "Sample for GetBackup", + "file": "spanner_v1_generated_database_admin_get_backup_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_GetBackup_sync", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_get_backup_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.get_database_ddl", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "GetDatabaseDdl" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.GetDatabaseDdlRequest" + }, + { + "name": "database", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.types.GetDatabaseDdlResponse", + "shortName": "get_database_ddl" + }, + "description": "Sample for GetDatabaseDdl", + "file": "spanner_v1_generated_database_admin_get_database_ddl_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_GetDatabaseDdl_async", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_get_database_ddl_async.py" + }, + { + "canonical": true, 
+ "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.get_database_ddl", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.GetDatabaseDdl", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "GetDatabaseDdl" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.GetDatabaseDdlRequest" + }, + { + "name": "database", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.types.GetDatabaseDdlResponse", + "shortName": "get_database_ddl" + }, + "description": "Sample for GetDatabaseDdl", + "file": "spanner_v1_generated_database_admin_get_database_ddl_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_GetDatabaseDdl_sync", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_get_database_ddl_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.get_database", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.GetDatabase", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "GetDatabase" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.GetDatabaseRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.types.Database", + "shortName": "get_database" + }, + "description": "Sample for GetDatabase", + "file": "spanner_v1_generated_database_admin_get_database_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_GetDatabase_async", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_get_database_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": 
"google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.get_database", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.GetDatabase", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "GetDatabase" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.GetDatabaseRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.types.Database", + "shortName": "get_database" + }, + "description": "Sample for GetDatabase", + "file": "spanner_v1_generated_database_admin_get_database_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_GetDatabase_sync", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_get_database_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.get_iam_policy", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.GetIamPolicy", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "GetIamPolicy" + }, + "parameters": [ + { + "name": "request", + "type": "google.iam.v1.iam_policy_pb2.GetIamPolicyRequest" + }, + { + "name": "resource", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.iam.v1.policy_pb2.Policy", + "shortName": "get_iam_policy" + }, + "description": "Sample for GetIamPolicy", + "file": "spanner_v1_generated_database_admin_get_iam_policy_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_GetIamPolicy_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 41, + "start": 39, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 42, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_get_iam_policy_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": 
"google.cloud.spanner_admin_database_v1.DatabaseAdminClient.get_iam_policy", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.GetIamPolicy", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "GetIamPolicy" + }, + "parameters": [ + { + "name": "request", + "type": "google.iam.v1.iam_policy_pb2.GetIamPolicyRequest" + }, + { + "name": "resource", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.iam.v1.policy_pb2.Policy", + "shortName": "get_iam_policy" + }, + "description": "Sample for GetIamPolicy", + "file": "spanner_v1_generated_database_admin_get_iam_policy_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_GetIamPolicy_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 41, + "start": 39, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 42, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_get_iam_policy_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.internal_update_graph_operation", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.InternalUpdateGraphOperation", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "InternalUpdateGraphOperation" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.InternalUpdateGraphOperationRequest" + }, + { + "name": "database", + "type": "str" + }, + { + "name": "operation_id", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.types.InternalUpdateGraphOperationResponse", + "shortName": "internal_update_graph_operation" + }, + "description": "Sample for InternalUpdateGraphOperation", + "file": "spanner_v1_generated_database_admin_internal_update_graph_operation_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_InternalUpdateGraphOperation_async", + "segments": [ + { + "end": 53, + "start": 27, + "type": "FULL" + }, + { + "end": 53, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 47, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 50, + "start": 48, + "type": "REQUEST_EXECUTION" + }, + { + "end": 54, + "start": 51, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_internal_update_graph_operation_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": 
"google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.internal_update_graph_operation", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.InternalUpdateGraphOperation", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "InternalUpdateGraphOperation" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.InternalUpdateGraphOperationRequest" + }, + { + "name": "database", + "type": "str" + }, + { + "name": "operation_id", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.types.InternalUpdateGraphOperationResponse", + "shortName": "internal_update_graph_operation" + }, + "description": "Sample for InternalUpdateGraphOperation", + "file": "spanner_v1_generated_database_admin_internal_update_graph_operation_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_InternalUpdateGraphOperation_sync", + "segments": [ + { + "end": 53, + "start": 27, + "type": "FULL" + }, + { + "end": 53, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 47, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 50, + "start": 48, + "type": "REQUEST_EXECUTION" + }, + { + "end": 54, + "start": 51, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_internal_update_graph_operation_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.list_backup_operations", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "ListBackupOperations" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.ListBackupOperationsRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListBackupOperationsAsyncPager", + "shortName": "list_backup_operations" + }, + "description": "Sample for ListBackupOperations", + "file": "spanner_v1_generated_database_admin_list_backup_operations_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_ListBackupOperations_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + 
"type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_list_backup_operations_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.list_backup_operations", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.ListBackupOperations", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "ListBackupOperations" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.ListBackupOperationsRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListBackupOperationsPager", + "shortName": "list_backup_operations" + }, + "description": "Sample for ListBackupOperations", + "file": "spanner_v1_generated_database_admin_list_backup_operations_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_ListBackupOperations_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_list_backup_operations_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.list_backup_schedules", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.ListBackupSchedules", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "ListBackupSchedules" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.ListBackupSchedulesRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListBackupSchedulesAsyncPager", + "shortName": "list_backup_schedules" + }, + "description": "Sample for ListBackupSchedules", + "file": "spanner_v1_generated_database_admin_list_backup_schedules_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_ListBackupSchedules_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + 
"type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_list_backup_schedules_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.list_backup_schedules", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.ListBackupSchedules", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "ListBackupSchedules" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.ListBackupSchedulesRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListBackupSchedulesPager", + "shortName": "list_backup_schedules" + }, + "description": "Sample for ListBackupSchedules", + "file": "spanner_v1_generated_database_admin_list_backup_schedules_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_ListBackupSchedules_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_list_backup_schedules_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.list_backups", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.ListBackups", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "ListBackups" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.ListBackupsRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListBackupsAsyncPager", + "shortName": "list_backups" + }, + "description": "Sample for ListBackups", + "file": "spanner_v1_generated_database_admin_list_backups_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_ListBackups_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + 
"type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_list_backups_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.list_backups", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.ListBackups", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "ListBackups" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.ListBackupsRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListBackupsPager", + "shortName": "list_backups" + }, + "description": "Sample for ListBackups", + "file": "spanner_v1_generated_database_admin_list_backups_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_ListBackups_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_list_backups_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.list_database_operations", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "ListDatabaseOperations" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListDatabaseOperationsAsyncPager", + "shortName": "list_database_operations" + }, + "description": "Sample for ListDatabaseOperations", + "file": "spanner_v1_generated_database_admin_list_database_operations_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_ListDatabaseOperations_async", + "segments": [ + { + "end": 52, + "start": 
27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_list_database_operations_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.list_database_operations", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseOperations", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "ListDatabaseOperations" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.ListDatabaseOperationsRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListDatabaseOperationsPager", + "shortName": "list_database_operations" + }, + "description": "Sample for ListDatabaseOperations", + "file": "spanner_v1_generated_database_admin_list_database_operations_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_ListDatabaseOperations_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_list_database_operations_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.list_database_roles", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseRoles", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "ListDatabaseRoles" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.ListDatabaseRolesRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListDatabaseRolesAsyncPager", + "shortName": "list_database_roles" + }, + "description": "Sample for ListDatabaseRoles", + "file": "spanner_v1_generated_database_admin_list_database_roles_async.py", + "language": "PYTHON", + 
"origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_ListDatabaseRoles_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_list_database_roles_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.list_database_roles", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.ListDatabaseRoles", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "ListDatabaseRoles" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.ListDatabaseRolesRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListDatabaseRolesPager", + "shortName": "list_database_roles" + }, + "description": "Sample for ListDatabaseRoles", + "file": "spanner_v1_generated_database_admin_list_database_roles_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_ListDatabaseRoles_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_list_database_roles_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.list_databases", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "ListDatabases" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.ListDatabasesRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListDatabasesAsyncPager", + "shortName": "list_databases" + }, + "description": "Sample for ListDatabases", + "file": 
"spanner_v1_generated_database_admin_list_databases_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_ListDatabases_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_list_databases_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.list_databases", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "ListDatabases" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.ListDatabasesRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.services.database_admin.pagers.ListDatabasesPager", + "shortName": "list_databases" + }, + "description": "Sample for ListDatabases", + "file": "spanner_v1_generated_database_admin_list_databases_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_ListDatabases_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_list_databases_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.restore_database", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.RestoreDatabase", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "RestoreDatabase" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.RestoreDatabaseRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "database_id", + "type": "str" + }, + { + "name": "backup", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation_async.AsyncOperation", + "shortName": "restore_database" + }, 
+ "description": "Sample for RestoreDatabase", + "file": "spanner_v1_generated_database_admin_restore_database_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_RestoreDatabase_async", + "segments": [ + { + "end": 57, + "start": 27, + "type": "FULL" + }, + { + "end": 57, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 47, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 54, + "start": 48, + "type": "REQUEST_EXECUTION" + }, + { + "end": 58, + "start": 55, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_restore_database_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.restore_database", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.RestoreDatabase", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "RestoreDatabase" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.RestoreDatabaseRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "database_id", + "type": "str" + }, + { + "name": "backup", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation.Operation", + "shortName": "restore_database" + }, + "description": "Sample for RestoreDatabase", + "file": "spanner_v1_generated_database_admin_restore_database_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_RestoreDatabase_sync", + "segments": [ + { + "end": 57, + "start": 27, + "type": "FULL" + }, + { + "end": 57, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 47, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 54, + "start": 48, + "type": "REQUEST_EXECUTION" + }, + { + "end": 58, + "start": 55, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_restore_database_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.set_iam_policy", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.SetIamPolicy", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "SetIamPolicy" + }, + "parameters": [ + { + "name": "request", + "type": "google.iam.v1.iam_policy_pb2.SetIamPolicyRequest" + }, + { + "name": "resource", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.iam.v1.policy_pb2.Policy", + "shortName": "set_iam_policy" + }, + 
"description": "Sample for SetIamPolicy", + "file": "spanner_v1_generated_database_admin_set_iam_policy_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_SetIamPolicy_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 41, + "start": 39, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 42, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_set_iam_policy_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.set_iam_policy", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.SetIamPolicy", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "SetIamPolicy" + }, + "parameters": [ + { + "name": "request", + "type": "google.iam.v1.iam_policy_pb2.SetIamPolicyRequest" + }, + { + "name": "resource", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.iam.v1.policy_pb2.Policy", + "shortName": "set_iam_policy" + }, + "description": "Sample for SetIamPolicy", + "file": "spanner_v1_generated_database_admin_set_iam_policy_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_SetIamPolicy_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 41, + "start": 39, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 42, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_set_iam_policy_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.test_iam_permissions", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.TestIamPermissions", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "TestIamPermissions" + }, + "parameters": [ + { + "name": "request", + "type": "google.iam.v1.iam_policy_pb2.TestIamPermissionsRequest" + }, + { + "name": "resource", + "type": "str" + }, + { + "name": "permissions", + "type": "MutableSequence[str]" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.iam.v1.iam_policy_pb2.TestIamPermissionsResponse", + "shortName": "test_iam_permissions" + }, + "description": "Sample for 
TestIamPermissions", + "file": "spanner_v1_generated_database_admin_test_iam_permissions_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_TestIamPermissions_async", + "segments": [ + { + "end": 53, + "start": 27, + "type": "FULL" + }, + { + "end": 53, + "start": 27, + "type": "SHORT" + }, + { + "end": 41, + "start": 39, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 47, + "start": 42, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 50, + "start": 48, + "type": "REQUEST_EXECUTION" + }, + { + "end": 54, + "start": 51, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_test_iam_permissions_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.test_iam_permissions", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.TestIamPermissions", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "TestIamPermissions" + }, + "parameters": [ + { + "name": "request", + "type": "google.iam.v1.iam_policy_pb2.TestIamPermissionsRequest" + }, + { + "name": "resource", + "type": "str" + }, + { + "name": "permissions", + "type": "MutableSequence[str]" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.iam.v1.iam_policy_pb2.TestIamPermissionsResponse", + "shortName": "test_iam_permissions" + }, + "description": "Sample for TestIamPermissions", + "file": "spanner_v1_generated_database_admin_test_iam_permissions_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_TestIamPermissions_sync", + "segments": [ + { + "end": 53, + "start": 27, + "type": "FULL" + }, + { + "end": 53, + "start": 27, + "type": "SHORT" + }, + { + "end": 41, + "start": 39, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 47, + "start": 42, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 50, + "start": 48, + "type": "REQUEST_EXECUTION" + }, + { + "end": 54, + "start": 51, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_test_iam_permissions_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.update_backup_schedule", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.UpdateBackupSchedule", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "UpdateBackupSchedule" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.UpdateBackupScheduleRequest" + }, + { + "name": "backup_schedule", + "type": "google.cloud.spanner_admin_database_v1.types.BackupSchedule" + }, + { + "name": "update_mask", + "type": "google.protobuf.field_mask_pb2.FieldMask" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + 
}, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.types.BackupSchedule", + "shortName": "update_backup_schedule" + }, + "description": "Sample for UpdateBackupSchedule", + "file": "spanner_v1_generated_database_admin_update_backup_schedule_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_UpdateBackupSchedule_async", + "segments": [ + { + "end": 50, + "start": 27, + "type": "FULL" + }, + { + "end": 50, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 44, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 47, + "start": 45, + "type": "REQUEST_EXECUTION" + }, + { + "end": 51, + "start": 48, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_update_backup_schedule_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.update_backup_schedule", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.UpdateBackupSchedule", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "UpdateBackupSchedule" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.UpdateBackupScheduleRequest" + }, + { + "name": "backup_schedule", + "type": "google.cloud.spanner_admin_database_v1.types.BackupSchedule" + }, + { + "name": "update_mask", + "type": "google.protobuf.field_mask_pb2.FieldMask" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.types.BackupSchedule", + "shortName": "update_backup_schedule" + }, + "description": "Sample for UpdateBackupSchedule", + "file": "spanner_v1_generated_database_admin_update_backup_schedule_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_UpdateBackupSchedule_sync", + "segments": [ + { + "end": 50, + "start": 27, + "type": "FULL" + }, + { + "end": 50, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 44, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 47, + "start": 45, + "type": "REQUEST_EXECUTION" + }, + { + "end": 51, + "start": 48, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_update_backup_schedule_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.update_backup", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.UpdateBackup", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "UpdateBackup" + }, + "parameters": [ + { + "name": "request", + "type": 
"google.cloud.spanner_admin_database_v1.types.UpdateBackupRequest" + }, + { + "name": "backup", + "type": "google.cloud.spanner_admin_database_v1.types.Backup" + }, + { + "name": "update_mask", + "type": "google.protobuf.field_mask_pb2.FieldMask" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.types.Backup", + "shortName": "update_backup" + }, + "description": "Sample for UpdateBackup", + "file": "spanner_v1_generated_database_admin_update_backup_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_UpdateBackup_async", + "segments": [ + { + "end": 50, + "start": 27, + "type": "FULL" + }, + { + "end": 50, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 44, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 47, + "start": 45, + "type": "REQUEST_EXECUTION" + }, + { + "end": 51, + "start": 48, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_update_backup_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.update_backup", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.UpdateBackup", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "UpdateBackup" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.UpdateBackupRequest" + }, + { + "name": "backup", + "type": "google.cloud.spanner_admin_database_v1.types.Backup" + }, + { + "name": "update_mask", + "type": "google.protobuf.field_mask_pb2.FieldMask" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_database_v1.types.Backup", + "shortName": "update_backup" + }, + "description": "Sample for UpdateBackup", + "file": "spanner_v1_generated_database_admin_update_backup_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_UpdateBackup_sync", + "segments": [ + { + "end": 50, + "start": 27, + "type": "FULL" + }, + { + "end": 50, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 44, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 47, + "start": 45, + "type": "REQUEST_EXECUTION" + }, + { + "end": 51, + "start": 48, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_update_backup_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.update_database_ddl", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabaseDdl", + "service": { + 
"fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "UpdateDatabaseDdl" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.UpdateDatabaseDdlRequest" + }, + { + "name": "database", + "type": "str" + }, + { + "name": "statements", + "type": "MutableSequence[str]" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation_async.AsyncOperation", + "shortName": "update_database_ddl" + }, + "description": "Sample for UpdateDatabaseDdl", + "file": "spanner_v1_generated_database_admin_update_database_ddl_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_UpdateDatabaseDdl_async", + "segments": [ + { + "end": 56, + "start": 27, + "type": "FULL" + }, + { + "end": 56, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 53, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 57, + "start": 54, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_update_database_ddl_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.update_database_ddl", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabaseDdl", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "UpdateDatabaseDdl" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.UpdateDatabaseDdlRequest" + }, + { + "name": "database", + "type": "str" + }, + { + "name": "statements", + "type": "MutableSequence[str]" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation.Operation", + "shortName": "update_database_ddl" + }, + "description": "Sample for UpdateDatabaseDdl", + "file": "spanner_v1_generated_database_admin_update_database_ddl_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_UpdateDatabaseDdl_sync", + "segments": [ + { + "end": 56, + "start": 27, + "type": "FULL" + }, + { + "end": 56, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 53, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 57, + "start": 54, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_update_database_ddl_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient", + "shortName": "DatabaseAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminAsyncClient.update_database", + 
"method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabase", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "UpdateDatabase" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.UpdateDatabaseRequest" + }, + { + "name": "database", + "type": "google.cloud.spanner_admin_database_v1.types.Database" + }, + { + "name": "update_mask", + "type": "google.protobuf.field_mask_pb2.FieldMask" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation_async.AsyncOperation", + "shortName": "update_database" + }, + "description": "Sample for UpdateDatabase", + "file": "spanner_v1_generated_database_admin_update_database_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_UpdateDatabase_async", + "segments": [ + { + "end": 58, + "start": 27, + "type": "FULL" + }, + { + "end": 58, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 48, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 55, + "start": 49, + "type": "REQUEST_EXECUTION" + }, + { + "end": 59, + "start": 56, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_update_database_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient", + "shortName": "DatabaseAdminClient" + }, + "fullName": "google.cloud.spanner_admin_database_v1.DatabaseAdminClient.update_database", + "method": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin.UpdateDatabase", + "service": { + "fullName": "google.spanner.admin.database.v1.DatabaseAdmin", + "shortName": "DatabaseAdmin" + }, + "shortName": "UpdateDatabase" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_database_v1.types.UpdateDatabaseRequest" + }, + { + "name": "database", + "type": "google.cloud.spanner_admin_database_v1.types.Database" + }, + { + "name": "update_mask", + "type": "google.protobuf.field_mask_pb2.FieldMask" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation.Operation", + "shortName": "update_database" + }, + "description": "Sample for UpdateDatabase", + "file": "spanner_v1_generated_database_admin_update_database_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_DatabaseAdmin_UpdateDatabase_sync", + "segments": [ + { + "end": 58, + "start": 27, + "type": "FULL" + }, + { + "end": 58, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 48, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 55, + "start": 49, + "type": "REQUEST_EXECUTION" + }, + { + "end": 59, + "start": 56, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_database_admin_update_database_sync.py" + } + ] +} diff --git a/samples/generated_samples/snippet_metadata_google.spanner.admin.instance.v1.json 
b/samples/generated_samples/snippet_metadata_google.spanner.admin.instance.v1.json new file mode 100644 index 0000000000..05a040bd1b --- /dev/null +++ b/samples/generated_samples/snippet_metadata_google.spanner.admin.instance.v1.json @@ -0,0 +1,3450 @@ +{ + "clientLibrary": { + "apis": [ + { + "id": "google.spanner.admin.instance.v1", + "version": "v1" + } + ], + "language": "PYTHON", + "name": "google-cloud-spanner-admin-instance", + "version": "3.58.0" + }, + "snippets": [ + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.create_instance_config", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.CreateInstanceConfig", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "CreateInstanceConfig" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.CreateInstanceConfigRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "instance_config", + "type": "google.cloud.spanner_admin_instance_v1.types.InstanceConfig" + }, + { + "name": "instance_config_id", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation_async.AsyncOperation", + "shortName": "create_instance_config" + }, + "description": "Sample for CreateInstanceConfig", + "file": "spanner_v1_generated_instance_admin_create_instance_config_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_CreateInstanceConfig_async", + "segments": [ + { + "end": 56, + "start": 27, + "type": "FULL" + }, + { + "end": 56, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 53, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 57, + "start": 54, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_create_instance_config_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + "shortName": "InstanceAdminClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient.create_instance_config", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.CreateInstanceConfig", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "CreateInstanceConfig" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.CreateInstanceConfigRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "instance_config", + "type": "google.cloud.spanner_admin_instance_v1.types.InstanceConfig" + }, + { + "name": "instance_config_id", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + 
"resultType": "google.api_core.operation.Operation", + "shortName": "create_instance_config" + }, + "description": "Sample for CreateInstanceConfig", + "file": "spanner_v1_generated_instance_admin_create_instance_config_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_CreateInstanceConfig_sync", + "segments": [ + { + "end": 56, + "start": 27, + "type": "FULL" + }, + { + "end": 56, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 53, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 57, + "start": 54, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_create_instance_config_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.create_instance_partition", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.CreateInstancePartition", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "CreateInstancePartition" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.CreateInstancePartitionRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "instance_partition", + "type": "google.cloud.spanner_admin_instance_v1.types.InstancePartition" + }, + { + "name": "instance_partition_id", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation_async.AsyncOperation", + "shortName": "create_instance_partition" + }, + "description": "Sample for CreateInstancePartition", + "file": "spanner_v1_generated_instance_admin_create_instance_partition_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_CreateInstancePartition_async", + "segments": [ + { + "end": 63, + "start": 27, + "type": "FULL" + }, + { + "end": 63, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 53, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 60, + "start": 54, + "type": "REQUEST_EXECUTION" + }, + { + "end": 64, + "start": 61, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_create_instance_partition_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + "shortName": "InstanceAdminClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient.create_instance_partition", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.CreateInstancePartition", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "CreateInstancePartition" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.CreateInstancePartitionRequest" + }, + { 
+ "name": "parent", + "type": "str" + }, + { + "name": "instance_partition", + "type": "google.cloud.spanner_admin_instance_v1.types.InstancePartition" + }, + { + "name": "instance_partition_id", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation.Operation", + "shortName": "create_instance_partition" + }, + "description": "Sample for CreateInstancePartition", + "file": "spanner_v1_generated_instance_admin_create_instance_partition_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_CreateInstancePartition_sync", + "segments": [ + { + "end": 63, + "start": 27, + "type": "FULL" + }, + { + "end": 63, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 53, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 60, + "start": 54, + "type": "REQUEST_EXECUTION" + }, + { + "end": 64, + "start": 61, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_create_instance_partition_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.create_instance", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "CreateInstance" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.CreateInstanceRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "instance_id", + "type": "str" + }, + { + "name": "instance", + "type": "google.cloud.spanner_admin_instance_v1.types.Instance" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation_async.AsyncOperation", + "shortName": "create_instance" + }, + "description": "Sample for CreateInstance", + "file": "spanner_v1_generated_instance_admin_create_instance_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_CreateInstance_async", + "segments": [ + { + "end": 62, + "start": 27, + "type": "FULL" + }, + { + "end": 62, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 52, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 59, + "start": 53, + "type": "REQUEST_EXECUTION" + }, + { + "end": 63, + "start": 60, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_create_instance_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + "shortName": "InstanceAdminClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient.create_instance", + "method": { + "fullName": 
"google.spanner.admin.instance.v1.InstanceAdmin.CreateInstance", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "CreateInstance" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.CreateInstanceRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "instance_id", + "type": "str" + }, + { + "name": "instance", + "type": "google.cloud.spanner_admin_instance_v1.types.Instance" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation.Operation", + "shortName": "create_instance" + }, + "description": "Sample for CreateInstance", + "file": "spanner_v1_generated_instance_admin_create_instance_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_CreateInstance_sync", + "segments": [ + { + "end": 62, + "start": 27, + "type": "FULL" + }, + { + "end": 62, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 52, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 59, + "start": 53, + "type": "REQUEST_EXECUTION" + }, + { + "end": 63, + "start": 60, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_create_instance_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.delete_instance_config", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstanceConfig", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "DeleteInstanceConfig" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.DeleteInstanceConfigRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "shortName": "delete_instance_config" + }, + "description": "Sample for DeleteInstanceConfig", + "file": "spanner_v1_generated_instance_admin_delete_instance_config_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_DeleteInstanceConfig_async", + "segments": [ + { + "end": 49, + "start": 27, + "type": "FULL" + }, + { + "end": 49, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_delete_instance_config_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + "shortName": "InstanceAdminClient" + }, + "fullName": 
"google.cloud.spanner_admin_instance_v1.InstanceAdminClient.delete_instance_config", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstanceConfig", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "DeleteInstanceConfig" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.DeleteInstanceConfigRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "shortName": "delete_instance_config" + }, + "description": "Sample for DeleteInstanceConfig", + "file": "spanner_v1_generated_instance_admin_delete_instance_config_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_DeleteInstanceConfig_sync", + "segments": [ + { + "end": 49, + "start": 27, + "type": "FULL" + }, + { + "end": 49, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_delete_instance_config_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.delete_instance_partition", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstancePartition", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "DeleteInstancePartition" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.DeleteInstancePartitionRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "shortName": "delete_instance_partition" + }, + "description": "Sample for DeleteInstancePartition", + "file": "spanner_v1_generated_instance_admin_delete_instance_partition_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_DeleteInstancePartition_async", + "segments": [ + { + "end": 49, + "start": 27, + "type": "FULL" + }, + { + "end": 49, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_delete_instance_partition_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + "shortName": "InstanceAdminClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient.delete_instance_partition", + "method": { + "fullName": 
"google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstancePartition", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "DeleteInstancePartition" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.DeleteInstancePartitionRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "shortName": "delete_instance_partition" + }, + "description": "Sample for DeleteInstancePartition", + "file": "spanner_v1_generated_instance_admin_delete_instance_partition_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_DeleteInstancePartition_sync", + "segments": [ + { + "end": 49, + "start": 27, + "type": "FULL" + }, + { + "end": 49, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_delete_instance_partition_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.delete_instance", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstance", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "DeleteInstance" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.DeleteInstanceRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "shortName": "delete_instance" + }, + "description": "Sample for DeleteInstance", + "file": "spanner_v1_generated_instance_admin_delete_instance_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_DeleteInstance_async", + "segments": [ + { + "end": 49, + "start": 27, + "type": "FULL" + }, + { + "end": 49, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_delete_instance_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + "shortName": "InstanceAdminClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient.delete_instance", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstance", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + 
"shortName": "DeleteInstance" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.DeleteInstanceRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "shortName": "delete_instance" + }, + "description": "Sample for DeleteInstance", + "file": "spanner_v1_generated_instance_admin_delete_instance_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_DeleteInstance_sync", + "segments": [ + { + "end": 49, + "start": 27, + "type": "FULL" + }, + { + "end": 49, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_delete_instance_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.get_iam_policy", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.GetIamPolicy", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "GetIamPolicy" + }, + "parameters": [ + { + "name": "request", + "type": "google.iam.v1.iam_policy_pb2.GetIamPolicyRequest" + }, + { + "name": "resource", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.iam.v1.policy_pb2.Policy", + "shortName": "get_iam_policy" + }, + "description": "Sample for GetIamPolicy", + "file": "spanner_v1_generated_instance_admin_get_iam_policy_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_GetIamPolicy_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 41, + "start": 39, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 42, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_get_iam_policy_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + "shortName": "InstanceAdminClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient.get_iam_policy", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.GetIamPolicy", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "GetIamPolicy" + }, + "parameters": [ + { + "name": "request", + "type": "google.iam.v1.iam_policy_pb2.GetIamPolicyRequest" + }, + { + "name": "resource", + "type": "str" + }, + { + "name": "retry", + 
"type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.iam.v1.policy_pb2.Policy", + "shortName": "get_iam_policy" + }, + "description": "Sample for GetIamPolicy", + "file": "spanner_v1_generated_instance_admin_get_iam_policy_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_GetIamPolicy_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 41, + "start": 39, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 42, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_get_iam_policy_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.get_instance_config", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.GetInstanceConfig", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "GetInstanceConfig" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.GetInstanceConfigRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_instance_v1.types.InstanceConfig", + "shortName": "get_instance_config" + }, + "description": "Sample for GetInstanceConfig", + "file": "spanner_v1_generated_instance_admin_get_instance_config_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_GetInstanceConfig_async", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_get_instance_config_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + "shortName": "InstanceAdminClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient.get_instance_config", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.GetInstanceConfig", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "GetInstanceConfig" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.GetInstanceConfigRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, 
+ { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_instance_v1.types.InstanceConfig", + "shortName": "get_instance_config" + }, + "description": "Sample for GetInstanceConfig", + "file": "spanner_v1_generated_instance_admin_get_instance_config_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_GetInstanceConfig_sync", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_get_instance_config_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.get_instance_partition", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.GetInstancePartition", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "GetInstancePartition" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.GetInstancePartitionRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_instance_v1.types.InstancePartition", + "shortName": "get_instance_partition" + }, + "description": "Sample for GetInstancePartition", + "file": "spanner_v1_generated_instance_admin_get_instance_partition_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_GetInstancePartition_async", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_get_instance_partition_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + "shortName": "InstanceAdminClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient.get_instance_partition", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.GetInstancePartition", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "GetInstancePartition" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.GetInstancePartitionRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": 
"retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_instance_v1.types.InstancePartition", + "shortName": "get_instance_partition" + }, + "description": "Sample for GetInstancePartition", + "file": "spanner_v1_generated_instance_admin_get_instance_partition_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_GetInstancePartition_sync", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_get_instance_partition_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.get_instance", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.GetInstance", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "GetInstance" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.GetInstanceRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_instance_v1.types.Instance", + "shortName": "get_instance" + }, + "description": "Sample for GetInstance", + "file": "spanner_v1_generated_instance_admin_get_instance_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_GetInstance_async", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_get_instance_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + "shortName": "InstanceAdminClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient.get_instance", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.GetInstance", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "GetInstance" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.GetInstanceRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + 
"name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_instance_v1.types.Instance", + "shortName": "get_instance" + }, + "description": "Sample for GetInstance", + "file": "spanner_v1_generated_instance_admin_get_instance_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_GetInstance_sync", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_get_instance_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.list_instance_config_operations", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigOperations", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "ListInstanceConfigOperations" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigOperationsRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstanceConfigOperationsAsyncPager", + "shortName": "list_instance_config_operations" + }, + "description": "Sample for ListInstanceConfigOperations", + "file": "spanner_v1_generated_instance_admin_list_instance_config_operations_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_ListInstanceConfigOperations_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_list_instance_config_operations_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + "shortName": "InstanceAdminClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient.list_instance_config_operations", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigOperations", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "ListInstanceConfigOperations" + }, + "parameters": [ + { + "name": "request", + "type": 
"google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigOperationsRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstanceConfigOperationsPager", + "shortName": "list_instance_config_operations" + }, + "description": "Sample for ListInstanceConfigOperations", + "file": "spanner_v1_generated_instance_admin_list_instance_config_operations_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_ListInstanceConfigOperations_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_list_instance_config_operations_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.list_instance_configs", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "ListInstanceConfigs" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstanceConfigsAsyncPager", + "shortName": "list_instance_configs" + }, + "description": "Sample for ListInstanceConfigs", + "file": "spanner_v1_generated_instance_admin_list_instance_configs_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_ListInstanceConfigs_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_list_instance_configs_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + "shortName": "InstanceAdminClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient.list_instance_configs", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.ListInstanceConfigs", + "service": { + 
"fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "ListInstanceConfigs" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.ListInstanceConfigsRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstanceConfigsPager", + "shortName": "list_instance_configs" + }, + "description": "Sample for ListInstanceConfigs", + "file": "spanner_v1_generated_instance_admin_list_instance_configs_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_ListInstanceConfigs_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_list_instance_configs_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.list_instance_partition_operations", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitionOperations", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "ListInstancePartitionOperations" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionOperationsRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstancePartitionOperationsAsyncPager", + "shortName": "list_instance_partition_operations" + }, + "description": "Sample for ListInstancePartitionOperations", + "file": "spanner_v1_generated_instance_admin_list_instance_partition_operations_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_ListInstancePartitionOperations_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_list_instance_partition_operations_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + 
"shortName": "InstanceAdminClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient.list_instance_partition_operations", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitionOperations", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "ListInstancePartitionOperations" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionOperationsRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstancePartitionOperationsPager", + "shortName": "list_instance_partition_operations" + }, + "description": "Sample for ListInstancePartitionOperations", + "file": "spanner_v1_generated_instance_admin_list_instance_partition_operations_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_ListInstancePartitionOperations_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_list_instance_partition_operations_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.list_instance_partitions", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitions", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "ListInstancePartitions" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionsRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstancePartitionsAsyncPager", + "shortName": "list_instance_partitions" + }, + "description": "Sample for ListInstancePartitions", + "file": "spanner_v1_generated_instance_admin_list_instance_partitions_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_ListInstancePartitions_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, 
+ "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_list_instance_partitions_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + "shortName": "InstanceAdminClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient.list_instance_partitions", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.ListInstancePartitions", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "ListInstancePartitions" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.ListInstancePartitionsRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstancePartitionsPager", + "shortName": "list_instance_partitions" + }, + "description": "Sample for ListInstancePartitions", + "file": "spanner_v1_generated_instance_admin_list_instance_partitions_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_ListInstancePartitions_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_list_instance_partitions_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.list_instances", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.ListInstances", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "ListInstances" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.ListInstancesRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstancesAsyncPager", + "shortName": "list_instances" + }, + "description": "Sample for ListInstances", + "file": "spanner_v1_generated_instance_admin_list_instances_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_ListInstances_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": 
"REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_list_instances_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + "shortName": "InstanceAdminClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient.list_instances", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.ListInstances", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "ListInstances" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.ListInstancesRequest" + }, + { + "name": "parent", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_admin_instance_v1.services.instance_admin.pagers.ListInstancesPager", + "shortName": "list_instances" + }, + "description": "Sample for ListInstances", + "file": "spanner_v1_generated_instance_admin_list_instances_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_ListInstances_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_list_instances_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.move_instance", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.MoveInstance", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "MoveInstance" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.MoveInstanceRequest" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation_async.AsyncOperation", + "shortName": "move_instance" + }, + "description": "Sample for MoveInstance", + "file": "spanner_v1_generated_instance_admin_move_instance_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_MoveInstance_async", + "segments": [ + { + "end": 56, + "start": 27, + "type": "FULL" + }, + { + "end": 56, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 53, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + 
"end": 57, + "start": 54, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_move_instance_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + "shortName": "InstanceAdminClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient.move_instance", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.MoveInstance", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "MoveInstance" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.MoveInstanceRequest" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation.Operation", + "shortName": "move_instance" + }, + "description": "Sample for MoveInstance", + "file": "spanner_v1_generated_instance_admin_move_instance_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_MoveInstance_sync", + "segments": [ + { + "end": 56, + "start": 27, + "type": "FULL" + }, + { + "end": 56, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 53, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 57, + "start": 54, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_move_instance_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.set_iam_policy", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.SetIamPolicy", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "SetIamPolicy" + }, + "parameters": [ + { + "name": "request", + "type": "google.iam.v1.iam_policy_pb2.SetIamPolicyRequest" + }, + { + "name": "resource", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.iam.v1.policy_pb2.Policy", + "shortName": "set_iam_policy" + }, + "description": "Sample for SetIamPolicy", + "file": "spanner_v1_generated_instance_admin_set_iam_policy_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_SetIamPolicy_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 41, + "start": 39, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 42, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_set_iam_policy_async.py" + }, + { + "canonical": true, + "clientMethod": { + 
"client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + "shortName": "InstanceAdminClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient.set_iam_policy", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.SetIamPolicy", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "SetIamPolicy" + }, + "parameters": [ + { + "name": "request", + "type": "google.iam.v1.iam_policy_pb2.SetIamPolicyRequest" + }, + { + "name": "resource", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.iam.v1.policy_pb2.Policy", + "shortName": "set_iam_policy" + }, + "description": "Sample for SetIamPolicy", + "file": "spanner_v1_generated_instance_admin_set_iam_policy_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_SetIamPolicy_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 41, + "start": 39, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 42, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_set_iam_policy_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.test_iam_permissions", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.TestIamPermissions", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "TestIamPermissions" + }, + "parameters": [ + { + "name": "request", + "type": "google.iam.v1.iam_policy_pb2.TestIamPermissionsRequest" + }, + { + "name": "resource", + "type": "str" + }, + { + "name": "permissions", + "type": "MutableSequence[str]" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.iam.v1.iam_policy_pb2.TestIamPermissionsResponse", + "shortName": "test_iam_permissions" + }, + "description": "Sample for TestIamPermissions", + "file": "spanner_v1_generated_instance_admin_test_iam_permissions_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_TestIamPermissions_async", + "segments": [ + { + "end": 53, + "start": 27, + "type": "FULL" + }, + { + "end": 53, + "start": 27, + "type": "SHORT" + }, + { + "end": 41, + "start": 39, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 47, + "start": 42, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 50, + "start": 48, + "type": "REQUEST_EXECUTION" + }, + { + "end": 54, + "start": 51, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_test_iam_permissions_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + 
"fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + "shortName": "InstanceAdminClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient.test_iam_permissions", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.TestIamPermissions", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "TestIamPermissions" + }, + "parameters": [ + { + "name": "request", + "type": "google.iam.v1.iam_policy_pb2.TestIamPermissionsRequest" + }, + { + "name": "resource", + "type": "str" + }, + { + "name": "permissions", + "type": "MutableSequence[str]" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.iam.v1.iam_policy_pb2.TestIamPermissionsResponse", + "shortName": "test_iam_permissions" + }, + "description": "Sample for TestIamPermissions", + "file": "spanner_v1_generated_instance_admin_test_iam_permissions_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_TestIamPermissions_sync", + "segments": [ + { + "end": 53, + "start": 27, + "type": "FULL" + }, + { + "end": 53, + "start": 27, + "type": "SHORT" + }, + { + "end": 41, + "start": 39, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 47, + "start": 42, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 50, + "start": 48, + "type": "REQUEST_EXECUTION" + }, + { + "end": 54, + "start": 51, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_test_iam_permissions_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.update_instance_config", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstanceConfig", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "UpdateInstanceConfig" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.UpdateInstanceConfigRequest" + }, + { + "name": "instance_config", + "type": "google.cloud.spanner_admin_instance_v1.types.InstanceConfig" + }, + { + "name": "update_mask", + "type": "google.protobuf.field_mask_pb2.FieldMask" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation_async.AsyncOperation", + "shortName": "update_instance_config" + }, + "description": "Sample for UpdateInstanceConfig", + "file": "spanner_v1_generated_instance_admin_update_instance_config_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_UpdateInstanceConfig_async", + "segments": [ + { + "end": 54, + "start": 27, + "type": "FULL" + }, + { + "end": 54, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 44, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 51, + "start": 45, + "type": 
"REQUEST_EXECUTION" + }, + { + "end": 55, + "start": 52, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_update_instance_config_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + "shortName": "InstanceAdminClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient.update_instance_config", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstanceConfig", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "UpdateInstanceConfig" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.UpdateInstanceConfigRequest" + }, + { + "name": "instance_config", + "type": "google.cloud.spanner_admin_instance_v1.types.InstanceConfig" + }, + { + "name": "update_mask", + "type": "google.protobuf.field_mask_pb2.FieldMask" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation.Operation", + "shortName": "update_instance_config" + }, + "description": "Sample for UpdateInstanceConfig", + "file": "spanner_v1_generated_instance_admin_update_instance_config_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_UpdateInstanceConfig_sync", + "segments": [ + { + "end": 54, + "start": 27, + "type": "FULL" + }, + { + "end": 54, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 44, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 51, + "start": 45, + "type": "REQUEST_EXECUTION" + }, + { + "end": 55, + "start": 52, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_update_instance_config_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.update_instance_partition", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstancePartition", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "UpdateInstancePartition" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.UpdateInstancePartitionRequest" + }, + { + "name": "instance_partition", + "type": "google.cloud.spanner_admin_instance_v1.types.InstancePartition" + }, + { + "name": "field_mask", + "type": "google.protobuf.field_mask_pb2.FieldMask" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation_async.AsyncOperation", + "shortName": "update_instance_partition" + }, + "description": "Sample for UpdateInstancePartition", + "file": "spanner_v1_generated_instance_admin_update_instance_partition_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": 
"spanner_v1_generated_InstanceAdmin_UpdateInstancePartition_async", + "segments": [ + { + "end": 61, + "start": 27, + "type": "FULL" + }, + { + "end": 61, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 51, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 58, + "start": 52, + "type": "REQUEST_EXECUTION" + }, + { + "end": 62, + "start": 59, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_update_instance_partition_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + "shortName": "InstanceAdminClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient.update_instance_partition", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstancePartition", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "UpdateInstancePartition" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.UpdateInstancePartitionRequest" + }, + { + "name": "instance_partition", + "type": "google.cloud.spanner_admin_instance_v1.types.InstancePartition" + }, + { + "name": "field_mask", + "type": "google.protobuf.field_mask_pb2.FieldMask" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation.Operation", + "shortName": "update_instance_partition" + }, + "description": "Sample for UpdateInstancePartition", + "file": "spanner_v1_generated_instance_admin_update_instance_partition_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_UpdateInstancePartition_sync", + "segments": [ + { + "end": 61, + "start": 27, + "type": "FULL" + }, + { + "end": 61, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 51, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 58, + "start": 52, + "type": "REQUEST_EXECUTION" + }, + { + "end": 62, + "start": 59, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_update_instance_partition_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient", + "shortName": "InstanceAdminAsyncClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminAsyncClient.update_instance", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "UpdateInstance" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.UpdateInstanceRequest" + }, + { + "name": "instance", + "type": "google.cloud.spanner_admin_instance_v1.types.Instance" + }, + { + "name": "field_mask", + "type": "google.protobuf.field_mask_pb2.FieldMask" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, 
Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation_async.AsyncOperation", + "shortName": "update_instance" + }, + "description": "Sample for UpdateInstance", + "file": "spanner_v1_generated_instance_admin_update_instance_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_UpdateInstance_async", + "segments": [ + { + "end": 60, + "start": 27, + "type": "FULL" + }, + { + "end": 60, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 50, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 57, + "start": 51, + "type": "REQUEST_EXECUTION" + }, + { + "end": 61, + "start": 58, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_update_instance_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient", + "shortName": "InstanceAdminClient" + }, + "fullName": "google.cloud.spanner_admin_instance_v1.InstanceAdminClient.update_instance", + "method": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance", + "service": { + "fullName": "google.spanner.admin.instance.v1.InstanceAdmin", + "shortName": "InstanceAdmin" + }, + "shortName": "UpdateInstance" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_admin_instance_v1.types.UpdateInstanceRequest" + }, + { + "name": "instance", + "type": "google.cloud.spanner_admin_instance_v1.types.Instance" + }, + { + "name": "field_mask", + "type": "google.protobuf.field_mask_pb2.FieldMask" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.api_core.operation.Operation", + "shortName": "update_instance" + }, + "description": "Sample for UpdateInstance", + "file": "spanner_v1_generated_instance_admin_update_instance_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_InstanceAdmin_UpdateInstance_sync", + "segments": [ + { + "end": 60, + "start": 27, + "type": "FULL" + }, + { + "end": 60, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 50, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 57, + "start": 51, + "type": "REQUEST_EXECUTION" + }, + { + "end": 61, + "start": 58, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_instance_admin_update_instance_sync.py" + } + ] +} diff --git a/samples/generated_samples/snippet_metadata_google.spanner.v1.json b/samples/generated_samples/snippet_metadata_google.spanner.v1.json new file mode 100644 index 0000000000..1eb4c96ad5 --- /dev/null +++ b/samples/generated_samples/snippet_metadata_google.spanner.v1.json @@ -0,0 +1,2579 @@ +{ + "clientLibrary": { + "apis": [ + { + "id": "google.spanner.v1", + "version": "v1" + } + ], + "language": "PYTHON", + "name": "google-cloud-spanner", + "version": "3.58.0" + }, + "snippets": [ + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient", + "shortName": "SpannerAsyncClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient.batch_create_sessions", + "method": { + "fullName": 
"google.spanner.v1.Spanner.BatchCreateSessions", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "BatchCreateSessions" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.BatchCreateSessionsRequest" + }, + { + "name": "database", + "type": "str" + }, + { + "name": "session_count", + "type": "int" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.types.BatchCreateSessionsResponse", + "shortName": "batch_create_sessions" + }, + "description": "Sample for BatchCreateSessions", + "file": "spanner_v1_generated_spanner_batch_create_sessions_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_BatchCreateSessions_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_batch_create_sessions_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_v1.SpannerClient", + "shortName": "SpannerClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerClient.batch_create_sessions", + "method": { + "fullName": "google.spanner.v1.Spanner.BatchCreateSessions", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "BatchCreateSessions" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.BatchCreateSessionsRequest" + }, + { + "name": "database", + "type": "str" + }, + { + "name": "session_count", + "type": "int" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.types.BatchCreateSessionsResponse", + "shortName": "batch_create_sessions" + }, + "description": "Sample for BatchCreateSessions", + "file": "spanner_v1_generated_spanner_batch_create_sessions_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_BatchCreateSessions_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_batch_create_sessions_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient", + "shortName": "SpannerAsyncClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient.batch_write", + "method": { + "fullName": "google.spanner.v1.Spanner.BatchWrite", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": 
"Spanner" + }, + "shortName": "BatchWrite" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.BatchWriteRequest" + }, + { + "name": "session", + "type": "str" + }, + { + "name": "mutation_groups", + "type": "MutableSequence[google.cloud.spanner_v1.types.BatchWriteRequest.MutationGroup]" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "Iterable[google.cloud.spanner_v1.types.BatchWriteResponse]", + "shortName": "batch_write" + }, + "description": "Sample for BatchWrite", + "file": "spanner_v1_generated_spanner_batch_write_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_BatchWrite_async", + "segments": [ + { + "end": 56, + "start": 27, + "type": "FULL" + }, + { + "end": 56, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 49, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 52, + "start": 50, + "type": "REQUEST_EXECUTION" + }, + { + "end": 57, + "start": 53, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_batch_write_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_v1.SpannerClient", + "shortName": "SpannerClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerClient.batch_write", + "method": { + "fullName": "google.spanner.v1.Spanner.BatchWrite", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "BatchWrite" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.BatchWriteRequest" + }, + { + "name": "session", + "type": "str" + }, + { + "name": "mutation_groups", + "type": "MutableSequence[google.cloud.spanner_v1.types.BatchWriteRequest.MutationGroup]" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "Iterable[google.cloud.spanner_v1.types.BatchWriteResponse]", + "shortName": "batch_write" + }, + "description": "Sample for BatchWrite", + "file": "spanner_v1_generated_spanner_batch_write_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_BatchWrite_sync", + "segments": [ + { + "end": 56, + "start": 27, + "type": "FULL" + }, + { + "end": 56, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 49, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 52, + "start": 50, + "type": "REQUEST_EXECUTION" + }, + { + "end": 57, + "start": 53, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_batch_write_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient", + "shortName": "SpannerAsyncClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient.begin_transaction", + "method": { + "fullName": "google.spanner.v1.Spanner.BeginTransaction", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "BeginTransaction" + }, + "parameters": [ + { + "name": "request", + "type": 
"google.cloud.spanner_v1.types.BeginTransactionRequest" + }, + { + "name": "session", + "type": "str" + }, + { + "name": "options", + "type": "google.cloud.spanner_v1.types.TransactionOptions" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.types.Transaction", + "shortName": "begin_transaction" + }, + "description": "Sample for BeginTransaction", + "file": "spanner_v1_generated_spanner_begin_transaction_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_BeginTransaction_async", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_begin_transaction_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_v1.SpannerClient", + "shortName": "SpannerClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerClient.begin_transaction", + "method": { + "fullName": "google.spanner.v1.Spanner.BeginTransaction", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "BeginTransaction" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.BeginTransactionRequest" + }, + { + "name": "session", + "type": "str" + }, + { + "name": "options", + "type": "google.cloud.spanner_v1.types.TransactionOptions" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.types.Transaction", + "shortName": "begin_transaction" + }, + "description": "Sample for BeginTransaction", + "file": "spanner_v1_generated_spanner_begin_transaction_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_BeginTransaction_sync", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_begin_transaction_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient", + "shortName": "SpannerAsyncClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient.commit", + "method": { + "fullName": "google.spanner.v1.Spanner.Commit", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "Commit" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.CommitRequest" + }, + { + "name": "session", + "type": "str" + }, + { + "name": "transaction_id", + "type": "bytes" + }, + { 
+ "name": "mutations", + "type": "MutableSequence[google.cloud.spanner_v1.types.Mutation]" + }, + { + "name": "single_use_transaction", + "type": "google.cloud.spanner_v1.types.TransactionOptions" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.types.CommitResponse", + "shortName": "commit" + }, + "description": "Sample for Commit", + "file": "spanner_v1_generated_spanner_commit_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_Commit_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_commit_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_v1.SpannerClient", + "shortName": "SpannerClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerClient.commit", + "method": { + "fullName": "google.spanner.v1.Spanner.Commit", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "Commit" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.CommitRequest" + }, + { + "name": "session", + "type": "str" + }, + { + "name": "transaction_id", + "type": "bytes" + }, + { + "name": "mutations", + "type": "MutableSequence[google.cloud.spanner_v1.types.Mutation]" + }, + { + "name": "single_use_transaction", + "type": "google.cloud.spanner_v1.types.TransactionOptions" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.types.CommitResponse", + "shortName": "commit" + }, + "description": "Sample for Commit", + "file": "spanner_v1_generated_spanner_commit_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_Commit_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_commit_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient", + "shortName": "SpannerAsyncClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient.create_session", + "method": { + "fullName": "google.spanner.v1.Spanner.CreateSession", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "CreateSession" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.CreateSessionRequest" + }, + { + "name": "database", + "type": "str" + 
}, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.types.Session", + "shortName": "create_session" + }, + "description": "Sample for CreateSession", + "file": "spanner_v1_generated_spanner_create_session_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_CreateSession_async", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_create_session_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_v1.SpannerClient", + "shortName": "SpannerClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerClient.create_session", + "method": { + "fullName": "google.spanner.v1.Spanner.CreateSession", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "CreateSession" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.CreateSessionRequest" + }, + { + "name": "database", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.types.Session", + "shortName": "create_session" + }, + "description": "Sample for CreateSession", + "file": "spanner_v1_generated_spanner_create_session_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_CreateSession_sync", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_create_session_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient", + "shortName": "SpannerAsyncClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient.delete_session", + "method": { + "fullName": "google.spanner.v1.Spanner.DeleteSession", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "DeleteSession" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.DeleteSessionRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "shortName": "delete_session" + }, + "description": "Sample for DeleteSession", + "file": "spanner_v1_generated_spanner_delete_session_async.py", + "language": 
"PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_DeleteSession_async", + "segments": [ + { + "end": 49, + "start": 27, + "type": "FULL" + }, + { + "end": 49, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_delete_session_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_v1.SpannerClient", + "shortName": "SpannerClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerClient.delete_session", + "method": { + "fullName": "google.spanner.v1.Spanner.DeleteSession", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "DeleteSession" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.DeleteSessionRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "shortName": "delete_session" + }, + "description": "Sample for DeleteSession", + "file": "spanner_v1_generated_spanner_delete_session_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_DeleteSession_sync", + "segments": [ + { + "end": 49, + "start": 27, + "type": "FULL" + }, + { + "end": 49, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_delete_session_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient", + "shortName": "SpannerAsyncClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient.execute_batch_dml", + "method": { + "fullName": "google.spanner.v1.Spanner.ExecuteBatchDml", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "ExecuteBatchDml" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.ExecuteBatchDmlRequest" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.types.ExecuteBatchDmlResponse", + "shortName": "execute_batch_dml" + }, + "description": "Sample for ExecuteBatchDml", + "file": "spanner_v1_generated_spanner_execute_batch_dml_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_ExecuteBatchDml_async", + "segments": [ + { + "end": 56, + "start": 27, + "type": "FULL" + }, + { + "end": 56, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 50, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 53, + "start": 51, + "type": "REQUEST_EXECUTION" + }, + { + "end": 57, + "start": 54, + "type": 
"RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_execute_batch_dml_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_v1.SpannerClient", + "shortName": "SpannerClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerClient.execute_batch_dml", + "method": { + "fullName": "google.spanner.v1.Spanner.ExecuteBatchDml", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "ExecuteBatchDml" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.ExecuteBatchDmlRequest" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.types.ExecuteBatchDmlResponse", + "shortName": "execute_batch_dml" + }, + "description": "Sample for ExecuteBatchDml", + "file": "spanner_v1_generated_spanner_execute_batch_dml_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_ExecuteBatchDml_sync", + "segments": [ + { + "end": 56, + "start": 27, + "type": "FULL" + }, + { + "end": 56, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 50, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 53, + "start": 51, + "type": "REQUEST_EXECUTION" + }, + { + "end": 57, + "start": 54, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_execute_batch_dml_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient", + "shortName": "SpannerAsyncClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient.execute_sql", + "method": { + "fullName": "google.spanner.v1.Spanner.ExecuteSql", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "ExecuteSql" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.ExecuteSqlRequest" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.types.ResultSet", + "shortName": "execute_sql" + }, + "description": "Sample for ExecuteSql", + "file": "spanner_v1_generated_spanner_execute_sql_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_ExecuteSql_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_execute_sql_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_v1.SpannerClient", + "shortName": "SpannerClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerClient.execute_sql", + "method": { + "fullName": "google.spanner.v1.Spanner.ExecuteSql", + "service": { + "fullName": "google.spanner.v1.Spanner", + 
"shortName": "Spanner" + }, + "shortName": "ExecuteSql" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.ExecuteSqlRequest" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.types.ResultSet", + "shortName": "execute_sql" + }, + "description": "Sample for ExecuteSql", + "file": "spanner_v1_generated_spanner_execute_sql_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_ExecuteSql_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_execute_sql_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient", + "shortName": "SpannerAsyncClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient.execute_streaming_sql", + "method": { + "fullName": "google.spanner.v1.Spanner.ExecuteStreamingSql", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "ExecuteStreamingSql" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.ExecuteSqlRequest" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "Iterable[google.cloud.spanner_v1.types.PartialResultSet]", + "shortName": "execute_streaming_sql" + }, + "description": "Sample for ExecuteStreamingSql", + "file": "spanner_v1_generated_spanner_execute_streaming_sql_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_ExecuteStreamingSql_async", + "segments": [ + { + "end": 53, + "start": 27, + "type": "FULL" + }, + { + "end": 53, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 54, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_execute_streaming_sql_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_v1.SpannerClient", + "shortName": "SpannerClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerClient.execute_streaming_sql", + "method": { + "fullName": "google.spanner.v1.Spanner.ExecuteStreamingSql", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "ExecuteStreamingSql" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.ExecuteSqlRequest" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": 
"Iterable[google.cloud.spanner_v1.types.PartialResultSet]", + "shortName": "execute_streaming_sql" + }, + "description": "Sample for ExecuteStreamingSql", + "file": "spanner_v1_generated_spanner_execute_streaming_sql_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_ExecuteStreamingSql_sync", + "segments": [ + { + "end": 53, + "start": 27, + "type": "FULL" + }, + { + "end": 53, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 54, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_execute_streaming_sql_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient", + "shortName": "SpannerAsyncClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient.get_session", + "method": { + "fullName": "google.spanner.v1.Spanner.GetSession", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "GetSession" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.GetSessionRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.types.Session", + "shortName": "get_session" + }, + "description": "Sample for GetSession", + "file": "spanner_v1_generated_spanner_get_session_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_GetSession_async", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + }, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_get_session_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_v1.SpannerClient", + "shortName": "SpannerClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerClient.get_session", + "method": { + "fullName": "google.spanner.v1.Spanner.GetSession", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "GetSession" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.GetSessionRequest" + }, + { + "name": "name", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.types.Session", + "shortName": "get_session" + }, + "description": "Sample for GetSession", + "file": "spanner_v1_generated_spanner_get_session_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_GetSession_sync", + "segments": [ + { + "end": 51, + "start": 27, + "type": "FULL" + 
}, + { + "end": 51, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 52, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_get_session_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient", + "shortName": "SpannerAsyncClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient.list_sessions", + "method": { + "fullName": "google.spanner.v1.Spanner.ListSessions", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "ListSessions" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.ListSessionsRequest" + }, + { + "name": "database", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.services.spanner.pagers.ListSessionsAsyncPager", + "shortName": "list_sessions" + }, + "description": "Sample for ListSessions", + "file": "spanner_v1_generated_spanner_list_sessions_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_ListSessions_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_list_sessions_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_v1.SpannerClient", + "shortName": "SpannerClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerClient.list_sessions", + "method": { + "fullName": "google.spanner.v1.Spanner.ListSessions", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "ListSessions" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.ListSessionsRequest" + }, + { + "name": "database", + "type": "str" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.services.spanner.pagers.ListSessionsPager", + "shortName": "list_sessions" + }, + "description": "Sample for ListSessions", + "file": "spanner_v1_generated_spanner_list_sessions_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_ListSessions_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 45, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 48, + "start": 46, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 49, + "type": 
"RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_list_sessions_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient", + "shortName": "SpannerAsyncClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient.partition_query", + "method": { + "fullName": "google.spanner.v1.Spanner.PartitionQuery", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "PartitionQuery" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.PartitionQueryRequest" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.types.PartitionResponse", + "shortName": "partition_query" + }, + "description": "Sample for PartitionQuery", + "file": "spanner_v1_generated_spanner_partition_query_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_PartitionQuery_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_partition_query_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_v1.SpannerClient", + "shortName": "SpannerClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerClient.partition_query", + "method": { + "fullName": "google.spanner.v1.Spanner.PartitionQuery", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "PartitionQuery" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.PartitionQueryRequest" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.types.PartitionResponse", + "shortName": "partition_query" + }, + "description": "Sample for PartitionQuery", + "file": "spanner_v1_generated_spanner_partition_query_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_PartitionQuery_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_partition_query_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient", + "shortName": "SpannerAsyncClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient.partition_read", + "method": { + "fullName": "google.spanner.v1.Spanner.PartitionRead", + 
"service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "PartitionRead" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.PartitionReadRequest" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.types.PartitionResponse", + "shortName": "partition_read" + }, + "description": "Sample for PartitionRead", + "file": "spanner_v1_generated_spanner_partition_read_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_PartitionRead_async", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_partition_read_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_v1.SpannerClient", + "shortName": "SpannerClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerClient.partition_read", + "method": { + "fullName": "google.spanner.v1.Spanner.PartitionRead", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "PartitionRead" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.PartitionReadRequest" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.types.PartitionResponse", + "shortName": "partition_read" + }, + "description": "Sample for PartitionRead", + "file": "spanner_v1_generated_spanner_partition_read_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_PartitionRead_sync", + "segments": [ + { + "end": 52, + "start": 27, + "type": "FULL" + }, + { + "end": 52, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 49, + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 53, + "start": 50, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_partition_read_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient", + "shortName": "SpannerAsyncClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient.read", + "method": { + "fullName": "google.spanner.v1.Spanner.Read", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "Read" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.ReadRequest" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": 
"google.cloud.spanner_v1.types.ResultSet", + "shortName": "read" + }, + "description": "Sample for Read", + "file": "spanner_v1_generated_spanner_read_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_Read_async", + "segments": [ + { + "end": 53, + "start": 27, + "type": "FULL" + }, + { + "end": 53, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 47, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 50, + "start": 48, + "type": "REQUEST_EXECUTION" + }, + { + "end": 54, + "start": 51, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_read_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_v1.SpannerClient", + "shortName": "SpannerClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerClient.read", + "method": { + "fullName": "google.spanner.v1.Spanner.Read", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "Read" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.ReadRequest" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "google.cloud.spanner_v1.types.ResultSet", + "shortName": "read" + }, + "description": "Sample for Read", + "file": "spanner_v1_generated_spanner_read_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_Read_sync", + "segments": [ + { + "end": 53, + "start": 27, + "type": "FULL" + }, + { + "end": 53, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 47, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 50, + "start": 48, + "type": "REQUEST_EXECUTION" + }, + { + "end": 54, + "start": 51, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_read_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient", + "shortName": "SpannerAsyncClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient.rollback", + "method": { + "fullName": "google.spanner.v1.Spanner.Rollback", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "Rollback" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.RollbackRequest" + }, + { + "name": "session", + "type": "str" + }, + { + "name": "transaction_id", + "type": "bytes" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "shortName": "rollback" + }, + "description": "Sample for Rollback", + "file": "spanner_v1_generated_spanner_rollback_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_Rollback_async", + "segments": [ + { + "end": 50, + "start": 27, + "type": "FULL" + }, + { + "end": 50, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + 
"start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 51, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_rollback_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_v1.SpannerClient", + "shortName": "SpannerClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerClient.rollback", + "method": { + "fullName": "google.spanner.v1.Spanner.Rollback", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "Rollback" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.RollbackRequest" + }, + { + "name": "session", + "type": "str" + }, + { + "name": "transaction_id", + "type": "bytes" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "shortName": "rollback" + }, + "description": "Sample for Rollback", + "file": "spanner_v1_generated_spanner_rollback_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_Rollback_sync", + "segments": [ + { + "end": 50, + "start": 27, + "type": "FULL" + }, + { + "end": 50, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 46, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "start": 47, + "type": "REQUEST_EXECUTION" + }, + { + "end": 51, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_rollback_sync.py" + }, + { + "canonical": true, + "clientMethod": { + "async": true, + "client": { + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient", + "shortName": "SpannerAsyncClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerAsyncClient.streaming_read", + "method": { + "fullName": "google.spanner.v1.Spanner.StreamingRead", + "service": { + "fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "StreamingRead" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.ReadRequest" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "Iterable[google.cloud.spanner_v1.types.PartialResultSet]", + "shortName": "streaming_read" + }, + "description": "Sample for StreamingRead", + "file": "spanner_v1_generated_spanner_streaming_read_async.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_StreamingRead_async", + "segments": [ + { + "end": 54, + "start": 27, + "type": "FULL" + }, + { + "end": 54, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 47, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 50, + "start": 48, + "type": "REQUEST_EXECUTION" + }, + { + "end": 55, + "start": 51, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_streaming_read_async.py" + }, + { + "canonical": true, + "clientMethod": { + "client": { + "fullName": "google.cloud.spanner_v1.SpannerClient", + "shortName": "SpannerClient" + }, + "fullName": "google.cloud.spanner_v1.SpannerClient.streaming_read", + "method": { + "fullName": "google.spanner.v1.Spanner.StreamingRead", + "service": { + 
"fullName": "google.spanner.v1.Spanner", + "shortName": "Spanner" + }, + "shortName": "StreamingRead" + }, + "parameters": [ + { + "name": "request", + "type": "google.cloud.spanner_v1.types.ReadRequest" + }, + { + "name": "retry", + "type": "google.api_core.retry.Retry" + }, + { + "name": "timeout", + "type": "float" + }, + { + "name": "metadata", + "type": "Sequence[Tuple[str, Union[str, bytes]]]" + } + ], + "resultType": "Iterable[google.cloud.spanner_v1.types.PartialResultSet]", + "shortName": "streaming_read" + }, + "description": "Sample for StreamingRead", + "file": "spanner_v1_generated_spanner_streaming_read_sync.py", + "language": "PYTHON", + "origin": "API_DEFINITION", + "regionTag": "spanner_v1_generated_Spanner_StreamingRead_sync", + "segments": [ + { + "end": 54, + "start": 27, + "type": "FULL" + }, + { + "end": 54, + "start": 27, + "type": "SHORT" + }, + { + "end": 40, + "start": 38, + "type": "CLIENT_INITIALIZATION" + }, + { + "end": 47, + "start": 41, + "type": "REQUEST_INITIALIZATION" + }, + { + "end": 50, + "start": 48, + "type": "REQUEST_EXECUTION" + }, + { + "end": 55, + "start": 51, + "type": "RESPONSE_HANDLING" + } + ], + "title": "spanner_v1_generated_spanner_streaming_read_sync.py" + } + ] +} diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_add_split_points_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_add_split_points_async.py new file mode 100644 index 0000000000..ff6fcfe598 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_add_split_points_async.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for AddSplitPoints +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_AddSplitPoints_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +async def sample_add_split_points(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.AddSplitPointsRequest( + database="database_value", + ) + + # Make the request + response = await client.add_split_points(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_AddSplitPoints_async] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_add_split_points_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_add_split_points_sync.py new file mode 100644 index 0000000000..3819bbe986 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_add_split_points_sync.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for AddSplitPoints +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_AddSplitPoints_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +def sample_add_split_points(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.AddSplitPointsRequest( + database="database_value", + ) + + # Make the request + response = client.add_split_points(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_AddSplitPoints_sync] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_copy_backup_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_copy_backup_async.py new file mode 100644 index 0000000000..d885947bb5 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_copy_backup_async.py @@ -0,0 +1,58 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for CopyBackup +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_CopyBackup_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +async def sample_copy_backup(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.CopyBackupRequest( + parent="parent_value", + backup_id="backup_id_value", + source_backup="source_backup_value", + ) + + # Make the request + operation = client.copy_backup(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_CopyBackup_async] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_copy_backup_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_copy_backup_sync.py new file mode 100644 index 0000000000..a571e058c9 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_copy_backup_sync.py @@ -0,0 +1,58 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for CopyBackup +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_CopyBackup_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +def sample_copy_backup(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.CopyBackupRequest( + parent="parent_value", + backup_id="backup_id_value", + source_backup="source_backup_value", + ) + + # Make the request + operation = client.copy_backup(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_CopyBackup_sync] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_create_backup_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_create_backup_async.py new file mode 100644 index 0000000000..2ad8881f54 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_create_backup_async.py @@ -0,0 +1,57 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for CreateBackup +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_CreateBackup_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +async def sample_create_backup(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.CreateBackupRequest( + parent="parent_value", + backup_id="backup_id_value", + ) + + # Make the request + operation = client.create_backup(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_CreateBackup_async] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_create_backup_schedule_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_create_backup_schedule_async.py new file mode 100644 index 0000000000..efdcc2457e --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_create_backup_schedule_async.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for CreateBackupSchedule +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_CreateBackupSchedule_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +async def sample_create_backup_schedule(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.CreateBackupScheduleRequest( + parent="parent_value", + backup_schedule_id="backup_schedule_id_value", + ) + + # Make the request + response = await client.create_backup_schedule(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_CreateBackupSchedule_async] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_create_backup_schedule_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_create_backup_schedule_sync.py new file mode 100644 index 0000000000..60d4b50c3b --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_create_backup_schedule_sync.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for CreateBackupSchedule +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_CreateBackupSchedule_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +def sample_create_backup_schedule(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.CreateBackupScheduleRequest( + parent="parent_value", + backup_schedule_id="backup_schedule_id_value", + ) + + # Make the request + response = client.create_backup_schedule(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_CreateBackupSchedule_sync] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_create_backup_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_create_backup_sync.py new file mode 100644 index 0000000000..02b9d1f0e7 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_create_backup_sync.py @@ -0,0 +1,57 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for CreateBackup +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_CreateBackup_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +def sample_create_backup(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.CreateBackupRequest( + parent="parent_value", + backup_id="backup_id_value", + ) + + # Make the request + operation = client.create_backup(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_CreateBackup_sync] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_create_database_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_create_database_async.py new file mode 100644 index 0000000000..47399a8d40 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_create_database_async.py @@ -0,0 +1,57 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for CreateDatabase +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_CreateDatabase_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +async def sample_create_database(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.CreateDatabaseRequest( + parent="parent_value", + create_statement="create_statement_value", + ) + + # Make the request + operation = client.create_database(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_CreateDatabase_async] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_create_database_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_create_database_sync.py new file mode 100644 index 0000000000..6f112cd8a7 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_create_database_sync.py @@ -0,0 +1,57 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for CreateDatabase +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_CreateDatabase_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +def sample_create_database(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.CreateDatabaseRequest( + parent="parent_value", + create_statement="create_statement_value", + ) + + # Make the request + operation = client.create_database(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_CreateDatabase_sync] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_delete_backup_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_delete_backup_async.py new file mode 100644 index 0000000000..ab10785105 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_delete_backup_async.py @@ -0,0 +1,50 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for DeleteBackup +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_DeleteBackup_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +async def sample_delete_backup(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.DeleteBackupRequest( + name="name_value", + ) + + # Make the request + await client.delete_backup(request=request) + + +# [END spanner_v1_generated_DatabaseAdmin_DeleteBackup_async] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_delete_backup_schedule_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_delete_backup_schedule_async.py new file mode 100644 index 0000000000..591d45cb10 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_delete_backup_schedule_async.py @@ -0,0 +1,50 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for DeleteBackupSchedule +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_DeleteBackupSchedule_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +async def sample_delete_backup_schedule(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.DeleteBackupScheduleRequest( + name="name_value", + ) + + # Make the request + await client.delete_backup_schedule(request=request) + + +# [END spanner_v1_generated_DatabaseAdmin_DeleteBackupSchedule_async] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_delete_backup_schedule_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_delete_backup_schedule_sync.py new file mode 100644 index 0000000000..720417ba65 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_delete_backup_schedule_sync.py @@ -0,0 +1,50 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for DeleteBackupSchedule +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_DeleteBackupSchedule_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +def sample_delete_backup_schedule(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.DeleteBackupScheduleRequest( + name="name_value", + ) + + # Make the request + client.delete_backup_schedule(request=request) + + +# [END spanner_v1_generated_DatabaseAdmin_DeleteBackupSchedule_sync] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_delete_backup_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_delete_backup_sync.py new file mode 100644 index 0000000000..736dc56a23 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_delete_backup_sync.py @@ -0,0 +1,50 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for DeleteBackup +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_DeleteBackup_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +def sample_delete_backup(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.DeleteBackupRequest( + name="name_value", + ) + + # Make the request + client.delete_backup(request=request) + + +# [END spanner_v1_generated_DatabaseAdmin_DeleteBackup_sync] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_drop_database_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_drop_database_async.py new file mode 100644 index 0000000000..15f279b72d --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_drop_database_async.py @@ -0,0 +1,50 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for DropDatabase +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_DropDatabase_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +async def sample_drop_database(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.DropDatabaseRequest( + database="database_value", + ) + + # Make the request + await client.drop_database(request=request) + + +# [END spanner_v1_generated_DatabaseAdmin_DropDatabase_async] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_drop_database_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_drop_database_sync.py new file mode 100644 index 0000000000..f218cabd83 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_drop_database_sync.py @@ -0,0 +1,50 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for DropDatabase +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_DropDatabase_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +def sample_drop_database(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.DropDatabaseRequest( + database="database_value", + ) + + # Make the request + client.drop_database(request=request) + + +# [END spanner_v1_generated_DatabaseAdmin_DropDatabase_sync] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_get_backup_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_get_backup_async.py new file mode 100644 index 0000000000..58b93a119a --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_get_backup_async.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for GetBackup +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_GetBackup_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +async def sample_get_backup(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.GetBackupRequest( + name="name_value", + ) + + # Make the request + response = await client.get_backup(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_GetBackup_async] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_get_backup_schedule_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_get_backup_schedule_async.py new file mode 100644 index 0000000000..5a37eec975 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_get_backup_schedule_async.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for GetBackupSchedule +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_GetBackupSchedule_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +async def sample_get_backup_schedule(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.GetBackupScheduleRequest( + name="name_value", + ) + + # Make the request + response = await client.get_backup_schedule(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_GetBackupSchedule_async] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_get_backup_schedule_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_get_backup_schedule_sync.py new file mode 100644 index 0000000000..4006cac333 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_get_backup_schedule_sync.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for GetBackupSchedule +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_GetBackupSchedule_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +def sample_get_backup_schedule(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.GetBackupScheduleRequest( + name="name_value", + ) + + # Make the request + response = client.get_backup_schedule(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_GetBackupSchedule_sync] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_get_backup_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_get_backup_sync.py new file mode 100644 index 0000000000..16cffcd78d --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_get_backup_sync.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for GetBackup +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_GetBackup_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +def sample_get_backup(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.GetBackupRequest( + name="name_value", + ) + + # Make the request + response = client.get_backup(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_GetBackup_sync] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_get_database_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_get_database_async.py new file mode 100644 index 0000000000..fd8621c27b --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_get_database_async.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for GetDatabase +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_GetDatabase_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +async def sample_get_database(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.GetDatabaseRequest( + name="name_value", + ) + + # Make the request + response = await client.get_database(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_GetDatabase_async] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_get_database_ddl_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_get_database_ddl_async.py new file mode 100644 index 0000000000..8e84b21f78 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_get_database_ddl_async.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for GetDatabaseDdl +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_GetDatabaseDdl_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +async def sample_get_database_ddl(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.GetDatabaseDdlRequest( + database="database_value", + ) + + # Make the request + response = await client.get_database_ddl(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_GetDatabaseDdl_async] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_get_database_ddl_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_get_database_ddl_sync.py new file mode 100644 index 0000000000..495b557a55 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_get_database_ddl_sync.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for GetDatabaseDdl +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_GetDatabaseDdl_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +def sample_get_database_ddl(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.GetDatabaseDdlRequest( + database="database_value", + ) + + # Make the request + response = client.get_database_ddl(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_GetDatabaseDdl_sync] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_get_database_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_get_database_sync.py new file mode 100644 index 0000000000..ab729bb9e3 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_get_database_sync.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for GetDatabase +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_GetDatabase_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +def sample_get_database(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.GetDatabaseRequest( + name="name_value", + ) + + # Make the request + response = client.get_database(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_GetDatabase_sync] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_get_iam_policy_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_get_iam_policy_async.py new file mode 100644 index 0000000000..d5d75de78b --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_get_iam_policy_async.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for GetIamPolicy +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_GetIamPolicy_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 +from google.iam.v1 import iam_policy_pb2 # type: ignore + + +async def sample_get_iam_policy(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = iam_policy_pb2.GetIamPolicyRequest( + resource="resource_value", + ) + + # Make the request + response = await client.get_iam_policy(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_GetIamPolicy_async] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_get_iam_policy_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_get_iam_policy_sync.py new file mode 100644 index 0000000000..75e0b48b1b --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_get_iam_policy_sync.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for GetIamPolicy +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_GetIamPolicy_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+from google.iam.v1 import iam_policy_pb2  # type: ignore
+
+
+def sample_get_iam_policy():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminClient()
+
+    # Initialize request argument(s)
+    request = iam_policy_pb2.GetIamPolicyRequest(
+        resource="resource_value",
+    )
+
+    # Make the request
+    response = client.get_iam_policy(request=request)
+
+    # Handle the response
+    print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_GetIamPolicy_sync]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_internal_update_graph_operation_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_internal_update_graph_operation_async.py
new file mode 100644
index 0000000000..556205a0aa
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_internal_update_graph_operation_async.py
@@ -0,0 +1,54 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for InternalUpdateGraphOperation
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_InternalUpdateGraphOperation_async]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+
+
+async def sample_internal_update_graph_operation():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminAsyncClient()
+
+    # Initialize request argument(s)
+    request = spanner_admin_database_v1.InternalUpdateGraphOperationRequest(
+        database="database_value",
+        operation_id="operation_id_value",
+        vm_identity_token="vm_identity_token_value",
+    )
+
+    # Make the request
+    response = await client.internal_update_graph_operation(request=request)
+
+    # Handle the response
+    print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_InternalUpdateGraphOperation_async]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_internal_update_graph_operation_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_internal_update_graph_operation_sync.py
new file mode 100644
index 0000000000..46f1a3c88f
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_internal_update_graph_operation_sync.py
@@ -0,0 +1,54 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for InternalUpdateGraphOperation
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_InternalUpdateGraphOperation_sync]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+
+
+def sample_internal_update_graph_operation():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminClient()
+
+    # Initialize request argument(s)
+    request = spanner_admin_database_v1.InternalUpdateGraphOperationRequest(
+        database="database_value",
+        operation_id="operation_id_value",
+        vm_identity_token="vm_identity_token_value",
+    )
+
+    # Make the request
+    response = client.internal_update_graph_operation(request=request)
+
+    # Handle the response
+    print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_InternalUpdateGraphOperation_sync]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_list_backup_operations_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_list_backup_operations_async.py
new file mode 100644
index 0000000000..a56ec9f80e
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_list_backup_operations_async.py
@@ -0,0 +1,53 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for ListBackupOperations
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_ListBackupOperations_async]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+
+
+async def sample_list_backup_operations():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminAsyncClient()
+
+    # Initialize request argument(s)
+    request = spanner_admin_database_v1.ListBackupOperationsRequest(
+        parent="parent_value",
+    )
+
+    # Make the request
+    page_result = client.list_backup_operations(request=request)
+
+    # Handle the response
+    async for response in page_result:
+        print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_ListBackupOperations_async]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_list_backup_operations_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_list_backup_operations_sync.py
new file mode 100644
index 0000000000..6383e1b247
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_list_backup_operations_sync.py
@@ -0,0 +1,53 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for ListBackupOperations
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_ListBackupOperations_sync]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+
+
+def sample_list_backup_operations():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminClient()
+
+    # Initialize request argument(s)
+    request = spanner_admin_database_v1.ListBackupOperationsRequest(
+        parent="parent_value",
+    )
+
+    # Make the request
+    page_result = client.list_backup_operations(request=request)
+
+    # Handle the response
+    for response in page_result:
+        print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_ListBackupOperations_sync]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_list_backup_schedules_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_list_backup_schedules_async.py
new file mode 100644
index 0000000000..25ac53891a
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_list_backup_schedules_async.py
@@ -0,0 +1,53 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for ListBackupSchedules
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_ListBackupSchedules_async]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+
+
+async def sample_list_backup_schedules():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminAsyncClient()
+
+    # Initialize request argument(s)
+    request = spanner_admin_database_v1.ListBackupSchedulesRequest(
+        parent="parent_value",
+    )
+
+    # Make the request
+    page_result = client.list_backup_schedules(request=request)
+
+    # Handle the response
+    async for response in page_result:
+        print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_ListBackupSchedules_async]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_list_backup_schedules_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_list_backup_schedules_sync.py
new file mode 100644
index 0000000000..89cf82d278
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_list_backup_schedules_sync.py
@@ -0,0 +1,53 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for ListBackupSchedules
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_ListBackupSchedules_sync]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+
+
+def sample_list_backup_schedules():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminClient()
+
+    # Initialize request argument(s)
+    request = spanner_admin_database_v1.ListBackupSchedulesRequest(
+        parent="parent_value",
+    )
+
+    # Make the request
+    page_result = client.list_backup_schedules(request=request)
+
+    # Handle the response
+    for response in page_result:
+        print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_ListBackupSchedules_sync]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_list_backups_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_list_backups_async.py
new file mode 100644
index 0000000000..140e519e07
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_list_backups_async.py
@@ -0,0 +1,53 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for ListBackups
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_ListBackups_async]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+
+
+async def sample_list_backups():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminAsyncClient()
+
+    # Initialize request argument(s)
+    request = spanner_admin_database_v1.ListBackupsRequest(
+        parent="parent_value",
+    )
+
+    # Make the request
+    page_result = client.list_backups(request=request)
+
+    # Handle the response
+    async for response in page_result:
+        print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_ListBackups_async]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_list_backups_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_list_backups_sync.py
new file mode 100644
index 0000000000..9f04036f74
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_list_backups_sync.py
@@ -0,0 +1,53 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for ListBackups
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_ListBackups_sync]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+
+
+def sample_list_backups():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminClient()
+
+    # Initialize request argument(s)
+    request = spanner_admin_database_v1.ListBackupsRequest(
+        parent="parent_value",
+    )
+
+    # Make the request
+    page_result = client.list_backups(request=request)
+
+    # Handle the response
+    for response in page_result:
+        print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_ListBackups_sync]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_list_database_operations_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_list_database_operations_async.py
new file mode 100644
index 0000000000..3bc614b232
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_list_database_operations_async.py
@@ -0,0 +1,53 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for ListDatabaseOperations
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_ListDatabaseOperations_async]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+
+
+async def sample_list_database_operations():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminAsyncClient()
+
+    # Initialize request argument(s)
+    request = spanner_admin_database_v1.ListDatabaseOperationsRequest(
+        parent="parent_value",
+    )
+
+    # Make the request
+    page_result = client.list_database_operations(request=request)
+
+    # Handle the response
+    async for response in page_result:
+        print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_ListDatabaseOperations_async]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_list_database_operations_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_list_database_operations_sync.py
new file mode 100644
index 0000000000..3d4dc965a9
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_list_database_operations_sync.py
@@ -0,0 +1,53 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for ListDatabaseOperations
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_ListDatabaseOperations_sync]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+
+
+def sample_list_database_operations():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminClient()
+
+    # Initialize request argument(s)
+    request = spanner_admin_database_v1.ListDatabaseOperationsRequest(
+        parent="parent_value",
+    )
+
+    # Make the request
+    page_result = client.list_database_operations(request=request)
+
+    # Handle the response
+    for response in page_result:
+        print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_ListDatabaseOperations_sync]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_list_database_roles_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_list_database_roles_async.py
new file mode 100644
index 0000000000..46ec91ce89
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_list_database_roles_async.py
@@ -0,0 +1,53 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for ListDatabaseRoles
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_ListDatabaseRoles_async]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+
+
+async def sample_list_database_roles():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminAsyncClient()
+
+    # Initialize request argument(s)
+    request = spanner_admin_database_v1.ListDatabaseRolesRequest(
+        parent="parent_value",
+    )
+
+    # Make the request
+    page_result = client.list_database_roles(request=request)
+
+    # Handle the response
+    async for response in page_result:
+        print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_ListDatabaseRoles_async]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_list_database_roles_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_list_database_roles_sync.py
new file mode 100644
index 0000000000..d39e4759dd
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_list_database_roles_sync.py
@@ -0,0 +1,53 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for ListDatabaseRoles
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_ListDatabaseRoles_sync]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+
+
+def sample_list_database_roles():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminClient()
+
+    # Initialize request argument(s)
+    request = spanner_admin_database_v1.ListDatabaseRolesRequest(
+        parent="parent_value",
+    )
+
+    # Make the request
+    page_result = client.list_database_roles(request=request)
+
+    # Handle the response
+    for response in page_result:
+        print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_ListDatabaseRoles_sync]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_list_databases_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_list_databases_async.py
new file mode 100644
index 0000000000..586dfa56f1
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_list_databases_async.py
@@ -0,0 +1,53 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for ListDatabases
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_ListDatabases_async]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+
+
+async def sample_list_databases():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminAsyncClient()
+
+    # Initialize request argument(s)
+    request = spanner_admin_database_v1.ListDatabasesRequest(
+        parent="parent_value",
+    )
+
+    # Make the request
+    page_result = client.list_databases(request=request)
+
+    # Handle the response
+    async for response in page_result:
+        print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_ListDatabases_async]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_list_databases_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_list_databases_sync.py
new file mode 100644
index 0000000000..e6ef221af6
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_list_databases_sync.py
@@ -0,0 +1,53 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for ListDatabases
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_ListDatabases_sync]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+
+
+def sample_list_databases():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminClient()
+
+    # Initialize request argument(s)
+    request = spanner_admin_database_v1.ListDatabasesRequest(
+        parent="parent_value",
+    )
+
+    # Make the request
+    page_result = client.list_databases(request=request)
+
+    # Handle the response
+    for response in page_result:
+        print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_ListDatabases_sync]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_restore_database_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_restore_database_async.py
new file mode 100644
index 0000000000..384c063c61
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_restore_database_async.py
@@ -0,0 +1,58 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for RestoreDatabase
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_RestoreDatabase_async]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+
+
+async def sample_restore_database():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminAsyncClient()
+
+    # Initialize request argument(s)
+    request = spanner_admin_database_v1.RestoreDatabaseRequest(
+        backup="backup_value",
+        parent="parent_value",
+        database_id="database_id_value",
+    )
+
+    # Make the request
+    operation = client.restore_database(request=request)
+
+    print("Waiting for operation to complete...")
+
+    response = (await operation).result()
+
+    # Handle the response
+    print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_RestoreDatabase_async]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_restore_database_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_restore_database_sync.py
new file mode 100644
index 0000000000..a327a8ae13
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_restore_database_sync.py
@@ -0,0 +1,58 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for RestoreDatabase
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_RestoreDatabase_sync]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+
+
+def sample_restore_database():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminClient()
+
+    # Initialize request argument(s)
+    request = spanner_admin_database_v1.RestoreDatabaseRequest(
+        backup="backup_value",
+        parent="parent_value",
+        database_id="database_id_value",
+    )
+
+    # Make the request
+    operation = client.restore_database(request=request)
+
+    print("Waiting for operation to complete...")
+
+    response = operation.result()
+
+    # Handle the response
+    print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_RestoreDatabase_sync]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_set_iam_policy_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_set_iam_policy_async.py
new file mode 100644
index 0000000000..edade4c950
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_set_iam_policy_async.py
@@ -0,0 +1,53 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for SetIamPolicy
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_SetIamPolicy_async]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+from google.iam.v1 import iam_policy_pb2  # type: ignore
+
+
+async def sample_set_iam_policy():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminAsyncClient()
+
+    # Initialize request argument(s)
+    request = iam_policy_pb2.SetIamPolicyRequest(
+        resource="resource_value",
+    )
+
+    # Make the request
+    response = await client.set_iam_policy(request=request)
+
+    # Handle the response
+    print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_SetIamPolicy_async]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_set_iam_policy_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_set_iam_policy_sync.py
new file mode 100644
index 0000000000..28a6746f4a
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_set_iam_policy_sync.py
@@ -0,0 +1,53 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for SetIamPolicy
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_SetIamPolicy_sync]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+from google.iam.v1 import iam_policy_pb2  # type: ignore
+
+
+def sample_set_iam_policy():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminClient()
+
+    # Initialize request argument(s)
+    request = iam_policy_pb2.SetIamPolicyRequest(
+        resource="resource_value",
+    )
+
+    # Make the request
+    response = client.set_iam_policy(request=request)
+
+    # Handle the response
+    print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_SetIamPolicy_sync]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_test_iam_permissions_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_test_iam_permissions_async.py
new file mode 100644
index 0000000000..0e6ea91cb3
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_test_iam_permissions_async.py
@@ -0,0 +1,54 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for TestIamPermissions
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_TestIamPermissions_async]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+from google.iam.v1 import iam_policy_pb2  # type: ignore
+
+
+async def sample_test_iam_permissions():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminAsyncClient()
+
+    # Initialize request argument(s)
+    request = iam_policy_pb2.TestIamPermissionsRequest(
+        resource="resource_value",
+        permissions=['permissions_value1', 'permissions_value2'],
+    )
+
+    # Make the request
+    response = await client.test_iam_permissions(request=request)
+
+    # Handle the response
+    print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_TestIamPermissions_async]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_test_iam_permissions_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_test_iam_permissions_sync.py
new file mode 100644
index 0000000000..3fd0316dc1
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_test_iam_permissions_sync.py
@@ -0,0 +1,54 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for TestIamPermissions
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_TestIamPermissions_sync]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+from google.iam.v1 import iam_policy_pb2  # type: ignore
+
+
+def sample_test_iam_permissions():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminClient()
+
+    # Initialize request argument(s)
+    request = iam_policy_pb2.TestIamPermissionsRequest(
+        resource="resource_value",
+        permissions=['permissions_value1', 'permissions_value2'],
+    )
+
+    # Make the request
+    response = client.test_iam_permissions(request=request)
+
+    # Handle the response
+    print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_TestIamPermissions_sync]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_update_backup_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_update_backup_async.py
new file mode 100644
index 0000000000..95fa2a63f6
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_update_backup_async.py
@@ -0,0 +1,51 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for UpdateBackup
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_UpdateBackup_async]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+
+
+async def sample_update_backup():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminAsyncClient()
+
+    # Initialize request argument(s)
+    request = spanner_admin_database_v1.UpdateBackupRequest(
+    )
+
+    # Make the request
+    response = await client.update_backup(request=request)
+
+    # Handle the response
+    print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_UpdateBackup_async]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_update_backup_schedule_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_update_backup_schedule_async.py
new file mode 100644
index 0000000000..de17dfc86e
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_update_backup_schedule_async.py
@@ -0,0 +1,51 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for UpdateBackupSchedule
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_UpdateBackupSchedule_async]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+
+
+async def sample_update_backup_schedule():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminAsyncClient()
+
+    # Initialize request argument(s)
+    request = spanner_admin_database_v1.UpdateBackupScheduleRequest(
+    )
+
+    # Make the request
+    response = await client.update_backup_schedule(request=request)
+
+    # Handle the response
+    print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_UpdateBackupSchedule_async]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_update_backup_schedule_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_update_backup_schedule_sync.py
new file mode 100644
index 0000000000..4ef64a0673
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_update_backup_schedule_sync.py
@@ -0,0 +1,51 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for UpdateBackupSchedule
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_UpdateBackupSchedule_sync]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+
+
+def sample_update_backup_schedule():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminClient()
+
+    # Initialize request argument(s)
+    request = spanner_admin_database_v1.UpdateBackupScheduleRequest(
+    )
+
+    # Make the request
+    response = client.update_backup_schedule(request=request)
+
+    # Handle the response
+    print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_UpdateBackupSchedule_sync]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_update_backup_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_update_backup_sync.py
new file mode 100644
index 0000000000..9dbb0148dc
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_update_backup_sync.py
@@ -0,0 +1,51 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for UpdateBackup
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_UpdateBackup_sync]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_admin_database_v1
+
+
+def sample_update_backup():
+    # Create a client
+    client = spanner_admin_database_v1.DatabaseAdminClient()
+
+    # Initialize request argument(s)
+    request = spanner_admin_database_v1.UpdateBackupRequest(
+    )
+
+    # Make the request
+    response = client.update_backup(request=request)
+
+    # Handle the response
+    print(response)
+
+# [END spanner_v1_generated_DatabaseAdmin_UpdateBackup_sync]
diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_update_database_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_update_database_async.py
new file mode 100644
index 0000000000..d5588c3036
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_database_admin_update_database_async.py
@@ -0,0 +1,59 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for UpdateDatabase
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner-admin-database
+
+
+# [START spanner_v1_generated_DatabaseAdmin_UpdateDatabase_async]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +async def sample_update_database(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + database = spanner_admin_database_v1.Database() + database.name = "name_value" + + request = spanner_admin_database_v1.UpdateDatabaseRequest( + database=database, + ) + + # Make the request + operation = client.update_database(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_UpdateDatabase_async] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_update_database_ddl_async.py b/samples/generated_samples/spanner_v1_generated_database_admin_update_database_ddl_async.py new file mode 100644 index 0000000000..ad98e2da9c --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_update_database_ddl_async.py @@ -0,0 +1,57 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for UpdateDatabaseDdl +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_UpdateDatabaseDdl_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +async def sample_update_database_ddl(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.UpdateDatabaseDdlRequest( + database="database_value", + statements=['statements_value1', 'statements_value2'], + ) + + # Make the request + operation = client.update_database_ddl(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_UpdateDatabaseDdl_async] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_update_database_ddl_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_update_database_ddl_sync.py new file mode 100644 index 0000000000..73297524b9 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_update_database_ddl_sync.py @@ -0,0 +1,57 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for UpdateDatabaseDdl +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_UpdateDatabaseDdl_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +def sample_update_database_ddl(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + request = spanner_admin_database_v1.UpdateDatabaseDdlRequest( + database="database_value", + statements=['statements_value1', 'statements_value2'], + ) + + # Make the request + operation = client.update_database_ddl(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_UpdateDatabaseDdl_sync] diff --git a/samples/generated_samples/spanner_v1_generated_database_admin_update_database_sync.py b/samples/generated_samples/spanner_v1_generated_database_admin_update_database_sync.py new file mode 100644 index 0000000000..62ed40bc84 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_database_admin_update_database_sync.py @@ -0,0 +1,59 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for UpdateDatabase +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-database + + +# [START spanner_v1_generated_DatabaseAdmin_UpdateDatabase_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_database_v1 + + +def sample_update_database(): + # Create a client + client = spanner_admin_database_v1.DatabaseAdminClient() + + # Initialize request argument(s) + database = spanner_admin_database_v1.Database() + database.name = "name_value" + + request = spanner_admin_database_v1.UpdateDatabaseRequest( + database=database, + ) + + # Make the request + operation = client.update_database(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_DatabaseAdmin_UpdateDatabase_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_create_instance_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_create_instance_async.py new file mode 100644 index 0000000000..74bd640044 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_create_instance_async.py @@ -0,0 +1,63 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for CreateInstance +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_CreateInstance_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +async def sample_create_instance(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + instance = spanner_admin_instance_v1.Instance() + instance.name = "name_value" + instance.config = "config_value" + instance.display_name = "display_name_value" + + request = spanner_admin_instance_v1.CreateInstanceRequest( + parent="parent_value", + instance_id="instance_id_value", + instance=instance, + ) + + # Make the request + operation = client.create_instance(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_CreateInstance_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_create_instance_config_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_create_instance_config_async.py new file mode 100644 index 0000000000..c3f266e4c4 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_create_instance_config_async.py @@ -0,0 +1,57 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for CreateInstanceConfig +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_CreateInstanceConfig_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +async def sample_create_instance_config(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.CreateInstanceConfigRequest( + parent="parent_value", + instance_config_id="instance_config_id_value", + ) + + # Make the request + operation = client.create_instance_config(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_CreateInstanceConfig_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_create_instance_config_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_create_instance_config_sync.py new file mode 100644 index 0000000000..c5b7616534 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_create_instance_config_sync.py @@ -0,0 +1,57 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for CreateInstanceConfig +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_CreateInstanceConfig_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +def sample_create_instance_config(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.CreateInstanceConfigRequest( + parent="parent_value", + instance_config_id="instance_config_id_value", + ) + + # Make the request + operation = client.create_instance_config(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_CreateInstanceConfig_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_create_instance_partition_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_create_instance_partition_async.py new file mode 100644 index 0000000000..a22765f53f --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_create_instance_partition_async.py @@ -0,0 +1,64 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for CreateInstancePartition +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_CreateInstancePartition_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +async def sample_create_instance_partition(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + instance_partition = spanner_admin_instance_v1.InstancePartition() + instance_partition.node_count = 1070 + instance_partition.name = "name_value" + instance_partition.config = "config_value" + instance_partition.display_name = "display_name_value" + + request = spanner_admin_instance_v1.CreateInstancePartitionRequest( + parent="parent_value", + instance_partition_id="instance_partition_id_value", + instance_partition=instance_partition, + ) + + # Make the request + operation = client.create_instance_partition(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_CreateInstancePartition_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_create_instance_partition_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_create_instance_partition_sync.py new file mode 100644 index 0000000000..5b5f2e0e26 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_create_instance_partition_sync.py @@ -0,0 +1,64 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for CreateInstancePartition +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_CreateInstancePartition_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +def sample_create_instance_partition(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + instance_partition = spanner_admin_instance_v1.InstancePartition() + instance_partition.node_count = 1070 + instance_partition.name = "name_value" + instance_partition.config = "config_value" + instance_partition.display_name = "display_name_value" + + request = spanner_admin_instance_v1.CreateInstancePartitionRequest( + parent="parent_value", + instance_partition_id="instance_partition_id_value", + instance_partition=instance_partition, + ) + + # Make the request + operation = client.create_instance_partition(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_CreateInstancePartition_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_create_instance_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_create_instance_sync.py new file mode 100644 index 0000000000..f43c5016b5 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_create_instance_sync.py @@ -0,0 +1,63 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for CreateInstance +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_CreateInstance_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +def sample_create_instance(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + instance = spanner_admin_instance_v1.Instance() + instance.name = "name_value" + instance.config = "config_value" + instance.display_name = "display_name_value" + + request = spanner_admin_instance_v1.CreateInstanceRequest( + parent="parent_value", + instance_id="instance_id_value", + instance=instance, + ) + + # Make the request + operation = client.create_instance(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_CreateInstance_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_delete_instance_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_delete_instance_async.py new file mode 100644 index 0000000000..262da709aa --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_delete_instance_async.py @@ -0,0 +1,50 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for DeleteInstance +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_DeleteInstance_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +async def sample_delete_instance(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.DeleteInstanceRequest( + name="name_value", + ) + + # Make the request + await client.delete_instance(request=request) + + +# [END spanner_v1_generated_InstanceAdmin_DeleteInstance_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_delete_instance_config_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_delete_instance_config_async.py new file mode 100644 index 0000000000..df83d9e424 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_delete_instance_config_async.py @@ -0,0 +1,50 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for DeleteInstanceConfig +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_DeleteInstanceConfig_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +async def sample_delete_instance_config(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.DeleteInstanceConfigRequest( + name="name_value", + ) + + # Make the request + await client.delete_instance_config(request=request) + + +# [END spanner_v1_generated_InstanceAdmin_DeleteInstanceConfig_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_delete_instance_config_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_delete_instance_config_sync.py new file mode 100644 index 0000000000..9a9c4d7ca1 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_delete_instance_config_sync.py @@ -0,0 +1,50 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for DeleteInstanceConfig +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_DeleteInstanceConfig_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +def sample_delete_instance_config(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.DeleteInstanceConfigRequest( + name="name_value", + ) + + # Make the request + client.delete_instance_config(request=request) + + +# [END spanner_v1_generated_InstanceAdmin_DeleteInstanceConfig_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_delete_instance_partition_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_delete_instance_partition_async.py new file mode 100644 index 0000000000..78ca44d6c2 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_delete_instance_partition_async.py @@ -0,0 +1,50 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for DeleteInstancePartition +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_DeleteInstancePartition_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +async def sample_delete_instance_partition(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.DeleteInstancePartitionRequest( + name="name_value", + ) + + # Make the request + await client.delete_instance_partition(request=request) + + +# [END spanner_v1_generated_InstanceAdmin_DeleteInstancePartition_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_delete_instance_partition_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_delete_instance_partition_sync.py new file mode 100644 index 0000000000..72249ef6c7 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_delete_instance_partition_sync.py @@ -0,0 +1,50 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for DeleteInstancePartition +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_DeleteInstancePartition_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +def sample_delete_instance_partition(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.DeleteInstancePartitionRequest( + name="name_value", + ) + + # Make the request + client.delete_instance_partition(request=request) + + +# [END spanner_v1_generated_InstanceAdmin_DeleteInstancePartition_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_delete_instance_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_delete_instance_sync.py new file mode 100644 index 0000000000..613ac6c070 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_delete_instance_sync.py @@ -0,0 +1,50 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for DeleteInstance +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_DeleteInstance_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +def sample_delete_instance(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.DeleteInstanceRequest( + name="name_value", + ) + + # Make the request + client.delete_instance(request=request) + + +# [END spanner_v1_generated_InstanceAdmin_DeleteInstance_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_get_iam_policy_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_get_iam_policy_async.py new file mode 100644 index 0000000000..a0b620ae4f --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_get_iam_policy_async.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for GetIamPolicy +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_GetIamPolicy_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 +from google.iam.v1 import iam_policy_pb2 # type: ignore + + +async def sample_get_iam_policy(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = iam_policy_pb2.GetIamPolicyRequest( + resource="resource_value", + ) + + # Make the request + response = await client.get_iam_policy(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_GetIamPolicy_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_get_iam_policy_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_get_iam_policy_sync.py new file mode 100644 index 0000000000..cc0d725a03 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_get_iam_policy_sync.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for GetIamPolicy +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_GetIamPolicy_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 +from google.iam.v1 import iam_policy_pb2 # type: ignore + + +def sample_get_iam_policy(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = iam_policy_pb2.GetIamPolicyRequest( + resource="resource_value", + ) + + # Make the request + response = client.get_iam_policy(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_GetIamPolicy_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_get_instance_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_get_instance_async.py new file mode 100644 index 0000000000..059eb2a078 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_get_instance_async.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for GetInstance +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_GetInstance_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +async def sample_get_instance(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.GetInstanceRequest( + name="name_value", + ) + + # Make the request + response = await client.get_instance(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_GetInstance_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_get_instance_config_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_get_instance_config_async.py new file mode 100644 index 0000000000..9adfb51c2e --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_get_instance_config_async.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for GetInstanceConfig +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_GetInstanceConfig_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +async def sample_get_instance_config(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.GetInstanceConfigRequest( + name="name_value", + ) + + # Make the request + response = await client.get_instance_config(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_GetInstanceConfig_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_get_instance_config_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_get_instance_config_sync.py new file mode 100644 index 0000000000..16e9d3c3c8 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_get_instance_config_sync.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for GetInstanceConfig +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_GetInstanceConfig_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +def sample_get_instance_config(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.GetInstanceConfigRequest( + name="name_value", + ) + + # Make the request + response = client.get_instance_config(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_GetInstanceConfig_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_get_instance_partition_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_get_instance_partition_async.py new file mode 100644 index 0000000000..8e84abcf6e --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_get_instance_partition_async.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for GetInstancePartition +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_GetInstancePartition_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +async def sample_get_instance_partition(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.GetInstancePartitionRequest( + name="name_value", + ) + + # Make the request + response = await client.get_instance_partition(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_GetInstancePartition_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_get_instance_partition_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_get_instance_partition_sync.py new file mode 100644 index 0000000000..d617cbb382 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_get_instance_partition_sync.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for GetInstancePartition +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_GetInstancePartition_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +def sample_get_instance_partition(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.GetInstancePartitionRequest( + name="name_value", + ) + + # Make the request + response = client.get_instance_partition(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_GetInstancePartition_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_get_instance_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_get_instance_sync.py new file mode 100644 index 0000000000..4a246a5bf3 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_get_instance_sync.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for GetInstance +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_GetInstance_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +def sample_get_instance(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.GetInstanceRequest( + name="name_value", + ) + + # Make the request + response = client.get_instance(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_GetInstance_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_config_operations_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_config_operations_async.py new file mode 100644 index 0000000000..a0580fef7c --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_config_operations_async.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for ListInstanceConfigOperations +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_ListInstanceConfigOperations_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +async def sample_list_instance_config_operations(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.ListInstanceConfigOperationsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_instance_config_operations(request=request) + + # Handle the response + async for response in page_result: + print(response) + +# [END spanner_v1_generated_InstanceAdmin_ListInstanceConfigOperations_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_config_operations_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_config_operations_sync.py new file mode 100644 index 0000000000..89213b3a2e --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_config_operations_sync.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for ListInstanceConfigOperations +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_ListInstanceConfigOperations_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +def sample_list_instance_config_operations(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.ListInstanceConfigOperationsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_instance_config_operations(request=request) + + # Handle the response + for response in page_result: + print(response) + +# [END spanner_v1_generated_InstanceAdmin_ListInstanceConfigOperations_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_configs_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_configs_async.py new file mode 100644 index 0000000000..651b2f88ae --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_configs_async.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for ListInstanceConfigs +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_ListInstanceConfigs_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +async def sample_list_instance_configs(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.ListInstanceConfigsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_instance_configs(request=request) + + # Handle the response + async for response in page_result: + print(response) + +# [END spanner_v1_generated_InstanceAdmin_ListInstanceConfigs_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_configs_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_configs_sync.py new file mode 100644 index 0000000000..a0f120277a --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_configs_sync.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for ListInstanceConfigs +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_ListInstanceConfigs_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +def sample_list_instance_configs(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.ListInstanceConfigsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_instance_configs(request=request) + + # Handle the response + for response in page_result: + print(response) + +# [END spanner_v1_generated_InstanceAdmin_ListInstanceConfigs_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_partition_operations_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_partition_operations_async.py new file mode 100644 index 0000000000..9dedb973f1 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_partition_operations_async.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for ListInstancePartitionOperations +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_ListInstancePartitionOperations_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +async def sample_list_instance_partition_operations(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.ListInstancePartitionOperationsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_instance_partition_operations(request=request) + + # Handle the response + async for response in page_result: + print(response) + +# [END spanner_v1_generated_InstanceAdmin_ListInstancePartitionOperations_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_partition_operations_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_partition_operations_sync.py new file mode 100644 index 0000000000..b2a7549b29 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_partition_operations_sync.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for ListInstancePartitionOperations +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_ListInstancePartitionOperations_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +def sample_list_instance_partition_operations(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.ListInstancePartitionOperationsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_instance_partition_operations(request=request) + + # Handle the response + for response in page_result: + print(response) + +# [END spanner_v1_generated_InstanceAdmin_ListInstancePartitionOperations_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_partitions_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_partitions_async.py new file mode 100644 index 0000000000..56adc152fe --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_partitions_async.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for ListInstancePartitions +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_ListInstancePartitions_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +async def sample_list_instance_partitions(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.ListInstancePartitionsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_instance_partitions(request=request) + + # Handle the response + async for response in page_result: + print(response) + +# [END spanner_v1_generated_InstanceAdmin_ListInstancePartitions_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_partitions_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_partitions_sync.py new file mode 100644 index 0000000000..1e65552fc1 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_list_instance_partitions_sync.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for ListInstancePartitions +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_ListInstancePartitions_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +def sample_list_instance_partitions(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.ListInstancePartitionsRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_instance_partitions(request=request) + + # Handle the response + for response in page_result: + print(response) + +# [END spanner_v1_generated_InstanceAdmin_ListInstancePartitions_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_list_instances_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_list_instances_async.py new file mode 100644 index 0000000000..abe1a1affa --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_list_instances_async.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for ListInstances +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_ListInstances_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +async def sample_list_instances(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.ListInstancesRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_instances(request=request) + + # Handle the response + async for response in page_result: + print(response) + +# [END spanner_v1_generated_InstanceAdmin_ListInstances_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_list_instances_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_list_instances_sync.py new file mode 100644 index 0000000000..f344baff11 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_list_instances_sync.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for ListInstances +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_ListInstances_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +def sample_list_instances(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.ListInstancesRequest( + parent="parent_value", + ) + + # Make the request + page_result = client.list_instances(request=request) + + # Handle the response + for response in page_result: + print(response) + +# [END spanner_v1_generated_InstanceAdmin_ListInstances_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_move_instance_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_move_instance_async.py new file mode 100644 index 0000000000..ce62120492 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_move_instance_async.py @@ -0,0 +1,57 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for MoveInstance +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_MoveInstance_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +async def sample_move_instance(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.MoveInstanceRequest( + name="name_value", + target_config="target_config_value", + ) + + # Make the request + operation = client.move_instance(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_MoveInstance_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_move_instance_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_move_instance_sync.py new file mode 100644 index 0000000000..4621200e0c --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_move_instance_sync.py @@ -0,0 +1,57 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for MoveInstance +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_MoveInstance_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +def sample_move_instance(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.MoveInstanceRequest( + name="name_value", + target_config="target_config_value", + ) + + # Make the request + operation = client.move_instance(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_MoveInstance_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_set_iam_policy_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_set_iam_policy_async.py new file mode 100644 index 0000000000..2443f2127d --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_set_iam_policy_async.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for SetIamPolicy +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_SetIamPolicy_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 +from google.iam.v1 import iam_policy_pb2 # type: ignore + + +async def sample_set_iam_policy(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = iam_policy_pb2.SetIamPolicyRequest( + resource="resource_value", + ) + + # Make the request + response = await client.set_iam_policy(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_SetIamPolicy_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_set_iam_policy_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_set_iam_policy_sync.py new file mode 100644 index 0000000000..ba6401602f --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_set_iam_policy_sync.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for SetIamPolicy +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_SetIamPolicy_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 +from google.iam.v1 import iam_policy_pb2 # type: ignore + + +def sample_set_iam_policy(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = iam_policy_pb2.SetIamPolicyRequest( + resource="resource_value", + ) + + # Make the request + response = client.set_iam_policy(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_SetIamPolicy_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_test_iam_permissions_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_test_iam_permissions_async.py new file mode 100644 index 0000000000..aa0e05dde3 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_test_iam_permissions_async.py @@ -0,0 +1,54 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for TestIamPermissions +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_TestIamPermissions_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 +from google.iam.v1 import iam_policy_pb2 # type: ignore + + +async def sample_test_iam_permissions(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = iam_policy_pb2.TestIamPermissionsRequest( + resource="resource_value", + permissions=['permissions_value1', 'permissions_value2'], + ) + + # Make the request + response = await client.test_iam_permissions(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_TestIamPermissions_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_test_iam_permissions_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_test_iam_permissions_sync.py new file mode 100644 index 0000000000..80b2a4dd21 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_test_iam_permissions_sync.py @@ -0,0 +1,54 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for TestIamPermissions +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_TestIamPermissions_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 +from google.iam.v1 import iam_policy_pb2 # type: ignore + + +def sample_test_iam_permissions(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = iam_policy_pb2.TestIamPermissionsRequest( + resource="resource_value", + permissions=['permissions_value1', 'permissions_value2'], + ) + + # Make the request + response = client.test_iam_permissions(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_TestIamPermissions_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_update_instance_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_update_instance_async.py new file mode 100644 index 0000000000..ecabbf5191 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_update_instance_async.py @@ -0,0 +1,61 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for UpdateInstance +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_UpdateInstance_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +async def sample_update_instance(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + instance = spanner_admin_instance_v1.Instance() + instance.name = "name_value" + instance.config = "config_value" + instance.display_name = "display_name_value" + + request = spanner_admin_instance_v1.UpdateInstanceRequest( + instance=instance, + ) + + # Make the request + operation = client.update_instance(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_UpdateInstance_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_update_instance_config_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_update_instance_config_async.py new file mode 100644 index 0000000000..f7ea78401c --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_update_instance_config_async.py @@ -0,0 +1,55 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for UpdateInstanceConfig +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_UpdateInstanceConfig_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +async def sample_update_instance_config(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.UpdateInstanceConfigRequest( + ) + + # Make the request + operation = client.update_instance_config(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_UpdateInstanceConfig_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_update_instance_config_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_update_instance_config_sync.py new file mode 100644 index 0000000000..1d184f6c58 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_update_instance_config_sync.py @@ -0,0 +1,55 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for UpdateInstanceConfig +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_UpdateInstanceConfig_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +def sample_update_instance_config(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + request = spanner_admin_instance_v1.UpdateInstanceConfigRequest( + ) + + # Make the request + operation = client.update_instance_config(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_UpdateInstanceConfig_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_update_instance_partition_async.py b/samples/generated_samples/spanner_v1_generated_instance_admin_update_instance_partition_async.py new file mode 100644 index 0000000000..42d3c484f8 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_update_instance_partition_async.py @@ -0,0 +1,62 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for UpdateInstancePartition +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_UpdateInstancePartition_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +async def sample_update_instance_partition(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminAsyncClient() + + # Initialize request argument(s) + instance_partition = spanner_admin_instance_v1.InstancePartition() + instance_partition.node_count = 1070 + instance_partition.name = "name_value" + instance_partition.config = "config_value" + instance_partition.display_name = "display_name_value" + + request = spanner_admin_instance_v1.UpdateInstancePartitionRequest( + instance_partition=instance_partition, + ) + + # Make the request + operation = client.update_instance_partition(request=request) + + print("Waiting for operation to complete...") + + response = (await operation).result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_UpdateInstancePartition_async] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_update_instance_partition_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_update_instance_partition_sync.py new file mode 100644 index 0000000000..56cd2760a1 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_update_instance_partition_sync.py @@ -0,0 +1,62 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for UpdateInstancePartition +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_UpdateInstancePartition_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +def sample_update_instance_partition(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + instance_partition = spanner_admin_instance_v1.InstancePartition() + instance_partition.node_count = 1070 + instance_partition.name = "name_value" + instance_partition.config = "config_value" + instance_partition.display_name = "display_name_value" + + request = spanner_admin_instance_v1.UpdateInstancePartitionRequest( + instance_partition=instance_partition, + ) + + # Make the request + operation = client.update_instance_partition(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_UpdateInstancePartition_sync] diff --git a/samples/generated_samples/spanner_v1_generated_instance_admin_update_instance_sync.py b/samples/generated_samples/spanner_v1_generated_instance_admin_update_instance_sync.py new file mode 100644 index 0000000000..2340e701e1 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_instance_admin_update_instance_sync.py @@ -0,0 +1,61 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for UpdateInstance +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner-admin-instance + + +# [START spanner_v1_generated_InstanceAdmin_UpdateInstance_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_admin_instance_v1 + + +def sample_update_instance(): + # Create a client + client = spanner_admin_instance_v1.InstanceAdminClient() + + # Initialize request argument(s) + instance = spanner_admin_instance_v1.Instance() + instance.name = "name_value" + instance.config = "config_value" + instance.display_name = "display_name_value" + + request = spanner_admin_instance_v1.UpdateInstanceRequest( + instance=instance, + ) + + # Make the request + operation = client.update_instance(request=request) + + print("Waiting for operation to complete...") + + response = operation.result() + + # Handle the response + print(response) + +# [END spanner_v1_generated_InstanceAdmin_UpdateInstance_sync] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_batch_create_sessions_async.py b/samples/generated_samples/spanner_v1_generated_spanner_batch_create_sessions_async.py new file mode 100644 index 0000000000..49e64b4ab8 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_batch_create_sessions_async.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for BatchCreateSessions +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_BatchCreateSessions_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +async def sample_batch_create_sessions(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.BatchCreateSessionsRequest( + database="database_value", + session_count=1420, + ) + + # Make the request + response = await client.batch_create_sessions(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_Spanner_BatchCreateSessions_async] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_batch_create_sessions_sync.py b/samples/generated_samples/spanner_v1_generated_spanner_batch_create_sessions_sync.py new file mode 100644 index 0000000000..ade1da3661 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_batch_create_sessions_sync.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for BatchCreateSessions +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_BatchCreateSessions_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +def sample_batch_create_sessions(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.BatchCreateSessionsRequest( + database="database_value", + session_count=1420, + ) + + # Make the request + response = client.batch_create_sessions(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_Spanner_BatchCreateSessions_sync] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_batch_write_async.py b/samples/generated_samples/spanner_v1_generated_spanner_batch_write_async.py new file mode 100644 index 0000000000..d1565657e8 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_batch_write_async.py @@ -0,0 +1,57 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for BatchWrite +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_BatchWrite_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +async def sample_batch_write(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + mutation_groups = spanner_v1.MutationGroup() + mutation_groups.mutations.insert.table = "table_value" + + request = spanner_v1.BatchWriteRequest( + session="session_value", + mutation_groups=mutation_groups, + ) + + # Make the request + stream = await client.batch_write(request=request) + + # Handle the response + async for response in stream: + print(response) + +# [END spanner_v1_generated_Spanner_BatchWrite_async] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_batch_write_sync.py b/samples/generated_samples/spanner_v1_generated_spanner_batch_write_sync.py new file mode 100644 index 0000000000..9b6621def9 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_batch_write_sync.py @@ -0,0 +1,57 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for BatchWrite +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_BatchWrite_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_v1
+
+
+def sample_batch_write():
+    # Create a client
+    client = spanner_v1.SpannerClient()
+
+    # Initialize request argument(s); MutationGroup is nested under
+    # BatchWriteRequest, and mutation_groups is a repeated field.
+    mutation_group = spanner_v1.BatchWriteRequest.MutationGroup()
+    mutation_group.mutations.append(spanner_v1.Mutation(insert={"table": "table_value"}))
+
+    request = spanner_v1.BatchWriteRequest(
+        session="session_value",
+        mutation_groups=[mutation_group],
+    )
+
+    # Make the request
+    stream = client.batch_write(request=request)
+
+    # Handle the response
+    for response in stream:
+        print(response)
+
+# [END spanner_v1_generated_Spanner_BatchWrite_sync]
diff --git a/samples/generated_samples/spanner_v1_generated_spanner_begin_transaction_async.py b/samples/generated_samples/spanner_v1_generated_spanner_begin_transaction_async.py
new file mode 100644
index 0000000000..efdd161715
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_spanner_begin_transaction_async.py
@@ -0,0 +1,52 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for BeginTransaction
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner
+
+
+# [START spanner_v1_generated_Spanner_BeginTransaction_async]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_v1
+
+
+async def sample_begin_transaction():
+    # Create a client
+    client = spanner_v1.SpannerAsyncClient()
+
+    # Initialize request argument(s)
+    request = spanner_v1.BeginTransactionRequest(
+        session="session_value",
+    )
+
+    # Make the request
+    response = await client.begin_transaction(request=request)
+
+    # Handle the response
+    print(response)
+
+# [END spanner_v1_generated_Spanner_BeginTransaction_async]
diff --git a/samples/generated_samples/spanner_v1_generated_spanner_begin_transaction_sync.py b/samples/generated_samples/spanner_v1_generated_spanner_begin_transaction_sync.py
new file mode 100644
index 0000000000..764dab8aa2
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_spanner_begin_transaction_sync.py
@@ -0,0 +1,52 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for BeginTransaction +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_BeginTransaction_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +def sample_begin_transaction(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.BeginTransactionRequest( + session="session_value", + ) + + # Make the request + response = client.begin_transaction(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_Spanner_BeginTransaction_sync] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_commit_async.py b/samples/generated_samples/spanner_v1_generated_spanner_commit_async.py new file mode 100644 index 0000000000..f61c297d38 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_commit_async.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for Commit +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_Commit_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +async def sample_commit(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.CommitRequest( + transaction_id=b'transaction_id_blob', + session="session_value", + ) + + # Make the request + response = await client.commit(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_Spanner_Commit_async] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_commit_sync.py b/samples/generated_samples/spanner_v1_generated_spanner_commit_sync.py new file mode 100644 index 0000000000..a945bd2234 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_commit_sync.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for Commit +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_Commit_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +def sample_commit(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.CommitRequest( + transaction_id=b'transaction_id_blob', + session="session_value", + ) + + # Make the request + response = client.commit(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_Spanner_Commit_sync] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_create_session_async.py b/samples/generated_samples/spanner_v1_generated_spanner_create_session_async.py new file mode 100644 index 0000000000..8cddc00c66 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_create_session_async.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for CreateSession +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_CreateSession_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +async def sample_create_session(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.CreateSessionRequest( + database="database_value", + ) + + # Make the request + response = await client.create_session(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_Spanner_CreateSession_async] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_create_session_sync.py b/samples/generated_samples/spanner_v1_generated_spanner_create_session_sync.py new file mode 100644 index 0000000000..b9de2d34e0 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_create_session_sync.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for CreateSession +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_CreateSession_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +def sample_create_session(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.CreateSessionRequest( + database="database_value", + ) + + # Make the request + response = client.create_session(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_Spanner_CreateSession_sync] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_delete_session_async.py b/samples/generated_samples/spanner_v1_generated_spanner_delete_session_async.py new file mode 100644 index 0000000000..9fed1ddca6 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_delete_session_async.py @@ -0,0 +1,50 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for DeleteSession +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_DeleteSession_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +async def sample_delete_session(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.DeleteSessionRequest( + name="name_value", + ) + + # Make the request + await client.delete_session(request=request) + + +# [END spanner_v1_generated_Spanner_DeleteSession_async] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_delete_session_sync.py b/samples/generated_samples/spanner_v1_generated_spanner_delete_session_sync.py new file mode 100644 index 0000000000..1f2a17e2d1 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_delete_session_sync.py @@ -0,0 +1,50 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for DeleteSession +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_DeleteSession_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +def sample_delete_session(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.DeleteSessionRequest( + name="name_value", + ) + + # Make the request + client.delete_session(request=request) + + +# [END spanner_v1_generated_Spanner_DeleteSession_sync] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_execute_batch_dml_async.py b/samples/generated_samples/spanner_v1_generated_spanner_execute_batch_dml_async.py new file mode 100644 index 0000000000..8313fd66a0 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_execute_batch_dml_async.py @@ -0,0 +1,57 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for ExecuteBatchDml +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_ExecuteBatchDml_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_v1
+
+
+async def sample_execute_batch_dml():
+    # Create a client
+    client = spanner_v1.SpannerAsyncClient()
+
+    # Initialize request argument(s); Statement is nested under
+    # ExecuteBatchDmlRequest, and statements is a repeated field.
+    statement = spanner_v1.ExecuteBatchDmlRequest.Statement()
+    statement.sql = "sql_value"
+
+    request = spanner_v1.ExecuteBatchDmlRequest(
+        session="session_value",
+        statements=[statement],
+        seqno=550,
+    )
+
+    # Make the request
+    response = await client.execute_batch_dml(request=request)
+
+    # Handle the response
+    print(response)
+
+# [END spanner_v1_generated_Spanner_ExecuteBatchDml_async]
diff --git a/samples/generated_samples/spanner_v1_generated_spanner_execute_batch_dml_sync.py b/samples/generated_samples/spanner_v1_generated_spanner_execute_batch_dml_sync.py
new file mode 100644
index 0000000000..dd4696b6b2
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_spanner_execute_batch_dml_sync.py
@@ -0,0 +1,57 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for ExecuteBatchDml
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner
+
+
+# [START spanner_v1_generated_Spanner_ExecuteBatchDml_sync]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_v1
+
+
+def sample_execute_batch_dml():
+    # Create a client
+    client = spanner_v1.SpannerClient()
+
+    # Initialize request argument(s); Statement is nested under
+    # ExecuteBatchDmlRequest, and statements is a repeated field.
+    statement = spanner_v1.ExecuteBatchDmlRequest.Statement()
+    statement.sql = "sql_value"
+
+    request = spanner_v1.ExecuteBatchDmlRequest(
+        session="session_value",
+        statements=[statement],
+        seqno=550,
+    )
+
+    # Make the request
+    response = client.execute_batch_dml(request=request)
+
+    # Handle the response
+    print(response)
+
+# [END spanner_v1_generated_Spanner_ExecuteBatchDml_sync]
diff --git a/samples/generated_samples/spanner_v1_generated_spanner_execute_sql_async.py b/samples/generated_samples/spanner_v1_generated_spanner_execute_sql_async.py
new file mode 100644
index 0000000000..a12b20f3e9
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_spanner_execute_sql_async.py
@@ -0,0 +1,53 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for ExecuteSql +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_ExecuteSql_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +async def sample_execute_sql(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.ExecuteSqlRequest( + session="session_value", + sql="sql_value", + ) + + # Make the request + response = await client.execute_sql(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_Spanner_ExecuteSql_async] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_execute_sql_sync.py b/samples/generated_samples/spanner_v1_generated_spanner_execute_sql_sync.py new file mode 100644 index 0000000000..761d0ca251 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_execute_sql_sync.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for ExecuteSql +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_ExecuteSql_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +def sample_execute_sql(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.ExecuteSqlRequest( + session="session_value", + sql="sql_value", + ) + + # Make the request + response = client.execute_sql(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_Spanner_ExecuteSql_sync] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_execute_streaming_sql_async.py b/samples/generated_samples/spanner_v1_generated_spanner_execute_streaming_sql_async.py new file mode 100644 index 0000000000..86b8eb910e --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_execute_streaming_sql_async.py @@ -0,0 +1,54 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for ExecuteStreamingSql +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_ExecuteStreamingSql_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +async def sample_execute_streaming_sql(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.ExecuteSqlRequest( + session="session_value", + sql="sql_value", + ) + + # Make the request + stream = await client.execute_streaming_sql(request=request) + + # Handle the response + async for response in stream: + print(response) + +# [END spanner_v1_generated_Spanner_ExecuteStreamingSql_async] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_execute_streaming_sql_sync.py b/samples/generated_samples/spanner_v1_generated_spanner_execute_streaming_sql_sync.py new file mode 100644 index 0000000000..dc7dba43b8 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_execute_streaming_sql_sync.py @@ -0,0 +1,54 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for ExecuteStreamingSql +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_ExecuteStreamingSql_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +def sample_execute_streaming_sql(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.ExecuteSqlRequest( + session="session_value", + sql="sql_value", + ) + + # Make the request + stream = client.execute_streaming_sql(request=request) + + # Handle the response + for response in stream: + print(response) + +# [END spanner_v1_generated_Spanner_ExecuteStreamingSql_sync] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_get_session_async.py b/samples/generated_samples/spanner_v1_generated_spanner_get_session_async.py new file mode 100644 index 0000000000..d2e50f9891 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_get_session_async.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for GetSession +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_GetSession_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +async def sample_get_session(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.GetSessionRequest( + name="name_value", + ) + + # Make the request + response = await client.get_session(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_Spanner_GetSession_async] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_get_session_sync.py b/samples/generated_samples/spanner_v1_generated_spanner_get_session_sync.py new file mode 100644 index 0000000000..36d6436b04 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_get_session_sync.py @@ -0,0 +1,52 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for GetSession +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_GetSession_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +def sample_get_session(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.GetSessionRequest( + name="name_value", + ) + + # Make the request + response = client.get_session(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_Spanner_GetSession_sync] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_list_sessions_async.py b/samples/generated_samples/spanner_v1_generated_spanner_list_sessions_async.py new file mode 100644 index 0000000000..95aa4bf818 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_list_sessions_async.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Generated code. DO NOT EDIT!
+#
+# Snippet for ListSessions
+# NOTE: This snippet has been automatically generated for illustrative purposes only.
+# It may require modifications to work in your environment.
+
+# To install the latest published package dependency, execute the following:
+#   python3 -m pip install google-cloud-spanner
+
+
+# [START spanner_v1_generated_Spanner_ListSessions_async]
+# This snippet has been automatically generated and should be regarded as a
+# code template only.
+# It will require modifications to work:
+# - It may require correct/in-range values for request initialization.
+# - It may require specifying regional endpoints when creating the service
+#   client as shown in:
+#   https://googleapis.dev/python/google-api-core/latest/client_options.html
+from google.cloud import spanner_v1
+
+
+async def sample_list_sessions():
+    # Create a client
+    client = spanner_v1.SpannerAsyncClient()
+
+    # Initialize request argument(s)
+    request = spanner_v1.ListSessionsRequest(
+        database="database_value",
+    )
+
+    # Make the request; the async client method is a coroutine and must be
+    # awaited to obtain the pager before iterating.
+    page_result = await client.list_sessions(request=request)
+
+    # Handle the response
+    async for response in page_result:
+        print(response)
+
+# [END spanner_v1_generated_Spanner_ListSessions_async]
diff --git a/samples/generated_samples/spanner_v1_generated_spanner_list_sessions_sync.py b/samples/generated_samples/spanner_v1_generated_spanner_list_sessions_sync.py
new file mode 100644
index 0000000000..a9533fed0d
--- /dev/null
+++ b/samples/generated_samples/spanner_v1_generated_spanner_list_sessions_sync.py
@@ -0,0 +1,53 @@
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +def sample_list_sessions(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.ListSessionsRequest( + database="database_value", + ) + + # Make the request + page_result = client.list_sessions(request=request) + + # Handle the response + for response in page_result: + print(response) + +# [END spanner_v1_generated_Spanner_ListSessions_sync] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_partition_query_async.py b/samples/generated_samples/spanner_v1_generated_spanner_partition_query_async.py new file mode 100644 index 0000000000..200fb2f6a2 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_partition_query_async.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for PartitionQuery +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_PartitionQuery_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +async def sample_partition_query(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.PartitionQueryRequest( + session="session_value", + sql="sql_value", + ) + + # Make the request + response = await client.partition_query(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_Spanner_PartitionQuery_async] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_partition_query_sync.py b/samples/generated_samples/spanner_v1_generated_spanner_partition_query_sync.py new file mode 100644 index 0000000000..d486a3590c --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_partition_query_sync.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for PartitionQuery +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_PartitionQuery_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +def sample_partition_query(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.PartitionQueryRequest( + session="session_value", + sql="sql_value", + ) + + # Make the request + response = client.partition_query(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_Spanner_PartitionQuery_sync] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_partition_read_async.py b/samples/generated_samples/spanner_v1_generated_spanner_partition_read_async.py new file mode 100644 index 0000000000..99055ade8b --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_partition_read_async.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for PartitionRead +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_PartitionRead_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +async def sample_partition_read(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.PartitionReadRequest( + session="session_value", + table="table_value", + ) + + # Make the request + response = await client.partition_read(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_Spanner_PartitionRead_async] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_partition_read_sync.py b/samples/generated_samples/spanner_v1_generated_spanner_partition_read_sync.py new file mode 100644 index 0000000000..0ca01ac423 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_partition_read_sync.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for PartitionRead +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_PartitionRead_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +def sample_partition_read(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.PartitionReadRequest( + session="session_value", + table="table_value", + ) + + # Make the request + response = client.partition_read(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_Spanner_PartitionRead_sync] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_read_async.py b/samples/generated_samples/spanner_v1_generated_spanner_read_async.py new file mode 100644 index 0000000000..e555865245 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_read_async.py @@ -0,0 +1,54 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for Read +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_Read_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +async def sample_read(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.ReadRequest( + session="session_value", + table="table_value", + columns=['columns_value1', 'columns_value2'], + ) + + # Make the request + response = await client.read(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_Spanner_Read_async] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_read_sync.py b/samples/generated_samples/spanner_v1_generated_spanner_read_sync.py new file mode 100644 index 0000000000..8f9ee621f3 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_read_sync.py @@ -0,0 +1,54 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for Read +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_Read_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +def sample_read(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.ReadRequest( + session="session_value", + table="table_value", + columns=['columns_value1', 'columns_value2'], + ) + + # Make the request + response = client.read(request=request) + + # Handle the response + print(response) + +# [END spanner_v1_generated_Spanner_Read_sync] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_rollback_async.py b/samples/generated_samples/spanner_v1_generated_spanner_rollback_async.py new file mode 100644 index 0000000000..f99a1b8dd8 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_rollback_async.py @@ -0,0 +1,51 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for Rollback +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_Rollback_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +async def sample_rollback(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.RollbackRequest( + session="session_value", + transaction_id=b'transaction_id_blob', + ) + + # Make the request + await client.rollback(request=request) + + +# [END spanner_v1_generated_Spanner_Rollback_async] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_rollback_sync.py b/samples/generated_samples/spanner_v1_generated_spanner_rollback_sync.py new file mode 100644 index 0000000000..00b23b21fc --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_rollback_sync.py @@ -0,0 +1,51 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for Rollback +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_Rollback_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +def sample_rollback(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.RollbackRequest( + session="session_value", + transaction_id=b'transaction_id_blob', + ) + + # Make the request + client.rollback(request=request) + + +# [END spanner_v1_generated_Spanner_Rollback_sync] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_streaming_read_async.py b/samples/generated_samples/spanner_v1_generated_spanner_streaming_read_async.py new file mode 100644 index 0000000000..f79b9a96a1 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_streaming_read_async.py @@ -0,0 +1,55 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for StreamingRead +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_StreamingRead_async] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. 
+# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +async def sample_streaming_read(): + # Create a client + client = spanner_v1.SpannerAsyncClient() + + # Initialize request argument(s) + request = spanner_v1.ReadRequest( + session="session_value", + table="table_value", + columns=['columns_value1', 'columns_value2'], + ) + + # Make the request + stream = await client.streaming_read(request=request) + + # Handle the response + async for response in stream: + print(response) + +# [END spanner_v1_generated_Spanner_StreamingRead_async] diff --git a/samples/generated_samples/spanner_v1_generated_spanner_streaming_read_sync.py b/samples/generated_samples/spanner_v1_generated_spanner_streaming_read_sync.py new file mode 100644 index 0000000000..f81ed34b33 --- /dev/null +++ b/samples/generated_samples/spanner_v1_generated_spanner_streaming_read_sync.py @@ -0,0 +1,55 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Generated code. DO NOT EDIT! +# +# Snippet for StreamingRead +# NOTE: This snippet has been automatically generated for illustrative purposes only. +# It may require modifications to work in your environment. + +# To install the latest published package dependency, execute the following: +# python3 -m pip install google-cloud-spanner + + +# [START spanner_v1_generated_Spanner_StreamingRead_sync] +# This snippet has been automatically generated and should be regarded as a +# code template only. +# It will require modifications to work: +# - It may require correct/in-range values for request initialization. +# - It may require specifying regional endpoints when creating the service +# client as shown in: +# https://googleapis.dev/python/google-api-core/latest/client_options.html +from google.cloud import spanner_v1 + + +def sample_streaming_read(): + # Create a client + client = spanner_v1.SpannerClient() + + # Initialize request argument(s) + request = spanner_v1.ReadRequest( + session="session_value", + table="table_value", + columns=['columns_value1', 'columns_value2'], + ) + + # Make the request + stream = client.streaming_read(request=request) + + # Handle the response + for response in stream: + print(response) + +# [END spanner_v1_generated_Spanner_StreamingRead_sync] diff --git a/samples/samples/README.rst b/samples/samples/README.rst new file mode 100644 index 0000000000..b0573c249b --- /dev/null +++ b/samples/samples/README.rst @@ -0,0 +1,290 @@ + +.. This file is automatically generated. Do not edit this file directly. + +Google Cloud Spanner Python Samples +=============================================================================== + +.. 
image:: https://gstatic.com/cloudssh/images/open-btn.png + :target: https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/GoogleCloudPlatform/python-docs-samples&page=editor&open_in_editor=spanner/cloud-client/README.rst + + +This directory contains samples for Google Cloud Spanner. `Google Cloud Spanner`_ is a highly scalable, transactional, managed, NewSQL database service. Cloud Spanner solves the need for a horizontally-scaling database with consistent global transactions and SQL semantics. + + + + +.. _Google Cloud Spanner: https://cloud.google.com/spanner/docs + + +Setup +------------------------------------------------------------------------------- + + + +Authentication +++++++++++++++ + +This sample requires you to have authentication setup. Refer to the +`Authentication Getting Started Guide`_ for instructions on setting up +credentials for applications. + +.. _Authentication Getting Started Guide: + https://cloud.google.com/docs/authentication/getting-started + + + + +Install Dependencies +++++++++++++++++++++ + +#. Clone python-docs-samples and change directory to the sample directory you want to use. + + .. code-block:: bash + + $ git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git + +#. Install `pip`_ and `virtualenv`_ if you do not already have them. You may want to refer to the `Python Development Environment Setup Guide`_ for Google Cloud Platform for instructions. + + .. _Python Development Environment Setup Guide: + https://cloud.google.com/python/setup + +#. Create a virtualenv. Samples are compatible with Python 3.6+. + + .. code-block:: bash + + $ virtualenv env + $ source env/bin/activate + +#. Install the dependencies needed to run the samples. + + .. code-block:: bash + + $ pip install -r requirements.txt + +.. _pip: https://pip.pypa.io/ +.. _virtualenv: https://virtualenv.pypa.io/ + + + + + + +Samples +------------------------------------------------------------------------------- + + +Snippets ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ + +.. image:: https://gstatic.com/cloudssh/images/open-btn.png + :target: https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/GoogleCloudPlatform/python-docs-samples&page=editor&open_in_editor=spanner/cloud-client/snippets.py,spanner/cloud-client/README.rst + + + + +To run this sample: + +.. 
code-block:: bash + + $ python snippets.py + + + usage: snippets.py [-h] [--database-id DATABASE_ID] + instance_id + {create_instance,create_database,insert_data,delete_data,query_data,read_data,read_stale_data,add_column,update_data,query_data_with_new_column,read_write_transaction,read_only_transaction,add_index,query_data_with_index,read_data_with_index,add_storing_index,read_data_with_storing_index,create_table_with_timestamp,insert_data_with_timestamp,add_timestamp_column,update_data_with_timestamp,query_data_with_timestamp,write_struct_data,query_with_struct,query_with_array_of_struct,query_struct_field,query_nested_struct_field,insert_data_with_dml,update_data_with_dml,delete_data_with_dml,update_data_with_dml_timestamp,dml_write_read_transaction,update_data_with_dml_struct,insert_with_dml,query_data_with_parameter,write_with_dml_transaction,update_data_with_partitioned_dml,delete_data_with_partitioned_dml,update_with_batch_dml,create_table_with_datatypes,insert_datatypes_data,query_data_with_array,query_data_with_bool,query_data_with_bytes,query_data_with_date,query_data_with_float,query_data_with_int,query_data_with_string,query_data_with_timestamp_parameter,query_data_with_query_options,create_client_with_query_options} + ... + + This application demonstrates how to do basic operations using Cloud + Spanner. + + For more information, see the README.rst under /spanner. + + positional arguments: + instance_id Your Cloud Spanner instance ID. + {create_instance,create_database,insert_data,delete_data,query_data,read_data,read_stale_data,add_column,update_data,query_data_with_new_column,read_write_transaction,read_only_transaction,add_index,query_data_with_index,read_data_with_index,add_storing_index,read_data_with_storing_index,create_table_with_timestamp,insert_data_with_timestamp,add_timestamp_column,update_data_with_timestamp,query_data_with_timestamp,write_struct_data,query_with_struct,query_with_array_of_struct,query_struct_field,query_nested_struct_field,insert_data_with_dml,update_data_with_dml,delete_data_with_dml,update_data_with_dml_timestamp,dml_write_read_transaction,update_data_with_dml_struct,insert_with_dml,query_data_with_parameter,write_with_dml_transaction,update_data_with_partitioned_dml,delete_data_with_partitioned_dml,update_with_batch_dml,create_table_with_datatypes,insert_datatypes_data,query_data_with_array,query_data_with_bool,query_data_with_bytes,query_data_with_date,query_data_with_float,query_data_with_int,query_data_with_string,query_data_with_timestamp_parameter,query_data_with_query_options,create_client_with_query_options} + create_instance Creates an instance. + create_database Creates a database and tables for sample data. + insert_data Inserts sample data into the given database. The + database and table must already exist and can be + created using `create_database`. + delete_data Deletes sample data from the given database. The + database, table, and data must already exist and can + be created using `create_database` and `insert_data`. + query_data Queries sample data from the database using SQL. + read_data Reads sample data from the database. + read_stale_data Reads sample data from the database. The data is + exactly 15 seconds stale. + add_column Adds a new column to the Albums table in the example + database. + update_data Updates sample data in the database. This updates the + `MarketingBudget` column which must be created before + running this sample. 
You can add the column by running
+                        the `add_column` sample or by running this DDL
+                        statement against your database: ALTER TABLE Albums
+                        ADD COLUMN MarketingBudget INT64
+    query_data_with_new_column
+                        Queries sample data from the database using SQL. This
+                        sample uses the `MarketingBudget` column. You can add
+                        the column by running the `add_column` sample or by
+                        running this DDL statement against your database:
+                        ALTER TABLE Albums ADD COLUMN MarketingBudget INT64
+    read_write_transaction
+                        Performs a read-write transaction to update two sample
+                        records in the database. This will transfer 200,000
+                        from the `MarketingBudget` field for the second Album
+                        to the first Album. If the `MarketingBudget` is too
+                        low, it will raise an exception. Before running this
+                        sample, you will need to run the `update_data` sample
+                        to populate the fields.
+    read_only_transaction
+                        Reads data inside of a read-only transaction. Within
+                        the read-only transaction, or "snapshot", the
+                        application sees a consistent view of the database at
+                        a particular timestamp.
+    add_index           Adds a simple index to the example database.
+    query_data_with_index
+                        Queries sample data from the database using SQL and an
+                        index. The index must exist before running this
+                        sample. You can add the index by running the
+                        `add_index` sample or by running this DDL statement
+                        against your database: CREATE INDEX AlbumsByAlbumTitle
+                        ON Albums(AlbumTitle) This sample also uses the
+                        `MarketingBudget` column. You can add the column by
+                        running the `add_column` sample or by running this DDL
+                        statement against your database: ALTER TABLE Albums
+                        ADD COLUMN MarketingBudget INT64
+    read_data_with_index
+                        Reads sample data from the database using an index.
+                        The index must exist before running this sample. You
+                        can add it by running the `add_index` sample.
+    add_storing_index   Adds a storing index to the example database.
+    read_data_with_storing_index
+                        Reads sample data from the database using an index
+                        with a storing clause. The index must exist before
+                        running this sample. You can add it by running the
+                        `add_storing_index` sample.
+    create_table_with_timestamp
+                        Creates a table with a COMMIT_TIMESTAMP column.
+    insert_data_with_timestamp
+                        Inserts data with a COMMIT_TIMESTAMP field into a
+                        table.
+    add_timestamp_column
+                        Adds a new TIMESTAMP column to the Albums table in the
+                        example database.
+    update_data_with_timestamp
+                        Updates the Albums table in the database with the
+                        COMMIT_TIMESTAMP column. This updates the
+                        `MarketingBudget` column which must be created before
+                        running this sample. You can add the column by running
+                        the `add_column` sample or by running this DDL
+                        statement against your database: ALTER TABLE Albums
+                        ADD COLUMN MarketingBudget INT64 In addition this
+                        update expects the LastUpdateTime column added by
+                        applying this DDL statement against your database:
+                        ALTER TABLE Albums ADD COLUMN LastUpdateTime TIMESTAMP
+                        OPTIONS(allow_commit_timestamp=true)
+    query_data_with_timestamp
+                        Queries sample data from the database using SQL. This
+                        sample uses the `LastUpdateTime` column which must be
+                        created before running this sample. You can add the
+                        column by running the `add_timestamp_column` sample or
+                        by running this DDL statement against your database:
+                        ALTER TABLE Performances ADD COLUMN LastUpdateTime
+                        TIMESTAMP OPTIONS (allow_commit_timestamp=true)
+    write_struct_data   Inserts sample data that can be used to test STRUCT
+                        parameters in queries.
+    query_with_struct   Query a table using STRUCT parameters. 
+    query_with_array_of_struct
+                        Query a table using an array of STRUCT parameters.
+    query_struct_field  Query a table using field access on a STRUCT
+                        parameter.
+    query_nested_struct_field
+                        Query a table using nested field access on a STRUCT
+                        parameter.
+    insert_data_with_dml
+                        Inserts sample data into the given database using a
+                        DML statement.
+    update_data_with_dml
+                        Updates sample data from the database using a DML
+                        statement.
+    delete_data_with_dml
+                        Deletes sample data from the database using a DML
+                        statement.
+    update_data_with_dml_timestamp
+                        Updates data with a timestamp from the database using
+                        a DML statement.
+    dml_write_read_transaction
+                        First inserts data then reads it from within a
+                        transaction using DML.
+    update_data_with_dml_struct
+                        Updates data with a DML statement and STRUCT
+                        parameters.
+    insert_with_dml     Inserts data with a DML statement into the database.
+    query_data_with_parameter
+                        Queries sample data from the database using SQL with a
+                        parameter.
+    write_with_dml_transaction
+                        Transfers part of a marketing budget from one album to
+                        another.
+    update_data_with_partitioned_dml
+                        Updates sample data with a partitioned DML statement.
+    delete_data_with_partitioned_dml
+                        Deletes sample data with a partitioned DML statement.
+    update_with_batch_dml
+                        Updates sample data in the database using Batch DML.
+    create_table_with_datatypes
+                        Creates a table with supported datatypes.
+    insert_datatypes_data
+                        Inserts data with supported datatypes into a table.
+    query_data_with_array
+                        Queries sample data using SQL with an ARRAY parameter.
+    query_data_with_bool
+                        Queries sample data using SQL with a BOOL parameter.
+    query_data_with_bytes
+                        Queries sample data using SQL with a BYTES parameter.
+    query_data_with_date
+                        Queries sample data using SQL with a DATE parameter.
+    query_data_with_float
+                        Queries sample data using SQL with a FLOAT64
+                        parameter.
+    query_data_with_int
+                        Queries sample data using SQL with an INT64 parameter.
+    query_data_with_string
+                        Queries sample data using SQL with a STRING parameter.
+    query_data_with_timestamp_parameter
+                        Queries sample data using SQL with a TIMESTAMP
+                        parameter.
+    query_data_with_query_options
+                        Queries sample data using SQL with query options.
+    create_client_with_query_options
+                        Creates a client with query options.
+
+  optional arguments:
+    -h, --help          show this help message and exit
+    --database-id DATABASE_ID
+                        Your Cloud Spanner database ID.
+
+
+
+
+
+
+
+
+
+The client library
+-------------------------------------------------------------------------------
+
+This sample uses the `Google Cloud Client Library for Python`_.
+You can read the documentation for more details on API usage and use GitHub
+to `browse the source`_ and `report issues`_.
+
+.. _Google Cloud Client Library for Python:
+    https://googlecloudplatform.github.io/google-cloud-python/
+.. _browse the source:
+    https://github.com/GoogleCloudPlatform/google-cloud-python
+.. _report issues:
+    https://github.com/GoogleCloudPlatform/google-cloud-python/issues
+
+
+
+.. 
_Google Cloud SDK: https://cloud.google.com/sdk/
diff --git a/samples/samples/README.rst.in b/samples/samples/README.rst.in
new file mode 100644
index 0000000000..542becb9a7
--- /dev/null
+++ b/samples/samples/README.rst.in
@@ -0,0 +1,24 @@
+# This file is used to generate README.rst
+
+product:
+  name: Google Cloud Spanner
+  short_name: Cloud Spanner
+  url: https://cloud.google.com/spanner/docs
+  description: >
+    `Google Cloud Spanner`_ is a highly scalable, transactional, managed,
+    NewSQL database service. Cloud Spanner solves the need for a
+    horizontally-scaling database with consistent global transactions and
+    SQL semantics.
+
+setup:
+- auth
+- install_deps
+
+samples:
+- name: Snippets
+  file: snippets.py
+  show_help: true
+
+cloud_client_library: true
+
+folder: spanner/cloud-client
\ No newline at end of file
diff --git a/google/cloud/spanner_admin_instance_v1/proto/__init__.py b/samples/samples/archived/.gitkeep
similarity index 100%
rename from google/cloud/spanner_admin_instance_v1/proto/__init__.py
rename to samples/samples/archived/.gitkeep
diff --git a/samples/samples/archived/backup_snippet.py b/samples/samples/archived/backup_snippet.py
new file mode 100644
index 0000000000..f31cbc1f2c
--- /dev/null
+++ b/samples/samples/archived/backup_snippet.py
@@ -0,0 +1,454 @@
+# Copyright 2024 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""This application demonstrates how to create and restore from backups
+using Cloud Spanner.
+
+For more information, see the README.rst under /spanner.
+"""
+
+import time
+from datetime import datetime, timedelta
+
+from google.cloud import spanner
+
+
+# [START spanner_cancel_backup_create]
+def cancel_backup(instance_id, database_id, backup_id):
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    expire_time = datetime.utcnow() + timedelta(days=30)
+
+    # Create a backup.
+    backup = instance.backup(backup_id, database=database, expire_time=expire_time)
+    operation = backup.create()
+
+    # Cancel backup creation.
+    operation.cancel()
+
+    # Cancel requests are best effort, so the create operation will either
+    # complete or be cancelled.
+    while not operation.done():
+        time.sleep(300)  # 5 mins
+
+    # Clean up the backup if creation succeeded before the cancel took effect.
+    if backup.exists():
+        print("Backup was created before the cancel completed.")
+        backup.delete()
+        print("Backup deleted.")
+    else:
+        print("Backup creation was successfully cancelled.")
+
+
+# [END spanner_cancel_backup_create]
+
+
+# [START spanner_copy_backup]
+def copy_backup(instance_id, backup_id, source_backup_path):
+    """Copies a backup."""
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+
+    # Create the copy-backup object.
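+    # Backups require an expire_time; the 14-day value below is just this
+    # sample's choice.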
+ expire_time = datetime.utcnow() + timedelta(days=14) + copy_backup = instance.copy_backup( + backup_id=backup_id, source_backup=source_backup_path, expire_time=expire_time + ) + operation = copy_backup.create() + + # Wait for copy backup operation to complete. + operation.result(2100) + + # Verify that the copy backup is ready. + copy_backup.reload() + assert copy_backup.is_ready() is True + + print( + "Backup {} of size {} bytes was created at {} with version time {}".format( + copy_backup.name, + copy_backup.size_bytes, + copy_backup.create_time, + copy_backup.version_time, + ) + ) + + +# [END spanner_copy_backup] + + +# [START spanner_create_backup] +def create_backup(instance_id, database_id, backup_id, version_time): + """Creates a backup for a database.""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + # Create a backup + expire_time = datetime.utcnow() + timedelta(days=14) + backup = instance.backup( + backup_id, database=database, expire_time=expire_time, version_time=version_time + ) + operation = backup.create() + + # Wait for backup operation to complete. + operation.result(2100) + + # Verify that the backup is ready. + backup.reload() + assert backup.is_ready() is True + + # Get the name, create time and backup size. + backup.reload() + print( + "Backup {} of size {} bytes was created at {} for version of database at {}".format( + backup.name, backup.size_bytes, backup.create_time, backup.version_time + ) + ) + + +# [END spanner_create_backup] + + +# [START spanner_create_backup_with_encryption_key] +def create_backup_with_encryption_key( + instance_id, database_id, backup_id, kms_key_name +): + """Creates a backup for a database using a Customer Managed Encryption Key (CMEK).""" + from google.cloud.spanner_admin_database_v1 import \ + CreateBackupEncryptionConfig + + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + # Create a backup + expire_time = datetime.utcnow() + timedelta(days=14) + encryption_config = { + "encryption_type": CreateBackupEncryptionConfig.EncryptionType.CUSTOMER_MANAGED_ENCRYPTION, + "kms_key_name": kms_key_name, + } + backup = instance.backup( + backup_id, + database=database, + expire_time=expire_time, + encryption_config=encryption_config, + ) + operation = backup.create() + + # Wait for backup operation to complete. + operation.result(2100) + + # Verify that the backup is ready. + backup.reload() + assert backup.is_ready() is True + + # Get the name, create time, backup size and encryption key. 
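+    # (A second reload() is not strictly required here; it simply refreshes
+    # the backup metadata from the server before printing.)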
+ backup.reload() + print( + "Backup {} of size {} bytes was created at {} using encryption key {}".format( + backup.name, backup.size_bytes, backup.create_time, kms_key_name + ) + ) + + +# [END spanner_create_backup_with_encryption_key] + + +# [START spanner_create_database_with_version_retention_period] +def create_database_with_version_retention_period( + instance_id, database_id, retention_period +): + """Creates a database with a version retention period.""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + ddl_statements = [ + "CREATE TABLE Singers (" + + " SingerId INT64 NOT NULL," + + " FirstName STRING(1024)," + + " LastName STRING(1024)," + + " SingerInfo BYTES(MAX)" + + ") PRIMARY KEY (SingerId)", + "CREATE TABLE Albums (" + + " SingerId INT64 NOT NULL," + + " AlbumId INT64 NOT NULL," + + " AlbumTitle STRING(MAX)" + + ") PRIMARY KEY (SingerId, AlbumId)," + + " INTERLEAVE IN PARENT Singers ON DELETE CASCADE", + "ALTER DATABASE `{}`" + " SET OPTIONS (version_retention_period = '{}')".format( + database_id, retention_period + ), + ] + db = instance.database(database_id, ddl_statements) + operation = db.create() + + operation.result(30) + + db.reload() + + print( + "Database {} created with version retention period {} and earliest version time {}".format( + db.database_id, db.version_retention_period, db.earliest_version_time + ) + ) + + db.drop() + + +# [END spanner_create_database_with_version_retention_period] + + +# [START spanner_delete_backup] +def delete_backup(instance_id, backup_id): + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + backup = instance.backup(backup_id) + backup.reload() + + # Wait for databases that reference this backup to finish optimizing. + while backup.referencing_databases: + time.sleep(30) + backup.reload() + + # Delete the backup. + backup.delete() + + # Verify that the backup is deleted. + assert backup.exists() is False + print("Backup {} has been deleted.".format(backup.name)) + + +# [END spanner_delete_backup] + + +# [START spanner_list_backup_operations] +def list_backup_operations(instance_id, database_id, backup_id): + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + + # List the CreateBackup operations. + filter_ = ( + "(metadata.@type:type.googleapis.com/" + "google.spanner.admin.database.v1.CreateBackupMetadata) " + "AND (metadata.database:{})" + ).format(database_id) + operations = instance.list_backup_operations(filter_=filter_) + for op in operations: + metadata = op.metadata + print( + "Backup {} on database {}: {}% complete.".format( + metadata.name, metadata.database, metadata.progress.progress_percent + ) + ) + + # List the CopyBackup operations. + filter_ = ( + "(metadata.@type:type.googleapis.com/google.spanner.admin.database.v1.CopyBackupMetadata) " + "AND (metadata.source_backup:{})" + ).format(backup_id) + operations = instance.list_backup_operations(filter_=filter_) + for op in operations: + metadata = op.metadata + print( + "Backup {} on source backup {}: {}% complete.".format( + metadata.name, + metadata.source_backup, + metadata.progress.progress_percent, + ) + ) + + +# [END spanner_list_backup_operations] + + +# [START spanner_list_backups] +def list_backups(instance_id, database_id, backup_id): + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + + # List all backups. 
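+    # Called with no filter, list_backups() returns every backup in the
+    # instance.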
+    print("All backups:")
+    for backup in instance.list_backups():
+        print(backup.name)
+
+    # List all backups whose name contains the given backup ID.
+    print('All backups with backup name containing "{}":'.format(backup_id))
+    for backup in instance.list_backups(filter_="name:{}".format(backup_id)):
+        print(backup.name)
+
+    # List all backups for databases whose name contains the given database ID.
+    print('All backups with database name containing "{}":'.format(database_id))
+    for backup in instance.list_backups(filter_="database:{}".format(database_id)):
+        print(backup.name)
+
+    # List all backups that expire before a timestamp.
+    expire_time = datetime.utcnow().replace(microsecond=0) + timedelta(days=30)
+    print(
+        'All backups with expire_time before "{}-{}-{}T{}:{}:{}Z":'.format(
+            *expire_time.timetuple()
+        )
+    )
+    for backup in instance.list_backups(
+        filter_='expire_time < "{}-{}-{}T{}:{}:{}Z"'.format(*expire_time.timetuple())
+    ):
+        print(backup.name)
+
+    # List all backups larger than a given size in bytes.
+    print("All backups with backup size more than 100 bytes:")
+    for backup in instance.list_backups(filter_="size_bytes > 100"):
+        print(backup.name)
+
+    # List backups that were created after a timestamp and are also READY.
+    create_time = datetime.utcnow().replace(microsecond=0) - timedelta(days=1)
+    print(
+        'All backups created after "{}-{}-{}T{}:{}:{}Z" and are READY:'.format(
+            *create_time.timetuple()
+        )
+    )
+    for backup in instance.list_backups(
+        filter_='create_time >= "{}-{}-{}T{}:{}:{}Z" AND state:READY'.format(
+            *create_time.timetuple()
+        )
+    ):
+        print(backup.name)
+
+    print("All backups with pagination")
+    # If there are multiple pages, additional ``ListBackup``
+    # requests will be made as needed while iterating.
+    paged_backups = set()
+    for backup in instance.list_backups(page_size=2):
+        paged_backups.add(backup.name)
+    for backup in paged_backups:
+        print(backup)
+
+
+# [END spanner_list_backups]
+
+
+# [START spanner_list_database_operations]
+def list_database_operations(instance_id):
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+
+    # List the progress of database restore operations.
+    filter_ = (
+        "(metadata.@type:type.googleapis.com/"
+        "google.spanner.admin.database.v1.OptimizeRestoredDatabaseMetadata)"
+    )
+    operations = instance.list_database_operations(filter_=filter_)
+    for op in operations:
+        print(
+            "Database {} restored from backup is {}% optimized.".format(
+                op.metadata.name, op.metadata.progress.progress_percent
+            )
+        )
+
+
+# [END spanner_list_database_operations]
+
+
+# [START spanner_restore_backup]
+def restore_database(instance_id, new_database_id, backup_id):
+    """Restores a database from a backup."""
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+
+    # Start restoring an existing backup to a new database.
+    backup = instance.backup(backup_id)
+    new_database = instance.database(new_database_id)
+    operation = new_database.restore(backup)
+
+    # Wait for restore operation to complete.
+    operation.result(1600)
+
+    # Newly created database has restore information.
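+    # reload() fetches the database metadata, including restore_info, from
+    # the server.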
+    new_database.reload()
+    restore_info = new_database.restore_info
+    print(
+        "Database {} restored to {} from backup {} with version time {}.".format(
+            restore_info.backup_info.source_database,
+            new_database_id,
+            restore_info.backup_info.backup,
+            restore_info.backup_info.version_time,
+        )
+    )
+
+
+# [END spanner_restore_backup]
+
+
+# [START spanner_restore_backup_with_encryption_key]
+def restore_database_with_encryption_key(
+    instance_id, new_database_id, backup_id, kms_key_name
+):
+    """Restores a database from a backup using a Customer Managed Encryption Key (CMEK)."""
+    from google.cloud.spanner_admin_database_v1 import \
+        RestoreDatabaseEncryptionConfig
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+
+    # Start restoring an existing backup to a new database.
+    backup = instance.backup(backup_id)
+    encryption_config = {
+        "encryption_type": RestoreDatabaseEncryptionConfig.EncryptionType.CUSTOMER_MANAGED_ENCRYPTION,
+        "kms_key_name": kms_key_name,
+    }
+    new_database = instance.database(
+        new_database_id, encryption_config=encryption_config
+    )
+    operation = new_database.restore(backup)
+
+    # Wait for restore operation to complete.
+    operation.result(1600)
+
+    # Newly created database has restore information.
+    new_database.reload()
+    restore_info = new_database.restore_info
+    print(
+        "Database {} restored to {} from backup {} using encryption key {}.".format(
+            restore_info.backup_info.source_database,
+            new_database_id,
+            restore_info.backup_info.backup,
+            new_database.encryption_config.kms_key_name,
+        )
+    )
+
+
+# [END spanner_restore_backup_with_encryption_key]
+
+
+# [START spanner_update_backup]
+def update_backup(instance_id, backup_id):
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    backup = instance.backup(backup_id)
+    backup.reload()
+
+    # Expire time must be within 366 days of the create time of the backup.
+    old_expire_time = backup.expire_time
+    # New expire time should be less than the max expire time.
+    new_expire_time = min(backup.max_expire_time, old_expire_time + timedelta(days=30))
+    backup.update_expire_time(new_expire_time)
+    print(
+        "Backup {} expire time was updated from {} to {}.".format(
+            backup.name, old_expire_time, new_expire_time
+        )
+    )
+
+
+# [END spanner_update_backup]
diff --git a/samples/samples/archived/backup_snippet_test.py b/samples/samples/archived/backup_snippet_test.py
new file mode 100644
index 0000000000..888124ffad
--- /dev/null
+++ b/samples/samples/archived/backup_snippet_test.py
@@ -0,0 +1,196 @@
+# Copyright 2024 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License. 
+import uuid
+
+import backup_snippet
+import pytest
+from google.api_core.exceptions import DeadlineExceeded
+from test_utils.retry import RetryErrors
+
+
+@pytest.fixture(scope="module")
+def sample_name():
+    return "backup"
+
+
+def unique_database_id():
+    """Creates a unique id for the database."""
+    return f"test-db-{uuid.uuid4().hex[:10]}"
+
+
+def unique_backup_id():
+    """Creates a unique id for the backup."""
+    return f"test-backup-{uuid.uuid4().hex[:10]}"
+
+
+RESTORE_DB_ID = unique_database_id()
+BACKUP_ID = unique_backup_id()
+CMEK_RESTORE_DB_ID = unique_database_id()
+CMEK_BACKUP_ID = unique_backup_id()
+RETENTION_DATABASE_ID = unique_database_id()
+RETENTION_PERIOD = "7d"
+COPY_BACKUP_ID = unique_backup_id()
+
+
+@pytest.mark.dependency(name="create_backup")
+def test_create_backup(capsys, instance_id, sample_database):
+    with sample_database.snapshot() as snapshot:
+        results = snapshot.execute_sql("SELECT CURRENT_TIMESTAMP()")
+        version_time = list(results)[0][0]
+
+    backup_snippet.create_backup(
+        instance_id,
+        sample_database.database_id,
+        BACKUP_ID,
+        version_time,
+    )
+    out, _ = capsys.readouterr()
+    assert BACKUP_ID in out
+
+
+@pytest.mark.dependency(name="copy_backup", depends=["create_backup"])
+def test_copy_backup(capsys, instance_id, spanner_client):
+    source_backup_path = (
+        spanner_client.project_name
+        + "/instances/"
+        + instance_id
+        + "/backups/"
+        + BACKUP_ID
+    )
+    backup_snippet.copy_backup(instance_id, COPY_BACKUP_ID, source_backup_path)
+    out, _ = capsys.readouterr()
+    assert COPY_BACKUP_ID in out
+
+
+@pytest.mark.dependency(name="create_backup_with_encryption_key")
+def test_create_backup_with_encryption_key(
+    capsys,
+    instance_id,
+    sample_database,
+    kms_key_name,
+):
+    backup_snippet.create_backup_with_encryption_key(
+        instance_id,
+        sample_database.database_id,
+        CMEK_BACKUP_ID,
+        kms_key_name,
+    )
+    out, _ = capsys.readouterr()
+    assert CMEK_BACKUP_ID in out
+    assert kms_key_name in out
+
+
+@pytest.mark.skip(reason="same test passes on unarchived test suite, "
+                  "but fails here. 
Needs investigation") +@pytest.mark.dependency(depends=["create_backup_with_encryption_key"]) +@RetryErrors(exception=DeadlineExceeded, max_tries=2) +def test_restore_database_with_encryption_key( + capsys, + instance_id, + sample_database, + kms_key_name, +): + backup_snippet.restore_database_with_encryption_key( + instance_id, CMEK_RESTORE_DB_ID, CMEK_BACKUP_ID, kms_key_name + ) + out, _ = capsys.readouterr() + assert (sample_database.database_id + " restored to ") in out + assert (CMEK_RESTORE_DB_ID + " from backup ") in out + assert CMEK_BACKUP_ID in out + assert kms_key_name in out + + +@pytest.mark.dependency(depends=["create_backup", "copy_backup"]) +def test_list_backup_operations(capsys, instance_id, sample_database): + backup_snippet.list_backup_operations( + instance_id, sample_database.database_id, BACKUP_ID + ) + out, _ = capsys.readouterr() + assert BACKUP_ID in out + assert sample_database.database_id in out + assert COPY_BACKUP_ID in out + print(out) + + +@pytest.mark.dependency(name="list_backup", depends=["create_backup", "copy_backup"]) +def test_list_backups( + capsys, + instance_id, + sample_database, +): + backup_snippet.list_backups( + instance_id, + sample_database.database_id, + BACKUP_ID, + ) + out, _ = capsys.readouterr() + id_count = out.count(BACKUP_ID) + assert id_count == 7 + + +@pytest.mark.dependency(depends=["create_backup"]) +def test_update_backup(capsys, instance_id): + backup_snippet.update_backup(instance_id, BACKUP_ID) + out, _ = capsys.readouterr() + assert BACKUP_ID in out + + +@pytest.mark.dependency(depends=["create_backup", "copy_backup", "list_backup"]) +def test_delete_backup(capsys, instance_id): + backup_snippet.delete_backup(instance_id, BACKUP_ID) + out, _ = capsys.readouterr() + assert BACKUP_ID in out + backup_snippet.delete_backup(instance_id, COPY_BACKUP_ID) + out, _ = capsys.readouterr() + assert "has been deleted." in out + assert COPY_BACKUP_ID in out + + +@pytest.mark.dependency(depends=["create_backup"]) +def test_cancel_backup(capsys, instance_id, sample_database): + backup_snippet.cancel_backup( + instance_id, + sample_database.database_id, + BACKUP_ID, + ) + out, _ = capsys.readouterr() + cancel_success = "Backup creation was successfully cancelled." in out + cancel_failure = ("Backup was created before the cancel completed." in out) and ( + "Backup deleted." in out + ) + assert cancel_success or cancel_failure + + +@RetryErrors(exception=DeadlineExceeded, max_tries=2) +def test_create_database_with_retention_period(capsys, sample_instance): + backup_snippet.create_database_with_version_retention_period( + sample_instance.instance_id, + RETENTION_DATABASE_ID, + RETENTION_PERIOD, + ) + out, _ = capsys.readouterr() + assert (RETENTION_DATABASE_ID + " created with ") in out + assert ("retention period " + RETENTION_PERIOD) in out diff --git a/samples/samples/archived/pg_samples.py b/samples/samples/archived/pg_samples.py new file mode 100644 index 0000000000..2d0dd0e5a9 --- /dev/null +++ b/samples/samples/archived/pg_samples.py @@ -0,0 +1,311 @@ +#!/usr/bin/env python + +# Copyright 2024 Google, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""This application demonstrates how to do basic operations using the Cloud
+Spanner PostgreSQL dialect.
+For more information, see the README.rst under /spanner.
+"""
+from google.cloud import spanner, spanner_admin_database_v1
+from google.cloud.spanner_admin_database_v1.types.common import DatabaseDialect
+
+OPERATION_TIMEOUT_SECONDS = 240
+
+
+# [START spanner_postgresql_add_column]
+def add_column(instance_id, database_id):
+    """Adds a new column to the Albums table in the example database."""
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    operation = database.update_ddl(
+        ["ALTER TABLE Albums ADD COLUMN MarketingBudget BIGINT"]
+    )
+
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+    print("Added the MarketingBudget column.")
+
+
+# [END spanner_postgresql_add_column]
+
+
+# [START spanner_postgresql_jsonb_add_column]
+def add_jsonb_column(instance_id, database_id):
+    """
+    Alters the Venues table in the database, adding a JSONB column.
+    You can create the table by running the `create_table_with_datatypes`
+    sample or by running this DDL statement against your database:
+    CREATE TABLE Venues (
+      VenueId BIGINT NOT NULL,
+      VenueName character varying(100),
+      VenueInfo BYTEA,
+      Capacity BIGINT,
+      OutdoorVenue BOOL,
+      PopularityScore FLOAT8,
+      Revenue NUMERIC,
+      LastUpdateTime SPANNER.COMMIT_TIMESTAMP NOT NULL,
+      PRIMARY KEY (VenueId))
+    """
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    operation = database.update_ddl(
+        ["ALTER TABLE Venues ADD COLUMN VenueDetails JSONB"]
+    )
+
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+    print(
+        'Altered table "Venues" on database {} on instance {}.'.format(
+            database_id, instance_id
+        )
+    )
+
+
+# [END spanner_postgresql_jsonb_add_column]
+
+
+# [START spanner_postgresql_alter_sequence]
+def alter_sequence(instance_id, database_id):
+    """Alters the sequence and inserts data."""
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    operation = database.update_ddl(["ALTER SEQUENCE Seq SKIP RANGE 1000 5000000"])
+
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+    print(
+        "Altered Seq sequence to skip an inclusive range between 1000 and 5000000 on database {} on instance {}".format(
+            database_id, instance_id
+        )
+    )
+
+    def insert_customers(transaction):
+        results = transaction.execute_sql(
+            "INSERT INTO Customers (CustomerName) VALUES "
+            "('Lea'), "
+            "('Cataline'), "
+            "('Smith') "
+            "RETURNING CustomerId"
+        )
+        for result in results:
+            print("Inserted customer record with Customer Id: {}".format(*result))
+        print(
+            "Number of customer records inserted is {}".format(
+                results.stats.row_count_exact
+            )
+        )
+
+    database.run_in_transaction(insert_customers)
+
+
+# 
[END spanner_postgresql_alter_sequence]
+
+
+# [START spanner_postgresql_create_database]
+def create_database(instance_id, database_id):
+    """Creates a PostgreSQL database and tables for sample data."""
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+
+    database = instance.database(
+        database_id,
+        database_dialect=DatabaseDialect.POSTGRESQL,
+    )
+
+    operation = database.create()
+
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+    create_table_using_ddl(database.name)
+    print("Created database {} on instance {}".format(database_id, instance_id))
+
+
+def create_table_using_ddl(database_name):
+    spanner_client = spanner.Client()
+    request = spanner_admin_database_v1.UpdateDatabaseDdlRequest(
+        database=database_name,
+        statements=[
+            """CREATE TABLE Singers (
+  SingerId bigint NOT NULL,
+  FirstName character varying(1024),
+  LastName character varying(1024),
+  SingerInfo bytea,
+  FullName character varying(2048)
+    GENERATED ALWAYS AS (FirstName || ' ' || LastName) STORED,
+  PRIMARY KEY (SingerId)
+)""",
+            """CREATE TABLE Albums (
+  SingerId bigint NOT NULL,
+  AlbumId bigint NOT NULL,
+  AlbumTitle character varying(1024),
+  PRIMARY KEY (SingerId, AlbumId)
+) INTERLEAVE IN PARENT Singers ON DELETE CASCADE""",
+        ],
+    )
+    operation = spanner_client.database_admin_api.update_database_ddl(request)
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+
+# [END spanner_postgresql_create_database]
+
+
+# [START spanner_postgresql_create_sequence]
+def create_sequence(instance_id, database_id):
+    """Creates the sequence and inserts data."""
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    request = spanner_admin_database_v1.UpdateDatabaseDdlRequest(
+        database=database.name,
+        statements=[
+            "CREATE SEQUENCE Seq BIT_REVERSED_POSITIVE",
+            """CREATE TABLE Customers (
+  CustomerId BIGINT DEFAULT nextval('Seq'),
+  CustomerName character varying(1024),
+  PRIMARY KEY (CustomerId)
+)""",
+        ],
+    )
+    operation = spanner_client.database_admin_api.update_database_ddl(request)
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+    print(
+        "Created Seq sequence and Customers table, where the key column CustomerId uses the sequence as a default value on database {} on instance {}".format(
+            database_id, instance_id
+        )
+    )
+
+    def insert_customers(transaction):
+        results = transaction.execute_sql(
+            "INSERT INTO Customers (CustomerName) VALUES "
+            "('Alice'), "
+            "('David'), "
+            "('Marc') "
+            "RETURNING CustomerId"
+        )
+        for result in results:
+            print("Inserted customer record with Customer Id: {}".format(*result))
+        print(
+            "Number of customer records inserted is {}".format(
+                results.stats.row_count_exact
+            )
+        )
+
+    database.run_in_transaction(insert_customers)
+
+
+# [END spanner_postgresql_create_sequence]
+
+
+# [START spanner_postgresql_create_storing_index]
+def add_storing_index(instance_id, database_id):
+    """Adds a storing index to the example database."""
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    operation = database.update_ddl(
+        [
+            "CREATE INDEX AlbumsByAlbumTitle2 ON Albums(AlbumTitle) "
+            "INCLUDE (MarketingBudget)"
+        ]
+    )
+
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+    print("Added the AlbumsByAlbumTitle2 index.")
+
+
+# [END 
spanner_postgresql_create_storing_index] + + +# [START spanner_postgresql_drop_sequence] +def drop_sequence(instance_id, database_id): + """Drops the Sequence""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + operation = database.update_ddl( + [ + "ALTER TABLE Customers ALTER COLUMN CustomerId DROP DEFAULT", + "DROP SEQUENCE Seq", + ] + ) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Altered Customers table to drop DEFAULT from CustomerId column and dropped the Seq sequence on database {} on instance {}".format( + database_id, instance_id + ) + ) + + +# [END spanner_postgresql_drop_sequence] + + +def create_table_with_datatypes(instance_id, database_id): + """Creates a table with supported datatypes.""" + # [START spanner_postgresql_create_table_with_datatypes] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + request = spanner_admin_database_v1.UpdateDatabaseDdlRequest( + database=database.name, + statements=[ + """CREATE TABLE Venues ( + VenueId BIGINT NOT NULL, + VenueName character varying(100), + VenueInfo BYTEA, + Capacity BIGINT, + OutdoorVenue BOOL, + PopularityScore FLOAT8, + Revenue NUMERIC, + LastUpdateTime SPANNER.COMMIT_TIMESTAMP NOT NULL, + PRIMARY KEY (VenueId))""" + ], + ) + operation = spanner_client.database_admin_api.update_database_ddl(request) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Created Venues table on database {} on instance {}".format( + database_id, instance_id + ) + ) + # [END spanner_postgresql_create_table_with_datatypes] diff --git a/samples/samples/archived/pg_samples_test.py b/samples/samples/archived/pg_samples_test.py new file mode 100644 index 0000000000..3863f5aa56 --- /dev/null +++ b/samples/samples/archived/pg_samples_test.py @@ -0,0 +1,178 @@ +# Copyright 2024 Google, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+import uuid
+
+import pg_samples as samples
+import pytest
+from google.api_core import exceptions
+from google.cloud.spanner_admin_database_v1.types.common import DatabaseDialect
+from test_utils.retry import RetryErrors
+
+CREATE_TABLE_SINGERS = """\
+CREATE TABLE Singers (
+  SingerId BIGINT NOT NULL,
+  FirstName CHARACTER VARYING(1024),
+  LastName CHARACTER VARYING(1024),
+  SingerInfo BYTEA,
+  FullName CHARACTER VARYING(2048)
+    GENERATED ALWAYS AS (FirstName || ' ' || LastName) STORED,
+  PRIMARY KEY (SingerId)
+)
+"""
+
+CREATE_TABLE_ALBUMS = """\
+CREATE TABLE Albums (
+  SingerId BIGINT NOT NULL,
+  AlbumId BIGINT NOT NULL,
+  AlbumTitle CHARACTER VARYING(1024),
+  PRIMARY KEY (SingerId, AlbumId)
+) INTERLEAVE IN PARENT Singers ON DELETE CASCADE
+"""
+
+retry_429 = RetryErrors(exceptions.ResourceExhausted, delay=15)
+
+
+@pytest.fixture(scope="module")
+def sample_name():
+    return "pg_snippets"
+
+
+@pytest.fixture(scope="module")
+def database_dialect():
+    """Spanner dialect to be used for this sample.
+    The dialect is used to initialize the dialect for the database.
+    It can either be GoogleStandardSql or PostgreSql.
+    """
+    return DatabaseDialect.POSTGRESQL
+
+
+@pytest.fixture(scope="module")
+def create_instance_id():
+    """Id for the instance used by the create-instance sample."""
+    return f"create-instance-{uuid.uuid4().hex[:10]}"
+
+
+@pytest.fixture(scope="module")
+def lci_instance_id():
+    """Id for the low-cost instance."""
+    return f"lci-instance-{uuid.uuid4().hex[:10]}"
+
+
+@pytest.fixture(scope="module")
+def database_id():
+    return f"test-db-{uuid.uuid4().hex[:10]}"
+
+
+@pytest.fixture(scope="module")
+def create_database_id():
+    return f"create-db-{uuid.uuid4().hex[:10]}"
+
+
+@pytest.fixture(scope="module")
+def cmek_database_id():
+    return f"cmek-db-{uuid.uuid4().hex[:10]}"
+
+
+@pytest.fixture(scope="module")
+def default_leader_database_id():
+    return f"leader_db_{uuid.uuid4().hex[:10]}"
+
+
+@pytest.fixture(scope="module")
+def database_ddl():
+    """Sequence of DDL statements used to set up the database.
+    Sample testcase modules can override as needed.
+    """
+    return [CREATE_TABLE_SINGERS, CREATE_TABLE_ALBUMS]
+
+
+@pytest.fixture(scope="module")
+def default_leader():
+    """Default leader for multi-region instances."""
+    return "us-east4"
+
+
+@pytest.mark.dependency(name="create_database")
+def test_create_database_explicit(sample_instance, create_database_id):
+    # Rather than re-use 'sample_database', we create a new database, to
+    # ensure that the 'create_database' snippet is tested.
+    samples.create_database(sample_instance.instance_id, create_database_id)
+    database = sample_instance.database(create_database_id)
+    database.drop()
+
+
+@pytest.mark.dependency(name="create_table_with_datatypes")
+def test_create_table_with_datatypes(capsys, instance_id, sample_database):
+    samples.create_table_with_datatypes(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "Created Venues table on database" in out
+
+
+@pytest.mark.dependency(name="add_column", depends=["create_database"])
+def test_add_column(capsys, instance_id, sample_database):
+    samples.add_column(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "Added the MarketingBudget column." 
in out + + +@pytest.mark.dependency(name="add_storing_index", depends=["create_database"]) +def test_add_storing_index(capsys, instance_id, sample_database): + samples.add_storing_index(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Added the AlbumsByAlbumTitle2 index." in out + + +@pytest.mark.dependency( + name="add_jsonb_column", depends=["create_table_with_datatypes"] +) +def test_add_jsonb_column(capsys, instance_id, sample_database): + samples.add_jsonb_column(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Waiting for operation to complete..." in out + assert 'Altered table "Venues" on database ' in out + + +@pytest.mark.dependency(name="create_sequence") +def test_create_sequence(capsys, instance_id, bit_reverse_sequence_database): + samples.create_sequence(instance_id, bit_reverse_sequence_database.database_id) + out, _ = capsys.readouterr() + assert ( + "Created Seq sequence and Customers table, where the key column CustomerId uses the sequence as a default value on database" + in out + ) + assert "Number of customer records inserted is 3" in out + assert "Inserted customer record with Customer Id:" in out + + +@pytest.mark.dependency(name="alter_sequence", depends=["create_sequence"]) +def test_alter_sequence(capsys, instance_id, bit_reverse_sequence_database): + samples.alter_sequence(instance_id, bit_reverse_sequence_database.database_id) + out, _ = capsys.readouterr() + assert ( + "Altered Seq sequence to skip an inclusive range between 1000 and 5000000 on database" + in out + ) + assert "Number of customer records inserted is 3" in out + assert "Inserted customer record with Customer Id:" in out + + +@pytest.mark.dependency(depends=["alter_sequence"]) +def test_drop_sequence(capsys, instance_id, bit_reverse_sequence_database): + samples.drop_sequence(instance_id, bit_reverse_sequence_database.database_id) + out, _ = capsys.readouterr() + assert ( + "Altered Customers table to drop DEFAULT from CustomerId column and dropped the Seq sequence on database" + in out + ) diff --git a/samples/samples/archived/samples.py b/samples/samples/archived/samples.py new file mode 100644 index 0000000000..0f930d4a35 --- /dev/null +++ b/samples/samples/archived/samples.py @@ -0,0 +1,824 @@ +#!/usr/bin/env python + +# Copyright 2024 Google, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""This application demonstrates how to do basic operations using Cloud +Spanner. + +For more information, see the README.rst under /spanner. 
+""" + +import time + +from google.cloud import spanner +from google.iam.v1 import policy_pb2 +from google.type import expr_pb2 + +OPERATION_TIMEOUT_SECONDS = 240 + + +def add_and_drop_database_roles(instance_id, database_id): + """Showcases how to manage a user defined database role.""" + # [START spanner_add_and_drop_database_role] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + role_parent = "new_parent" + role_child = "new_child" + + operation = database.update_ddl( + [ + "CREATE ROLE {}".format(role_parent), + "GRANT SELECT ON TABLE Singers TO ROLE {}".format(role_parent), + "CREATE ROLE {}".format(role_child), + "GRANT ROLE {} TO ROLE {}".format(role_parent, role_child), + ] + ) + operation.result(OPERATION_TIMEOUT_SECONDS) + print( + "Created roles {} and {} and granted privileges".format(role_parent, role_child) + ) + + operation = database.update_ddl( + [ + "REVOKE ROLE {} FROM ROLE {}".format(role_parent, role_child), + "DROP ROLE {}".format(role_child), + ] + ) + operation.result(OPERATION_TIMEOUT_SECONDS) + print("Revoked privileges and dropped role {}".format(role_child)) + + # [END spanner_add_and_drop_database_role] + + +# [START spanner_add_column] +def add_column(instance_id, database_id): + """Adds a new column to the Albums table in the example database.""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + operation = database.update_ddl( + ["ALTER TABLE Albums ADD COLUMN MarketingBudget INT64"] + ) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print("Added the MarketingBudget column.") + + +# [END spanner_add_column] + + +# [START spanner_add_json_column] +def add_json_column(instance_id, database_id): + """Adds a new JSON column to the Venues table in the example database.""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + + database = instance.database(database_id) + + operation = database.update_ddl(["ALTER TABLE Venues ADD COLUMN VenueDetails JSON"]) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + 'Altered table "Venues" on database {} on instance {}.'.format( + database_id, instance_id + ) + ) + + +# [END spanner_add_json_column] + + +# [START spanner_add_numeric_column] +def add_numeric_column(instance_id, database_id): + """Adds a new NUMERIC column to the Venues table in the example database.""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + + database = instance.database(database_id) + + operation = database.update_ddl(["ALTER TABLE Venues ADD COLUMN Revenue NUMERIC"]) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + 'Altered table "Venues" on database {} on instance {}.'.format( + database_id, instance_id + ) + ) + + +# [END spanner_add_numeric_column] + + +# [START spanner_add_timestamp_column] +def add_timestamp_column(instance_id, database_id): + """Adds a new TIMESTAMP column to the Albums table in the example database.""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + + database = instance.database(database_id) + + operation = database.update_ddl( + [ + "ALTER TABLE Albums ADD COLUMN LastUpdateTime TIMESTAMP " + 
"OPTIONS(allow_commit_timestamp=true)" + ] + ) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + 'Altered table "Albums" on database {} on instance {}.'.format( + database_id, instance_id + ) + ) + + +# [END spanner_add_timestamp_column] + + +# [START spanner_alter_sequence] +def alter_sequence(instance_id, database_id): + """Alters the Sequence and insert data""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + operation = database.update_ddl( + [ + "ALTER SEQUENCE Seq SET OPTIONS (skip_range_min = 1000, skip_range_max = 5000000)" + ] + ) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Altered Seq sequence to skip an inclusive range between 1000 and 5000000 on database {} on instance {}".format( + database_id, instance_id + ) + ) + + def insert_customers(transaction): + results = transaction.execute_sql( + "INSERT INTO Customers (CustomerName) VALUES " + "('Lea'), " + "('Cataline'), " + "('Smith') " + "THEN RETURN CustomerId" + ) + for result in results: + print("Inserted customer record with Customer Id: {}".format(*result)) + print( + "Number of customer records inserted is {}".format( + results.stats.row_count_exact + ) + ) + + database.run_in_transaction(insert_customers) + + +# [END spanner_alter_sequence] + + +# [START spanner_alter_table_with_foreign_key_delete_cascade] +def alter_table_with_foreign_key_delete_cascade(instance_id, database_id): + """Alters a table with foreign key delete cascade action""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + operation = database.update_ddl( + [ + """ALTER TABLE ShoppingCarts + ADD CONSTRAINT FKShoppingCartsCustomerName + FOREIGN KEY (CustomerName) + REFERENCES Customers(CustomerName) + ON DELETE CASCADE""" + ] + ) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + """Altered ShoppingCarts table with FKShoppingCartsCustomerName + foreign key constraint on database {} on instance {}""".format( + database_id, instance_id + ) + ) + + +# [END spanner_alter_table_with_foreign_key_delete_cascade] + + +# [START spanner_create_database] +def create_database(instance_id, database_id): + """Creates a database and tables for sample data.""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + + database = instance.database( + database_id, + ddl_statements=[ + """CREATE TABLE Singers ( + SingerId INT64 NOT NULL, + FirstName STRING(1024), + LastName STRING(1024), + SingerInfo BYTES(MAX), + FullName STRING(2048) AS ( + ARRAY_TO_STRING([FirstName, LastName], " ") + ) STORED + ) PRIMARY KEY (SingerId)""", + """CREATE TABLE Albums ( + SingerId INT64 NOT NULL, + AlbumId INT64 NOT NULL, + AlbumTitle STRING(MAX) + ) PRIMARY KEY (SingerId, AlbumId), + INTERLEAVE IN PARENT Singers ON DELETE CASCADE""", + ], + ) + + operation = database.create() + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print("Created database {} on instance {}".format(database_id, instance_id)) + + +# [END spanner_create_database] + + +# [START spanner_create_database_with_default_leader] +def create_database_with_default_leader(instance_id, database_id, default_leader): + """Creates a database with tables with a default leader.""" + spanner_client = 
spanner.Client() + instance = spanner_client.instance(instance_id) + + database = instance.database( + database_id, + ddl_statements=[ + """CREATE TABLE Singers ( + SingerId INT64 NOT NULL, + FirstName STRING(1024), + LastName STRING(1024), + SingerInfo BYTES(MAX) + ) PRIMARY KEY (SingerId)""", + """CREATE TABLE Albums ( + SingerId INT64 NOT NULL, + AlbumId INT64 NOT NULL, + AlbumTitle STRING(MAX) + ) PRIMARY KEY (SingerId, AlbumId), + INTERLEAVE IN PARENT Singers ON DELETE CASCADE""", + "ALTER DATABASE {}" + " SET OPTIONS (default_leader = '{}')".format(database_id, default_leader), + ], + ) + operation = database.create() + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + database.reload() + + print( + "Database {} created with default leader {}".format( + database.name, database.default_leader + ) + ) + + +# [END spanner_create_database_with_default_leader] + + +# [START spanner_create_database_with_encryption_key] +def create_database_with_encryption_key(instance_id, database_id, kms_key_name): + """Creates a database with tables using a Customer Managed Encryption Key (CMEK).""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + + database = instance.database( + database_id, + ddl_statements=[ + """CREATE TABLE Singers ( + SingerId INT64 NOT NULL, + FirstName STRING(1024), + LastName STRING(1024), + SingerInfo BYTES(MAX) + ) PRIMARY KEY (SingerId)""", + """CREATE TABLE Albums ( + SingerId INT64 NOT NULL, + AlbumId INT64 NOT NULL, + AlbumTitle STRING(MAX) + ) PRIMARY KEY (SingerId, AlbumId), + INTERLEAVE IN PARENT Singers ON DELETE CASCADE""", + ], + encryption_config={"kms_key_name": kms_key_name}, + ) + + operation = database.create() + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Database {} created with encryption key {}".format( + database.name, database.encryption_config.kms_key_name + ) + ) + + +# [END spanner_create_database_with_encryption_key] + + +# [START spanner_create_index] +def add_index(instance_id, database_id): + """Adds a simple index to the example database.""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + operation = database.update_ddl( + ["CREATE INDEX AlbumsByAlbumTitle ON Albums(AlbumTitle)"] + ) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print("Added the AlbumsByAlbumTitle index.") + + +# [END spanner_create_index] + + +# [START spanner_create_instance] +def create_instance(instance_id): + """Creates an instance.""" + spanner_client = spanner.Client() + + config_name = "{}/instanceConfigs/regional-us-central1".format( + spanner_client.project_name + ) + + instance = spanner_client.instance( + instance_id, + configuration_name=config_name, + display_name="This is a display name.", + node_count=1, + labels={ + "cloud_spanner_samples": "true", + "sample_name": "snippets-create_instance-explicit", + "created": str(int(time.time())), + }, + ) + + operation = instance.create() + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print("Created instance {}".format(instance_id)) + + +# [END spanner_create_instance] + + +# [START spanner_create_instance_with_processing_units] +def create_instance_with_processing_units(instance_id, processing_units): + """Creates an instance.""" + spanner_client = spanner.Client() + + config_name = 
"{}/instanceConfigs/regional-us-central1".format( + spanner_client.project_name + ) + + instance = spanner_client.instance( + instance_id, + configuration_name=config_name, + display_name="This is a display name.", + processing_units=processing_units, + labels={ + "cloud_spanner_samples": "true", + "sample_name": "snippets-create_instance_with_processing_units", + "created": str(int(time.time())), + }, + ) + + operation = instance.create() + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Created instance {} with {} processing units".format( + instance_id, instance.processing_units + ) + ) + + +# [END spanner_create_instance_with_processing_units] + + +# [START spanner_create_sequence] +def create_sequence(instance_id, database_id): + """Creates the Sequence and insert data""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + operation = database.update_ddl( + [ + "CREATE SEQUENCE Seq OPTIONS (sequence_kind = 'bit_reversed_positive')", + """CREATE TABLE Customers ( + CustomerId INT64 DEFAULT (GET_NEXT_SEQUENCE_VALUE(Sequence Seq)), + CustomerName STRING(1024) + ) PRIMARY KEY (CustomerId)""", + ] + ) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Created Seq sequence and Customers table, where the key column CustomerId uses the sequence as a default value on database {} on instance {}".format( + database_id, instance_id + ) + ) + + def insert_customers(transaction): + results = transaction.execute_sql( + "INSERT INTO Customers (CustomerName) VALUES " + "('Alice'), " + "('David'), " + "('Marc') " + "THEN RETURN CustomerId" + ) + for result in results: + print("Inserted customer record with Customer Id: {}".format(*result)) + print( + "Number of customer records inserted is {}".format( + results.stats.row_count_exact + ) + ) + + database.run_in_transaction(insert_customers) + + +# [END spanner_create_sequence] + + +# [START spanner_create_storing_index] +def add_storing_index(instance_id, database_id): + """Adds an storing index to the example database.""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + operation = database.update_ddl( + [ + "CREATE INDEX AlbumsByAlbumTitle2 ON Albums(AlbumTitle)" + "STORING (MarketingBudget)" + ] + ) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print("Added the AlbumsByAlbumTitle2 index.") + + +# [END spanner_create_storing_index] + + +def create_table_with_datatypes(instance_id, database_id): + """Creates a table with supported datatypes.""" + # [START spanner_create_table_with_datatypes] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + operation = database.update_ddl( + [ + """CREATE TABLE Venues ( + VenueId INT64 NOT NULL, + VenueName STRING(100), + VenueInfo BYTES(MAX), + Capacity INT64, + AvailableDates ARRAY, + LastContactDate DATE, + OutdoorVenue BOOL, + PopularityScore FLOAT64, + LastUpdateTime TIMESTAMP NOT NULL + OPTIONS(allow_commit_timestamp=true) + ) PRIMARY KEY (VenueId)""" + ] + ) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Created Venues table on database {} on instance 
{}".format( + database_id, instance_id + ) + ) + # [END spanner_create_table_with_datatypes] + + +# [START spanner_create_table_with_foreign_key_delete_cascade] +def create_table_with_foreign_key_delete_cascade(instance_id, database_id): + """Creates a table with foreign key delete cascade action""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + operation = database.update_ddl( + [ + """CREATE TABLE Customers ( + CustomerId INT64 NOT NULL, + CustomerName STRING(62) NOT NULL, + ) PRIMARY KEY (CustomerId) + """, + """ + CREATE TABLE ShoppingCarts ( + CartId INT64 NOT NULL, + CustomerId INT64 NOT NULL, + CustomerName STRING(62) NOT NULL, + CONSTRAINT FKShoppingCartsCustomerId FOREIGN KEY (CustomerId) + REFERENCES Customers (CustomerId) ON DELETE CASCADE + ) PRIMARY KEY (CartId) + """, + ] + ) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + """Created Customers and ShoppingCarts table with FKShoppingCartsCustomerId + foreign key constraint on database {} on instance {}""".format( + database_id, instance_id + ) + ) + + +# [END spanner_create_table_with_foreign_key_delete_cascade] + + +# [START spanner_create_table_with_timestamp_column] +def create_table_with_timestamp(instance_id, database_id): + """Creates a table with a COMMIT_TIMESTAMP column.""" + + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + operation = database.update_ddl( + [ + """CREATE TABLE Performances ( + SingerId INT64 NOT NULL, + VenueId INT64 NOT NULL, + EventDate Date, + Revenue INT64, + LastUpdateTime TIMESTAMP NOT NULL + OPTIONS(allow_commit_timestamp=true) + ) PRIMARY KEY (SingerId, VenueId, EventDate), + INTERLEAVE IN PARENT Singers ON DELETE CASCADE""" + ] + ) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Created Performances table on database {} on instance {}".format( + database_id, instance_id + ) + ) + + +# [END spanner_create_table_with_timestamp_column] + + +# [START spanner_drop_foreign_key_constraint_delete_cascade] +def drop_foreign_key_constraint_delete_cascade(instance_id, database_id): + """Alter table to drop foreign key delete cascade action""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + operation = database.update_ddl( + [ + """ALTER TABLE ShoppingCarts + DROP CONSTRAINT FKShoppingCartsCustomerName""" + ] + ) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + """Altered ShoppingCarts table to drop FKShoppingCartsCustomerName + foreign key constraint on database {} on instance {}""".format( + database_id, instance_id + ) + ) + + +# [END spanner_drop_foreign_key_constraint_delete_cascade] + + +# [START spanner_drop_sequence] +def drop_sequence(instance_id, database_id): + """Drops the Sequence""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + operation = database.update_ddl( + [ + "ALTER TABLE Customers ALTER COLUMN CustomerId DROP DEFAULT", + "DROP SEQUENCE Seq", + ] + ) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Altered Customers table to drop DEFAULT from CustomerId column and dropped the Seq sequence on database {} 
on instance {}".format( + database_id, instance_id + ) + ) + + +# [END spanner_drop_sequence] + + +def enable_fine_grained_access( + instance_id, + database_id, + iam_member="user:alice@example.com", + database_role="new_parent", + title="condition title", +): + """Showcases how to enable fine grained access control.""" + # [START spanner_enable_fine_grained_access] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + # iam_member = "user:alice@example.com" + # database_role = "new_parent" + # title = "condition title" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + # The policy in the response from getDatabaseIAMPolicy might use the policy version + # that you specified, or it might use a lower policy version. For example, if you + # specify version 3, but the policy has no conditional role bindings, the response + # uses version 1. Valid values are 0, 1, and 3. + policy = database.get_iam_policy(3) + if policy.version < 3: + policy.version = 3 + + new_binding = policy_pb2.Binding( + role="roles/spanner.fineGrainedAccessUser", + members=[iam_member], + condition=expr_pb2.Expr( + title=title, + expression=f'resource.name.endsWith("/databaseRoles/{database_role}")', + ), + ) + + policy.version = 3 + policy.bindings.append(new_binding) + database.set_iam_policy(policy) + + new_policy = database.get_iam_policy(3) + print( + f"Enabled fine-grained access in IAM. New policy has version {new_policy.version}" + ) + # [END spanner_enable_fine_grained_access] + + +def list_database_roles(instance_id, database_id): + """Showcases how to list Database Roles.""" + # [START spanner_list_database_roles] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + # List database roles. 
+ print("Database Roles are:") + for role in database.list_database_roles(): + print(role.name.split("/")[-1]) + # [END spanner_list_database_roles] + + +# [START spanner_list_databases] +def list_databases(instance_id): + """Lists databases and their leader options.""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + + databases = list(instance.list_databases()) + for database in databases: + print( + "Database {} has default leader {}".format( + database.name, database.default_leader + ) + ) + + +# [END spanner_list_databases] + + +# [START spanner_list_instance_configs] +def list_instance_config(): + """Lists the available instance configurations.""" + spanner_client = spanner.Client() + configs = spanner_client.list_instance_configs() + for config in configs: + print( + "Available leader options for instance config {}: {}".format( + config.name, config.leader_options + ) + ) + + +# [END spanner_list_instance_configs] + + +# [START spanner_update_database] +def update_database(instance_id, database_id): + """Updates the drop protection setting for a database.""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + + db = instance.database(database_id) + db.enable_drop_protection = True + + operation = db.update(["enable_drop_protection"]) + + print("Waiting for update operation for {} to complete...".format(db.name)) + operation.result(OPERATION_TIMEOUT_SECONDS) + + print("Updated database {}.".format(db.name)) + + +# [END spanner_update_database] + + +# [START spanner_update_database_with_default_leader] +def update_database_with_default_leader(instance_id, database_id, default_leader): + """Updates a database with tables with a default leader.""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + + database = instance.database(database_id) + + operation = database.update_ddl( + [ + "ALTER DATABASE {}" + " SET OPTIONS (default_leader = '{}')".format(database_id, default_leader) + ] + ) + operation.result(OPERATION_TIMEOUT_SECONDS) + + database.reload() + + print( + "Database {} updated with default leader {}".format( + database.name, database.default_leader + ) + ) + + +# [END spanner_update_database_with_default_leader] diff --git a/samples/samples/archived/samples_test.py b/samples/samples/archived/samples_test.py new file mode 100644 index 0000000000..6435dc5311 --- /dev/null +++ b/samples/samples/archived/samples_test.py @@ -0,0 +1,374 @@ +#!/usr/bin/env python + +# Copyright 2024 Google, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""This application demonstrates how to do basic operations using Cloud +Spanner. +For more information, see the README.rst under /spanner. 
+""" + +import uuid + +import pytest +from google.api_core import exceptions +from google.cloud import spanner +from google.cloud.spanner_admin_database_v1.types.common import DatabaseDialect +from test_utils.retry import RetryErrors + +import samples + +CREATE_TABLE_SINGERS = """\ +CREATE TABLE Singers ( + SingerId INT64 NOT NULL, + FirstName STRING(1024), + LastName STRING(1024), + SingerInfo BYTES(MAX), + FullName STRING(2048) AS ( + ARRAY_TO_STRING([FirstName, LastName], " ") + ) STORED +) PRIMARY KEY (SingerId) +""" + +CREATE_TABLE_ALBUMS = """\ +CREATE TABLE Albums ( + SingerId INT64 NOT NULL, + AlbumId INT64 NOT NULL, + AlbumTitle STRING(MAX) +) PRIMARY KEY (SingerId, AlbumId), +INTERLEAVE IN PARENT Singers ON DELETE CASCADE +""" + +retry_429 = RetryErrors(exceptions.ResourceExhausted, delay=15) + + +@pytest.fixture(scope="module") +def sample_name(): + return "snippets" + + +@pytest.fixture(scope="module") +def database_dialect(): + """Spanner dialect to be used for this sample. + + The dialect is used to initialize the dialect for the database. + It can either be GoogleStandardSql or PostgreSql. + """ + return DatabaseDialect.GOOGLE_STANDARD_SQL + + +@pytest.fixture(scope="module") +def create_instance_id(): + """Id for the low-cost instance.""" + return f"create-instance-{uuid.uuid4().hex[:10]}" + + +@pytest.fixture(scope="module") +def lci_instance_id(): + """Id for the low-cost instance.""" + return f"lci-instance-{uuid.uuid4().hex[:10]}" + + +@pytest.fixture(scope="module") +def database_id(): + return f"test-db-{uuid.uuid4().hex[:10]}" + + +@pytest.fixture(scope="module") +def create_database_id(): + return f"create-db-{uuid.uuid4().hex[:10]}" + + +@pytest.fixture(scope="module") +def cmek_database_id(): + return f"cmek-db-{uuid.uuid4().hex[:10]}" + + +@pytest.fixture(scope="module") +def default_leader_database_id(): + return f"leader_db_{uuid.uuid4().hex[:10]}" + + +@pytest.fixture(scope="module") +def database_ddl(): + """Sequence of DDL statements used to set up the database. + + Sample testcase modules can override as needed. + """ + return [CREATE_TABLE_SINGERS, CREATE_TABLE_ALBUMS] + + +@pytest.fixture(scope="module") +def default_leader(): + """Default leader for multi-region instances.""" + return "us-east4" + + +@pytest.fixture(scope="module") +def base_instance_config_id(spanner_client): + return "{}/instanceConfigs/{}".format(spanner_client.project_name, "nam7") + + +def test_create_instance_explicit(spanner_client, create_instance_id): + # Rather than re-use 'sample_isntance', we create a new instance, to + # ensure that the 'create_instance' snippet is tested. + retry_429(samples.create_instance)(create_instance_id) + instance = spanner_client.instance(create_instance_id) + retry_429(instance.delete)() + + +def test_create_instance_with_processing_units(capsys, lci_instance_id): + processing_units = 500 + retry_429(samples.create_instance_with_processing_units)( + lci_instance_id, + processing_units, + ) + out, _ = capsys.readouterr() + assert lci_instance_id in out + assert "{} processing units".format(processing_units) in out + spanner_client = spanner.Client() + instance = spanner_client.instance(lci_instance_id) + retry_429(instance.delete)() + + +def test_create_database_explicit(sample_instance, create_database_id): + # Rather than re-use 'sample_database', we create a new database, to + # ensure that the 'create_database' snippet is tested. 
+ samples.create_database(sample_instance.instance_id, create_database_id) + database = sample_instance.database(create_database_id) + database.drop() + + +def test_create_database_with_encryption_config( + capsys, instance_id, cmek_database_id, kms_key_name +): + samples.create_database_with_encryption_key( + instance_id, cmek_database_id, kms_key_name + ) + out, _ = capsys.readouterr() + assert cmek_database_id in out + assert kms_key_name in out + + +@pytest.mark.dependency(name="create_database_with_default_leader") +def test_create_database_with_default_leader( + capsys, + multi_region_instance, + multi_region_instance_id, + default_leader_database_id, + default_leader, +): + retry_429 = RetryErrors(exceptions.ResourceExhausted, delay=15) + retry_429(samples.create_database_with_default_leader)( + multi_region_instance_id, default_leader_database_id, default_leader + ) + out, _ = capsys.readouterr() + assert default_leader_database_id in out + assert default_leader in out + + +@pytest.mark.dependency(depends=["create_database_with_default_leader"]) +def test_update_database_with_default_leader( + capsys, + multi_region_instance, + multi_region_instance_id, + default_leader_database_id, + default_leader, +): + retry_429 = RetryErrors(exceptions.ResourceExhausted, delay=15) + retry_429(samples.update_database_with_default_leader)( + multi_region_instance_id, default_leader_database_id, default_leader + ) + out, _ = capsys.readouterr() + assert default_leader_database_id in out + assert default_leader in out + + +def test_update_database(capsys, instance_id, sample_database): + samples.update_database(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Updated database {}.".format(sample_database.name) in out + + # Cleanup + sample_database.enable_drop_protection = False + op = sample_database.update(["enable_drop_protection"]) + op.result() + + +def test_list_databases(capsys, instance_id): + samples.list_databases(instance_id) + out, _ = capsys.readouterr() + assert "has default leader" in out + + +@pytest.mark.dependency( + name="add_and_drop_database_roles", depends=["create_table_with_datatypes"] +) +def test_add_and_drop_database_roles(capsys, instance_id, sample_database): + samples.add_and_drop_database_roles(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Created roles new_parent and new_child and granted privileges" in out + assert "Revoked privileges and dropped role new_child" in out + + +@pytest.mark.dependency(depends=["add_and_drop_database_roles"]) +def test_list_database_roles(capsys, instance_id, sample_database): + samples.list_database_roles(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "new_parent" in out + + +def test_list_instance_config(capsys): + samples.list_instance_config() + out, _ = capsys.readouterr() + assert "regional-us-central1" in out + + +@pytest.mark.dependency(name="create_table_with_datatypes") +def test_create_table_with_datatypes(capsys, instance_id, sample_database): + samples.create_table_with_datatypes(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Created Venues table on database" in out + + +@pytest.mark.dependency(name="create_table_with_timestamp") +def test_create_table_with_timestamp(capsys, instance_id, sample_database): + samples.create_table_with_timestamp(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Created Performances table on database" in out + + 
+@pytest.mark.dependency(
+    name="add_json_column",
+    depends=["create_table_with_datatypes"],
+)
+def test_add_json_column(capsys, instance_id, sample_database):
+    samples.add_json_column(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert 'Altered table "Venues" on database ' in out
+
+
+@pytest.mark.dependency(
+    name="add_numeric_column",
+    depends=["create_table_with_datatypes"],
+)
+def test_add_numeric_column(capsys, instance_id, sample_database):
+    samples.add_numeric_column(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert 'Altered table "Venues" on database ' in out
+
+
+@pytest.mark.dependency(name="create_table_with_foreign_key_delete_cascade")
+def test_create_table_with_foreign_key_delete_cascade(
+    capsys, instance_id, sample_database
+):
+    samples.create_table_with_foreign_key_delete_cascade(
+        instance_id, sample_database.database_id
+    )
+    out, _ = capsys.readouterr()
+    assert (
+        "Created Customers and ShoppingCarts table with FKShoppingCartsCustomerId"
+        in out
+    )
+
+
+@pytest.mark.dependency(
+    name="alter_table_with_foreign_key_delete_cascade",
+    depends=["create_table_with_foreign_key_delete_cascade"],
+)
+def test_alter_table_with_foreign_key_delete_cascade(
+    capsys, instance_id, sample_database
+):
+    samples.alter_table_with_foreign_key_delete_cascade(
+        instance_id, sample_database.database_id
+    )
+    out, _ = capsys.readouterr()
+    assert "Altered ShoppingCarts table with FKShoppingCartsCustomerName" in out
+
+
+@pytest.mark.dependency(depends=["alter_table_with_foreign_key_delete_cascade"])
+def test_drop_foreign_key_constraint_delete_cascade(
+    capsys, instance_id, sample_database
+):
+    samples.drop_foreign_key_constraint_delete_cascade(
+        instance_id, sample_database.database_id
+    )
+    out, _ = capsys.readouterr()
+    assert "Altered ShoppingCarts table to drop FKShoppingCartsCustomerName" in out
+
+
+@pytest.mark.dependency(name="create_sequence")
+def test_create_sequence(capsys, instance_id, bit_reverse_sequence_database):
+    samples.create_sequence(instance_id, bit_reverse_sequence_database.database_id)
+    out, _ = capsys.readouterr()
+    assert (
+        "Created Seq sequence and Customers table, where the key column CustomerId uses the sequence as a default value on database"
+        in out
+    )
+    assert "Number of customer records inserted is 3" in out
+    assert "Inserted customer record with Customer Id:" in out
+
+
+@pytest.mark.dependency(depends=["create_sequence"])
+def test_alter_sequence(capsys, instance_id, bit_reverse_sequence_database):
+    samples.alter_sequence(instance_id, bit_reverse_sequence_database.database_id)
+    out, _ = capsys.readouterr()
+    assert (
+        "Altered Seq sequence to skip an inclusive range between 1000 and 5000000 on database"
+        in out
+    )
+    assert "Number of customer records inserted is 3" in out
+    assert "Inserted customer record with Customer Id:" in out
+
+
+@pytest.mark.dependency(depends=["alter_sequence"])
+def test_drop_sequence(capsys, instance_id, bit_reverse_sequence_database):
+    samples.drop_sequence(instance_id, bit_reverse_sequence_database.database_id)
+    out, _ = capsys.readouterr()
+    assert (
+        "Altered Customers table to drop DEFAULT from CustomerId column and dropped the Seq sequence on database"
+        in out
+    )
+
+
+@pytest.mark.dependency(name="add_column", depends=["create_table_with_datatypes"])
+def test_add_column(capsys, instance_id, sample_database):
+    samples.add_column(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "Added the MarketingBudget column." in out
+
+
+@pytest.mark.dependency(
+    name="add_timestamp_column", depends=["create_table_with_datatypes"]
+)
+def test_add_timestamp_column(capsys, instance_id, sample_database):
+    samples.add_timestamp_column(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert 'Altered table "Albums" on database ' in out
+
+
+@pytest.mark.dependency(name="add_index", depends=["create_table_with_datatypes"])
+def test_add_index(capsys, instance_id, sample_database):
+    samples.add_index(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "Added the AlbumsByAlbumTitle index" in out
+
+
+@pytest.mark.dependency(
+    name="add_storing_index", depends=["create_table_with_datatypes"]
+)
+def test_add_storing_index(capsys, instance_id, sample_database):
+    samples.add_storing_index(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "Added the AlbumsByAlbumTitle2 index." in out
diff --git a/samples/samples/autocommit.py b/samples/samples/autocommit.py
new file mode 100644
index 0000000000..873ed2b7bd
--- /dev/null
+++ b/samples/samples/autocommit.py
@@ -0,0 +1,64 @@
+# Copyright 2020 Google LLC
+#
+# Use of this source code is governed by a BSD-style
+# license that can be found in the LICENSE file or at
+# https://developers.google.com/open-source/licenses/bsd
+
+import argparse
+
+from google.cloud.spanner_dbapi import connect
+
+
+def enable_autocommit_mode(instance_id, database_id):
+    """Enables autocommit mode."""
+    # [START spanner_enable_autocommit_mode]
+
+    connection = connect(instance_id, database_id)
+    connection.autocommit = True
+    print("Autocommit mode is enabled.")
+
+    cursor = connection.cursor()
+
+    cursor.execute(
+        """CREATE TABLE Singers (
+            SingerId   INT64 NOT NULL,
+            FirstName  STRING(1024),
+            LastName   STRING(1024),
+            SingerInfo BYTES(MAX)
+        ) PRIMARY KEY (SingerId)"""
+    )
+
+    cursor.execute(
+        """INSERT INTO Singers (SingerId, FirstName, LastName) VALUES
+        (12, 'Melissa', 'Garcia'),
+        (13, 'Russell', 'Morales'),
+        (14, 'Jacqueline', 'Long'),
+        (15, 'Dylan', 'Shaw')"""
+    )
+
+    cursor.execute("""SELECT * FROM Singers WHERE SingerId = 13""")
+
+    print("SingerId: {}, FirstName: {}, LastName: {}".format(*cursor.fetchone()))
+
+    connection.close()
+    # [END spanner_enable_autocommit_mode]
+
+
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser(
+        description=__doc__,
+        formatter_class=argparse.RawDescriptionHelpFormatter,
+    )
+    parser.add_argument("instance_id", help="Your Cloud Spanner instance ID.")
+    parser.add_argument(
+        "--database-id",
+        help="Your Cloud Spanner database ID.",
+        default="example_db",
+    )
+    subparsers = parser.add_subparsers(dest="command")
+    subparsers.add_parser("enable_autocommit_mode", help=enable_autocommit_mode.__doc__)
+    args = parser.parse_args()
+    if args.command == "enable_autocommit_mode":
+        enable_autocommit_mode(args.instance_id, args.database_id)
+    else:
+        print(f"Command {args.command} did not match expected commands.")
diff --git a/samples/samples/autocommit_test.py b/samples/samples/autocommit_test.py
new file mode 100644
index 0000000000..a22f74e6b4
--- /dev/null
+++ b/samples/samples/autocommit_test.py
@@ -0,0 +1,33 @@
+# Copyright 2020 Google LLC
+#
+# Use of this source code is governed by a BSD-style
+# license that can be found in the LICENSE file or at
+# https://developers.google.com/open-source/licenses/bsd
+
+import pytest
+from google.api_core.exceptions import Aborted
+from test_utils.retry import RetryErrors
+
+import autocommit
+
+
+@pytest.fixture(scope="module")
+def sample_name():
+    return "autocommit"
+
+
+@RetryErrors(exception=Aborted, max_tries=2)
+def test_enable_autocommit_mode(capsys, instance_id, sample_database):
+    # Delete table if it exists for retry attempts.
+    table = sample_database.table("Singers")
+    if table.exists():
+        op = sample_database.update_ddl(["DROP TABLE Singers"])
+        op.result()
+
+    autocommit.enable_autocommit_mode(
+        instance_id,
+        sample_database.database_id,
+    )
+    out, _ = capsys.readouterr()
+    assert "Autocommit mode is enabled." in out
+    assert "SingerId: 13, FirstName: Russell, LastName: Morales" in out
diff --git a/samples/samples/backup_sample.py b/samples/samples/backup_sample.py
new file mode 100644
index 0000000000..e984d3a11e
--- /dev/null
+++ b/samples/samples/backup_sample.py
@@ -0,0 +1,798 @@
+# Copyright 2020 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""This application demonstrates how to create and restore from backups
+using Cloud Spanner.
+
+For more information, see the README.rst under /spanner.
+"""
+
+import argparse
+from datetime import datetime, timedelta
+import time
+
+from google.api_core import protobuf_helpers
+from google.cloud import spanner
+from google.cloud.exceptions import NotFound
+
+
+# [START spanner_create_backup]
+def create_backup(instance_id, database_id, backup_id, version_time):
+    """Creates a backup for a database."""
+
+    from google.cloud.spanner_admin_database_v1.types import backup as backup_pb
+
+    spanner_client = spanner.Client()
+    database_admin_api = spanner_client.database_admin_api
+
+    # Create a backup
+    expire_time = datetime.utcnow() + timedelta(days=14)
+
+    request = backup_pb.CreateBackupRequest(
+        parent=database_admin_api.instance_path(spanner_client.project, instance_id),
+        backup_id=backup_id,
+        backup=backup_pb.Backup(
+            database=database_admin_api.database_path(
+                spanner_client.project, instance_id, database_id
+            ),
+            expire_time=expire_time,
+            version_time=version_time,
+        ),
+    )
+
+    operation = database_admin_api.create_backup(request)
+
+    # Wait for backup operation to complete.
+    backup = operation.result(2100)
+
+    # Verify that the backup is ready.
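+    # `operation.result` above blocks until the long-running operation
+    # finishes, so at this point the backup should already be READY.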
+ assert backup.state == backup_pb.Backup.State.READY + + print( + "Backup {} of size {} bytes was created at {} for version of database at {}".format( + backup.name, backup.size_bytes, backup.create_time, backup.version_time + ) + ) + + +# [END spanner_create_backup] + + +# [START spanner_create_backup_with_encryption_key] +def create_backup_with_encryption_key( + instance_id, database_id, backup_id, kms_key_name +): + """Creates a backup for a database using a Customer Managed Encryption Key (CMEK).""" + + from google.cloud.spanner_admin_database_v1 import CreateBackupEncryptionConfig + from google.cloud.spanner_admin_database_v1.types import backup as backup_pb + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + # Create a backup + expire_time = datetime.utcnow() + timedelta(days=14) + encryption_config = { + "encryption_type": CreateBackupEncryptionConfig.EncryptionType.CUSTOMER_MANAGED_ENCRYPTION, + "kms_key_name": kms_key_name, + } + request = backup_pb.CreateBackupRequest( + parent=database_admin_api.instance_path(spanner_client.project, instance_id), + backup_id=backup_id, + backup=backup_pb.Backup( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + expire_time=expire_time, + ), + encryption_config=encryption_config, + ) + operation = database_admin_api.create_backup(request) + + # Wait for backup operation to complete. + backup = operation.result(2100) + + # Verify that the backup is ready. + assert backup.state == backup_pb.Backup.State.READY + + # Get the name, create time, backup size and encryption key. + print( + "Backup {} of size {} bytes was created at {} using encryption key {}".format( + backup.name, backup.size_bytes, backup.create_time, kms_key_name + ) + ) + + +# [END spanner_create_backup_with_encryption_key] + + +# [START spanner_create_backup_with_MR_CMEK] +def create_backup_with_multiple_kms_keys( + instance_id, database_id, backup_id, kms_key_names +): + """Creates a backup for a database using multiple KMS keys(CMEK).""" + + from google.cloud.spanner_admin_database_v1 import CreateBackupEncryptionConfig + from google.cloud.spanner_admin_database_v1.types import backup as backup_pb + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + # Create a backup + expire_time = datetime.utcnow() + timedelta(days=14) + encryption_config = { + "encryption_type": CreateBackupEncryptionConfig.EncryptionType.CUSTOMER_MANAGED_ENCRYPTION, + "kms_key_names": kms_key_names, + } + request = backup_pb.CreateBackupRequest( + parent=database_admin_api.instance_path(spanner_client.project, instance_id), + backup_id=backup_id, + backup=backup_pb.Backup( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + expire_time=expire_time, + ), + encryption_config=encryption_config, + ) + operation = database_admin_api.create_backup(request) + + # Wait for backup operation to complete. + backup = operation.result(2100) + + # Verify that the backup is ready. + assert backup.state == backup_pb.Backup.State.READY + + # Get the name, create time, backup size and encryption key. 
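+    # For a multi-region instance configuration, `kms_key_names` typically
+    # holds one KMS key per region of the configuration.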
+    print(
+        "Backup {} of size {} bytes was created at {} using encryption key {}".format(
+            backup.name, backup.size_bytes, backup.create_time, kms_key_names
+        )
+    )
+
+
+# [END spanner_create_backup_with_MR_CMEK]
+
+
+# [START spanner_restore_backup]
+def restore_database(instance_id, new_database_id, backup_id):
+    """Restores a database from a backup."""
+    from google.cloud.spanner_admin_database_v1 import RestoreDatabaseRequest
+
+    spanner_client = spanner.Client()
+    database_admin_api = spanner_client.database_admin_api
+
+    # Start restoring an existing backup to a new database.
+    request = RestoreDatabaseRequest(
+        parent=database_admin_api.instance_path(spanner_client.project, instance_id),
+        database_id=new_database_id,
+        backup=database_admin_api.backup_path(
+            spanner_client.project, instance_id, backup_id
+        ),
+    )
+    operation = database_admin_api.restore_database(request)
+
+    # Wait for restore operation to complete.
+    db = operation.result(1600)
+
+    # Newly created database has restore information.
+    restore_info = db.restore_info
+    print(
+        "Database {} restored to {} from backup {} with version time {}.".format(
+            restore_info.backup_info.source_database,
+            new_database_id,
+            restore_info.backup_info.backup,
+            restore_info.backup_info.version_time,
+        )
+    )
+
+
+# [END spanner_restore_backup]
+
+
+# [START spanner_restore_backup_with_encryption_key]
+def restore_database_with_encryption_key(
+    instance_id, new_database_id, backup_id, kms_key_name
+):
+    """Restores a database from a backup using a Customer Managed Encryption Key (CMEK)."""
+    from google.cloud.spanner_admin_database_v1 import (
+        RestoreDatabaseEncryptionConfig,
+        RestoreDatabaseRequest,
+    )
+
+    spanner_client = spanner.Client()
+    database_admin_api = spanner_client.database_admin_api
+
+    # Start restoring an existing backup to a new database.
+    encryption_config = {
+        "encryption_type": RestoreDatabaseEncryptionConfig.EncryptionType.CUSTOMER_MANAGED_ENCRYPTION,
+        "kms_key_name": kms_key_name,
+    }
+
+    request = RestoreDatabaseRequest(
+        parent=database_admin_api.instance_path(spanner_client.project, instance_id),
+        database_id=new_database_id,
+        backup=database_admin_api.backup_path(
+            spanner_client.project, instance_id, backup_id
+        ),
+        encryption_config=encryption_config,
+    )
+    operation = database_admin_api.restore_database(request)
+
+    # Wait for restore operation to complete.
+    db = operation.result(1600)
+
+    # Newly created database has restore information.
+    restore_info = db.restore_info
+    print(
+        "Database {} restored to {} from backup {} using encryption key {}.".format(
+            restore_info.backup_info.source_database,
+            new_database_id,
+            restore_info.backup_info.backup,
+            db.encryption_config.kms_key_name,
+        )
+    )
+
+
+# [END spanner_restore_backup_with_encryption_key]
+
+
+# [START spanner_restore_backup_with_MR_CMEK]
+def restore_database_with_multiple_kms_keys(
+    instance_id, new_database_id, backup_id, kms_key_names
+):
+    """Restores a database from a backup using multiple KMS keys (CMEK)."""
+    from google.cloud.spanner_admin_database_v1 import (
+        RestoreDatabaseEncryptionConfig,
+        RestoreDatabaseRequest,
+    )
+
+    spanner_client = spanner.Client()
+    database_admin_api = spanner_client.database_admin_api
+
+    # Start restoring an existing backup to a new database.
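+    # Supplying an encryption_config here means the restored database is
+    # protected by these keys rather than inheriting the backup's encryption.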
+    encryption_config = {
+        "encryption_type": RestoreDatabaseEncryptionConfig.EncryptionType.CUSTOMER_MANAGED_ENCRYPTION,
+        "kms_key_names": kms_key_names,
+    }
+
+    request = RestoreDatabaseRequest(
+        parent=database_admin_api.instance_path(spanner_client.project, instance_id),
+        database_id=new_database_id,
+        backup=database_admin_api.backup_path(
+            spanner_client.project, instance_id, backup_id
+        ),
+        encryption_config=encryption_config,
+    )
+    operation = database_admin_api.restore_database(request)
+
+    # Wait for restore operation to complete.
+    db = operation.result(1600)
+
+    # Newly created database has restore information.
+    restore_info = db.restore_info
+    print(
+        "Database {} restored to {} from backup {} using encryption keys {}.".format(
+            restore_info.backup_info.source_database,
+            new_database_id,
+            restore_info.backup_info.backup,
+            db.encryption_config.kms_key_names,
+        )
+    )
+
+
+# [END spanner_restore_backup_with_MR_CMEK]
+
+
+# [START spanner_cancel_backup_create]
+def cancel_backup(instance_id, database_id, backup_id):
+    from google.cloud.spanner_admin_database_v1.types import backup as backup_pb
+
+    spanner_client = spanner.Client()
+    database_admin_api = spanner_client.database_admin_api
+
+    expire_time = datetime.utcnow() + timedelta(days=30)
+
+    # Create a backup.
+    request = backup_pb.CreateBackupRequest(
+        parent=database_admin_api.instance_path(spanner_client.project, instance_id),
+        backup_id=backup_id,
+        backup=backup_pb.Backup(
+            database=database_admin_api.database_path(
+                spanner_client.project, instance_id, database_id
+            ),
+            expire_time=expire_time,
+        ),
+    )
+
+    operation = database_admin_api.create_backup(request)
+    # Cancel backup creation.
+    operation.cancel()
+
+    # Cancelling a backup is best effort, so the operation will either
+    # complete or be cancelled.
+    while not operation.done():
+        time.sleep(300)  # 5 mins
+
+    try:
+        database_admin_api.get_backup(
+            backup_pb.GetBackupRequest(
+                name=database_admin_api.backup_path(
+                    spanner_client.project, instance_id, backup_id
+                ),
+            )
+        )
+    except NotFound:
+        print("Backup creation was successfully cancelled.")
+        return
+    print("Backup was created before the cancel completed.")
+    database_admin_api.delete_backup(
+        backup_pb.DeleteBackupRequest(
+            name=database_admin_api.backup_path(
+                spanner_client.project, instance_id, backup_id
+            ),
+        )
+    )
+    print("Backup deleted.")
+
+
+# [END spanner_cancel_backup_create]
+
+
+# [START spanner_list_backup_operations]
+def list_backup_operations(instance_id, database_id, backup_id):
+    from google.cloud.spanner_admin_database_v1.types import backup as backup_pb
+
+    spanner_client = spanner.Client()
+    database_admin_api = spanner_client.database_admin_api
+
+    # List the CreateBackup operations.
+    filter_ = (
+        "(metadata.@type:type.googleapis.com/"
+        "google.spanner.admin.database.v1.CreateBackupMetadata) "
+        "AND (metadata.database:{})"
+    ).format(database_id)
+    request = backup_pb.ListBackupOperationsRequest(
+        parent=database_admin_api.instance_path(spanner_client.project, instance_id),
+        filter=filter_,
+    )
+    operations = database_admin_api.list_backup_operations(request)
+    for op in operations:
+        metadata = protobuf_helpers.from_any_pb(
+            backup_pb.CreateBackupMetadata, op.metadata
+        )
+        print(
+            "Backup {} on database {}: {}% complete.".format(
+                metadata.name, metadata.database, metadata.progress.progress_percent
+            )
+        )
+
+    # List the CopyBackup operations.
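+    # Same pattern as above, but matching CopyBackupMetadata and keying the
+    # filter on the source backup rather than the source database.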
+ filter_ = ( + "(metadata.@type:type.googleapis.com/google.spanner.admin.database.v1.CopyBackupMetadata) " + "AND (metadata.source_backup:{})" + ).format(backup_id) + request = backup_pb.ListBackupOperationsRequest( + parent=database_admin_api.instance_path(spanner_client.project, instance_id), + filter=filter_, + ) + operations = database_admin_api.list_backup_operations(request) + for op in operations: + metadata = protobuf_helpers.from_any_pb( + backup_pb.CopyBackupMetadata, op.metadata + ) + print( + "Backup {} on source backup {}: {}% complete.".format( + metadata.name, + metadata.source_backup, + metadata.progress.progress_percent, + ) + ) + + +# [END spanner_list_backup_operations] + + +# [START spanner_list_database_operations] +def list_database_operations(instance_id): + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + # List the progress of restore. + filter_ = ( + "(metadata.@type:type.googleapis.com/" + "google.spanner.admin.database.v1.OptimizeRestoredDatabaseMetadata)" + ) + request = spanner_database_admin.ListDatabaseOperationsRequest( + parent=database_admin_api.instance_path(spanner_client.project, instance_id), + filter=filter_, + ) + operations = database_admin_api.list_database_operations(request) + for op in operations: + metadata = protobuf_helpers.from_any_pb( + spanner_database_admin.OptimizeRestoredDatabaseMetadata, op.metadata + ) + print( + "Database {} restored from backup is {}% optimized.".format( + metadata.name, metadata.progress.progress_percent + ) + ) + + +# [END spanner_list_database_operations] + + +# [START spanner_list_backups] +def list_backups(instance_id, database_id, backup_id): + from google.cloud.spanner_admin_database_v1.types import backup as backup_pb + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + # List all backups. + print("All backups:") + request = backup_pb.ListBackupsRequest( + parent=database_admin_api.instance_path(spanner_client.project, instance_id), + filter="", + ) + operations = database_admin_api.list_backups(request) + for backup in operations: + print(backup.name) + + # List all backups that contain a name. + print('All backups with backup name containing "{}":'.format(backup_id)) + request = backup_pb.ListBackupsRequest( + parent=database_admin_api.instance_path(spanner_client.project, instance_id), + filter="name:{}".format(backup_id), + ) + operations = database_admin_api.list_backups(request) + for backup in operations: + print(backup.name) + + # List all backups for a database that contains a name. + print('All backups with database name containing "{}":'.format(database_id)) + request = backup_pb.ListBackupsRequest( + parent=database_admin_api.instance_path(spanner_client.project, instance_id), + filter="database:{}".format(database_id), + ) + operations = database_admin_api.list_backups(request) + for backup in operations: + print(backup.name) + + # List all backups that expire before a timestamp. 
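+    # The filter compares RFC 3339 style timestamp strings; a zero-padded
+    # equivalent could be built with, for example,
+    # expire_time.strftime("%Y-%m-%dT%H:%M:%SZ").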
+ expire_time = datetime.utcnow().replace(microsecond=0) + timedelta(days=30) + print( + 'All backups with expire_time before "{}-{}-{}T{}:{}:{}Z":'.format( + *expire_time.timetuple() + ) + ) + request = backup_pb.ListBackupsRequest( + parent=database_admin_api.instance_path(spanner_client.project, instance_id), + filter='expire_time < "{}-{}-{}T{}:{}:{}Z"'.format(*expire_time.timetuple()), + ) + operations = database_admin_api.list_backups(request) + for backup in operations: + print(backup.name) + + # List all backups with a size greater than some bytes. + print("All backups with backup size more than 100 bytes:") + request = backup_pb.ListBackupsRequest( + parent=database_admin_api.instance_path(spanner_client.project, instance_id), + filter="size_bytes > 100", + ) + operations = database_admin_api.list_backups(request) + for backup in operations: + print(backup.name) + + # List backups that were created after a timestamp that are also ready. + create_time = datetime.utcnow().replace(microsecond=0) - timedelta(days=1) + print( + 'All backups created after "{}-{}-{}T{}:{}:{}Z" and are READY:'.format( + *create_time.timetuple() + ) + ) + request = backup_pb.ListBackupsRequest( + parent=database_admin_api.instance_path(spanner_client.project, instance_id), + filter='create_time >= "{}-{}-{}T{}:{}:{}Z" AND state:READY'.format( + *create_time.timetuple() + ), + ) + operations = database_admin_api.list_backups(request) + for backup in operations: + print(backup.name) + + print("All backups with pagination") + # If there are multiple pages, additional ``ListBackup`` + # requests will be made as needed while iterating. + paged_backups = set() + request = backup_pb.ListBackupsRequest( + parent=database_admin_api.instance_path(spanner_client.project, instance_id), + page_size=2, + ) + operations = database_admin_api.list_backups(request) + for backup in operations: + paged_backups.add(backup.name) + for backup in paged_backups: + print(backup) + + +# [END spanner_list_backups] + + +# [START spanner_delete_backup] +def delete_backup(instance_id, backup_id): + from google.cloud.spanner_admin_database_v1.types import backup as backup_pb + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + backup = database_admin_api.get_backup( + backup_pb.GetBackupRequest( + name=database_admin_api.backup_path( + spanner_client.project, instance_id, backup_id + ), + ) + ) + + # Wait for databases that reference this backup to finish optimizing. + while backup.referencing_databases: + time.sleep(30) + backup = database_admin_api.get_backup( + backup_pb.GetBackupRequest( + name=database_admin_api.backup_path( + spanner_client.project, instance_id, backup_id + ), + ) + ) + + # Delete the backup. + database_admin_api.delete_backup(backup_pb.DeleteBackupRequest(name=backup.name)) + + # Verify that the backup is deleted. 
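+    # A NotFound error from `get_backup` confirms the backup no longer exists.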
+ try: + backup = database_admin_api.get_backup( + backup_pb.GetBackupRequest(name=backup.name) + ) + except NotFound: + print("Backup {} has been deleted.".format(backup.name)) + return + + +# [END spanner_delete_backup] + + +# [START spanner_update_backup] +def update_backup(instance_id, backup_id): + from google.cloud.spanner_admin_database_v1.types import backup as backup_pb + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + backup = database_admin_api.get_backup( + backup_pb.GetBackupRequest( + name=database_admin_api.backup_path( + spanner_client.project, instance_id, backup_id + ), + ) + ) + + # Expire time must be within 366 days of the create time of the backup. + old_expire_time = backup.expire_time + # New expire time should be less than the max expire time + new_expire_time = min(backup.max_expire_time, old_expire_time + timedelta(days=30)) + database_admin_api.update_backup( + backup_pb.UpdateBackupRequest( + backup=backup_pb.Backup(name=backup.name, expire_time=new_expire_time), + update_mask={"paths": ["expire_time"]}, + ) + ) + print( + "Backup {} expire time was updated from {} to {}.".format( + backup.name, old_expire_time, new_expire_time + ) + ) + + +# [END spanner_update_backup] + + +# [START spanner_create_database_with_version_retention_period] +def create_database_with_version_retention_period( + instance_id, database_id, retention_period +): + """Creates a database with a version retention period.""" + + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + ddl_statements = [ + "CREATE TABLE Singers (" + + " SingerId INT64 NOT NULL," + + " FirstName STRING(1024)," + + " LastName STRING(1024)," + + " SingerInfo BYTES(MAX)" + + ") PRIMARY KEY (SingerId)", + "CREATE TABLE Albums (" + + " SingerId INT64 NOT NULL," + + " AlbumId INT64 NOT NULL," + + " AlbumTitle STRING(MAX)" + + ") PRIMARY KEY (SingerId, AlbumId)," + + " INTERLEAVE IN PARENT Singers ON DELETE CASCADE", + "ALTER DATABASE `{}`" + " SET OPTIONS (version_retention_period = '{}')".format( + database_id, retention_period + ), + ] + operation = database_admin_api.create_database( + request=spanner_database_admin.CreateDatabaseRequest( + parent=database_admin_api.instance_path( + spanner_client.project, instance_id + ), + create_statement="CREATE DATABASE `{}`".format(database_id), + extra_statements=ddl_statements, + ) + ) + + db = operation.result(30) + print( + "Database {} created with version retention period {} and earliest version time {}".format( + db.name, db.version_retention_period, db.earliest_version_time + ) + ) + + database_admin_api.drop_database( + spanner_database_admin.DropDatabaseRequest(database=db.name) + ) + + +# [END spanner_create_database_with_version_retention_period] + + +# [START spanner_copy_backup] +def copy_backup(instance_id, backup_id, source_backup_path): + """Copies a backup.""" + + from google.cloud.spanner_admin_database_v1.types import backup as backup_pb + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + # Create a backup object and wait for copy backup operation to complete. 
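+    # `source_backup_path` is the fully qualified name of the source backup,
+    # e.g. projects/<project>/instances/<instance>/backups/<backup_id>.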
+ expire_time = datetime.utcnow() + timedelta(days=14) + request = backup_pb.CopyBackupRequest( + parent=database_admin_api.instance_path(spanner_client.project, instance_id), + backup_id=backup_id, + source_backup=source_backup_path, + expire_time=expire_time, + ) + + operation = database_admin_api.copy_backup(request) + + # Wait for backup operation to complete. + copy_backup = operation.result(2100) + + # Verify that the copy backup is ready. + assert copy_backup.state == backup_pb.Backup.State.READY + + print( + "Backup {} of size {} bytes was created at {} with version time {}".format( + copy_backup.name, + copy_backup.size_bytes, + copy_backup.create_time, + copy_backup.version_time, + ) + ) + + +# [END spanner_copy_backup] + + +# [START spanner_copy_backup_with_MR_CMEK] +def copy_backup_with_multiple_kms_keys( + instance_id, backup_id, source_backup_path, kms_key_names +): + """Copies a backup.""" + + from google.cloud.spanner_admin_database_v1.types import backup as backup_pb + from google.cloud.spanner_admin_database_v1 import CopyBackupEncryptionConfig + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + encryption_config = { + "encryption_type": CopyBackupEncryptionConfig.EncryptionType.CUSTOMER_MANAGED_ENCRYPTION, + "kms_key_names": kms_key_names, + } + + # Create a backup object and wait for copy backup operation to complete. + expire_time = datetime.utcnow() + timedelta(days=14) + request = backup_pb.CopyBackupRequest( + parent=database_admin_api.instance_path(spanner_client.project, instance_id), + backup_id=backup_id, + source_backup=source_backup_path, + expire_time=expire_time, + encryption_config=encryption_config, + ) + + operation = database_admin_api.copy_backup(request) + + # Wait for backup operation to complete. + copy_backup = operation.result(2100) + + # Verify that the copy backup is ready. 
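+    # As in `copy_backup` above, `operation.result` blocks until the copy
+    # finishes; `encryption_information` reports the keys protecting the copy.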
+ assert copy_backup.state == backup_pb.Backup.State.READY + + print( + "Backup {} of size {} bytes was created at {} with version time {} using encryption keys {}".format( + copy_backup.name, + copy_backup.size_bytes, + copy_backup.create_time, + copy_backup.version_time, + copy_backup.encryption_information, + ) + ) + + +# [END spanner_copy_backup_with_MR_CMEK] + + +if __name__ == "__main__": # noqa: C901 + parser = argparse.ArgumentParser( + description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter + ) + parser.add_argument("instance_id", help="Your Cloud Spanner instance ID.") + parser.add_argument( + "--database-id", help="Your Cloud Spanner database ID.", default="example_db" + ) + parser.add_argument( + "--backup-id", help="Your Cloud Spanner backup ID.", default="example_backup" + ) + parser.add_argument( + "--source-backup-id", + help="The full path of the source backup to copy (used by copy_backup).", + ) + + subparsers = parser.add_subparsers(dest="command") + subparsers.add_parser("create_backup", help=create_backup.__doc__) + subparsers.add_parser("cancel_backup", help=cancel_backup.__doc__) + subparsers.add_parser("update_backup", help=update_backup.__doc__) + subparsers.add_parser("restore_database", help=restore_database.__doc__) + subparsers.add_parser("list_backups", help=list_backups.__doc__) + subparsers.add_parser("list_backup_operations", help=list_backup_operations.__doc__) + subparsers.add_parser( + "list_database_operations", help=list_database_operations.__doc__ + ) + subparsers.add_parser("delete_backup", help=delete_backup.__doc__) + subparsers.add_parser("copy_backup", help=copy_backup.__doc__) + + args = parser.parse_args() + + if args.command == "create_backup": + create_backup(args.instance_id, args.database_id, args.backup_id) + elif args.command == "cancel_backup": + cancel_backup(args.instance_id, args.database_id, args.backup_id) + elif args.command == "update_backup": + update_backup(args.instance_id, args.backup_id) + elif args.command == "restore_database": + restore_database(args.instance_id, args.database_id, args.backup_id) + elif args.command == "list_backups": + list_backups(args.instance_id, args.database_id, args.backup_id) + elif args.command == "list_backup_operations": + list_backup_operations(args.instance_id, args.database_id, args.backup_id) + elif args.command == "list_database_operations": + list_database_operations(args.instance_id) + elif args.command == "delete_backup": + delete_backup(args.instance_id, args.backup_id) + elif args.command == "copy_backup": + copy_backup(args.instance_id, args.backup_id, args.source_backup_id) + else: + print("Command {} did not match expected commands.".format(args.command)) diff --git a/samples/samples/backup_sample_test.py b/samples/samples/backup_sample_test.py new file mode 100644 index 0000000000..b588d5735b --- /dev/null +++ b/samples/samples/backup_sample_test.py @@ -0,0 +1,257 @@ +# Copyright 2020 Google Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License.
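+ +# The tests below exercise backup_sample.py against a live instance; the instance_id, +# sample_database and spanner_client fixtures are supplied by conftest.py, which is +# added later in this diff. +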
+import uuid + +from google.api_core.exceptions import DeadlineExceeded +import pytest +from test_utils.retry import RetryErrors + +import backup_sample + + +@pytest.fixture(scope="module") +def sample_name(): + return "backup" + + +def unique_database_id(): + """Creates a unique id for the database.""" + return f"test-db-{uuid.uuid4().hex[:10]}" + + +def unique_backup_id(): + """Creates a unique id for the backup.""" + return f"test-backup-{uuid.uuid4().hex[:10]}" + + +RESTORE_DB_ID = unique_database_id() +BACKUP_ID = unique_backup_id() +CMEK_RESTORE_DB_ID = unique_database_id() +CMEK_BACKUP_ID = unique_backup_id() +RETENTION_DATABASE_ID = unique_database_id() +RETENTION_PERIOD = "7d" +COPY_BACKUP_ID = unique_backup_id() + + +@pytest.mark.dependency(name="create_backup") +def test_create_backup(capsys, instance_id, sample_database): + version_time = None + with sample_database.snapshot() as snapshot: + results = snapshot.execute_sql("SELECT CURRENT_TIMESTAMP()") + version_time = list(results)[0][0] + + backup_sample.create_backup( + instance_id, + sample_database.database_id, + BACKUP_ID, + version_time, + ) + out, _ = capsys.readouterr() + assert BACKUP_ID in out + + +@pytest.mark.dependency(name="copy_backup", depends=["create_backup"]) +def test_copy_backup(capsys, instance_id, spanner_client): + source_backup_path = ( + spanner_client.project_name + + "/instances/" + + instance_id + + "/backups/" + + BACKUP_ID + ) + backup_sample.copy_backup(instance_id, COPY_BACKUP_ID, source_backup_path) + out, _ = capsys.readouterr() + assert COPY_BACKUP_ID in out + + +@pytest.mark.dependency(name="create_backup_with_encryption_key") +def test_create_backup_with_encryption_key( + capsys, + instance_id, + sample_database, + kms_key_name, +): + backup_sample.create_backup_with_encryption_key( + instance_id, + sample_database.database_id, + CMEK_BACKUP_ID, + kms_key_name, + ) + out, _ = capsys.readouterr() + assert CMEK_BACKUP_ID in out + assert kms_key_name in out + + +@pytest.mark.skip(reason="skipped since the KMS keys are not added on test " "project") +@pytest.mark.dependency(name="create_backup_with_multiple_kms_keys") +def test_create_backup_with_multiple_kms_keys( + capsys, + multi_region_instance, + multi_region_instance_id, + sample_multi_region_database, + kms_key_names, +): + backup_sample.create_backup_with_multiple_kms_keys( + multi_region_instance_id, + sample_multi_region_database.database_id, + CMEK_BACKUP_ID, + kms_key_names, + ) + out, _ = capsys.readouterr() + assert CMEK_BACKUP_ID in out + assert kms_key_names[0] in out + assert kms_key_names[1] in out + assert kms_key_names[2] in out + + +@pytest.mark.skip(reason="skipped since the KMS keys are not added on test " "project") +@pytest.mark.dependency(depends=["create_backup_with_multiple_kms_keys"]) +def test_copy_backup_with_multiple_kms_keys( + capsys, multi_region_instance_id, spanner_client, kms_key_names +): + source_backup_path = ( + spanner_client.project_name + + "/instances/" + + multi_region_instance_id + + "/backups/" + + CMEK_BACKUP_ID + ) + backup_sample.copy_backup_with_multiple_kms_keys( + multi_region_instance_id, COPY_BACKUP_ID, source_backup_path, kms_key_names + ) + out, _ = capsys.readouterr() + assert COPY_BACKUP_ID in out + + +@pytest.mark.dependency(depends=["create_backup"]) +@RetryErrors(exception=DeadlineExceeded, max_tries=2) +def test_restore_database(capsys, instance_id, sample_database): + backup_sample.restore_database(instance_id, RESTORE_DB_ID, BACKUP_ID) + out, _ = capsys.readouterr() + assert
(sample_database.database_id + " restored to ") in out + assert (RESTORE_DB_ID + " from backup ") in out + assert BACKUP_ID in out + + +@pytest.mark.dependency(depends=["create_backup_with_encryption_key"]) +@RetryErrors(exception=DeadlineExceeded, max_tries=2) +def test_restore_database_with_encryption_key( + capsys, + instance_id, + sample_database, + kms_key_name, +): + backup_sample.restore_database_with_encryption_key( + instance_id, CMEK_RESTORE_DB_ID, CMEK_BACKUP_ID, kms_key_name + ) + out, _ = capsys.readouterr() + assert (sample_database.database_id + " restored to ") in out + assert (CMEK_RESTORE_DB_ID + " from backup ") in out + assert CMEK_BACKUP_ID in out + assert kms_key_name in out + + +@pytest.mark.skip(reason="skipped since the KMS keys are not added on test " "project") +@pytest.mark.dependency(depends=["create_backup_with_multiple_kms_keys"]) +@RetryErrors(exception=DeadlineExceeded, max_tries=2) +def test_restore_database_with_multiple_kms_keys( + capsys, + multi_region_instance_id, + sample_multi_region_database, + kms_key_names, +): + backup_sample.restore_database_with_multiple_kms_keys( + multi_region_instance_id, CMEK_RESTORE_DB_ID, CMEK_BACKUP_ID, kms_key_names + ) + out, _ = capsys.readouterr() + assert (sample_multi_region_database.database_id + " restored to ") in out + assert (CMEK_RESTORE_DB_ID + " from backup ") in out + assert CMEK_BACKUP_ID in out + assert kms_key_names[0] in out + assert kms_key_names[1] in out + assert kms_key_names[2] in out + + +@pytest.mark.dependency(depends=["create_backup", "copy_backup"]) +def test_list_backup_operations(capsys, instance_id, sample_database): + backup_sample.list_backup_operations( + instance_id, sample_database.database_id, BACKUP_ID + ) + out, _ = capsys.readouterr() + assert BACKUP_ID in out + assert sample_database.database_id in out + assert COPY_BACKUP_ID in out + print(out) + + +@pytest.mark.dependency(name="list_backup", depends=["create_backup", "copy_backup"]) +def test_list_backups( + capsys, + instance_id, + sample_database, +): + backup_sample.list_backups( + instance_id, + sample_database.database_id, + BACKUP_ID, + ) + out, _ = capsys.readouterr() + id_count = out.count(BACKUP_ID) + assert id_count == 7 + + +@pytest.mark.dependency(depends=["create_backup"]) +def test_update_backup(capsys, instance_id): + backup_sample.update_backup(instance_id, BACKUP_ID) + out, _ = capsys.readouterr() + assert BACKUP_ID in out + + +@pytest.mark.dependency(depends=["create_backup", "copy_backup", "list_backup"]) +def test_delete_backup(capsys, instance_id): + backup_sample.delete_backup(instance_id, BACKUP_ID) + out, _ = capsys.readouterr() + assert BACKUP_ID in out + backup_sample.delete_backup(instance_id, COPY_BACKUP_ID) + out, _ = capsys.readouterr() + assert COPY_BACKUP_ID in out + + +@pytest.mark.dependency(depends=["create_backup"]) +def test_cancel_backup(capsys, instance_id, sample_database): + backup_sample.cancel_backup( + instance_id, + sample_database.database_id, + BACKUP_ID, + ) + out, _ = capsys.readouterr() + cancel_success = "Backup creation was successfully cancelled." in out + cancel_failure = ("Backup was created before the cancel completed." in out) and ( + "Backup deleted." 
in out + ) + assert cancel_success or cancel_failure + + +@RetryErrors(exception=DeadlineExceeded, max_tries=2) +def test_create_database_with_retention_period(capsys, sample_instance): + backup_sample.create_database_with_version_retention_period( + sample_instance.instance_id, + RETENTION_DATABASE_ID, + RETENTION_PERIOD, + ) + out, _ = capsys.readouterr() + assert (RETENTION_DATABASE_ID + " created with ") in out + assert ("retention period " + RETENTION_PERIOD) in out + database = sample_instance.database(RETENTION_DATABASE_ID) + database.drop() diff --git a/samples/samples/backup_schedule_samples.py b/samples/samples/backup_schedule_samples.py new file mode 100644 index 0000000000..c3c86b1538 --- /dev/null +++ b/samples/samples/backup_schedule_samples.py @@ -0,0 +1,316 @@ +# Copyright 2024 Google Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +""" +This application demonstrates how to create and manage backup schedules using +Cloud Spanner. +""" + +import argparse + +from enum import Enum + + +# [START spanner_create_full_backup_schedule] +def create_full_backup_schedule( + instance_id: str, + database_id: str, + schedule_id: str, +) -> None: + from datetime import timedelta + from google.cloud import spanner + from google.cloud.spanner_admin_database_v1.types import ( + backup_schedule as backup_schedule_pb, + ) + from google.cloud.spanner_admin_database_v1.types import ( + CreateBackupEncryptionConfig, + FullBackupSpec, + ) + + client = spanner.Client() + database_admin_api = client.database_admin_api + + request = backup_schedule_pb.CreateBackupScheduleRequest( + parent=database_admin_api.database_path( + client.project, instance_id, database_id + ), + backup_schedule_id=schedule_id, + backup_schedule=backup_schedule_pb.BackupSchedule( + spec=backup_schedule_pb.BackupScheduleSpec( + cron_spec=backup_schedule_pb.CrontabSpec( + text="30 12 * * *", + ), + ), + retention_duration=timedelta(hours=24), + encryption_config=CreateBackupEncryptionConfig( + encryption_type=CreateBackupEncryptionConfig.EncryptionType.USE_DATABASE_ENCRYPTION, + ), + full_backup_spec=FullBackupSpec(), + ), + ) + + response = database_admin_api.create_backup_schedule(request) + print(f"Created full backup schedule: {response}") + + +# [END spanner_create_full_backup_schedule] + + +# [START spanner_create_incremental_backup_schedule] +def create_incremental_backup_schedule( + instance_id: str, + database_id: str, + schedule_id: str, +) -> None: + from datetime import timedelta + from google.cloud import spanner + from google.cloud.spanner_admin_database_v1.types import ( + backup_schedule as backup_schedule_pb, + ) + from google.cloud.spanner_admin_database_v1.types import ( + CreateBackupEncryptionConfig, + IncrementalBackupSpec, + ) + + client = spanner.Client() + database_admin_api = client.database_admin_api + + request = backup_schedule_pb.CreateBackupScheduleRequest( + parent=database_admin_api.database_path( + client.project, instance_id, database_id + ), + 
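# As in the full-backup sample above, this schedule fires daily at 12:30 (cron "30 12 * * *"; UTC unless a time zone is configured) and retains each backup for 24 hours. +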
backup_schedule_id=schedule_id, + backup_schedule=backup_schedule_pb.BackupSchedule( + spec=backup_schedule_pb.BackupScheduleSpec( + cron_spec=backup_schedule_pb.CrontabSpec( + text="30 12 * * *", + ), + ), + retention_duration=timedelta(hours=24), + encryption_config=CreateBackupEncryptionConfig( + encryption_type=CreateBackupEncryptionConfig.EncryptionType.GOOGLE_DEFAULT_ENCRYPTION, + ), + incremental_backup_spec=IncrementalBackupSpec(), + ), + ) + + response = database_admin_api.create_backup_schedule(request) + print(f"Created incremental backup schedule: {response}") + + +# [END spanner_create_incremental_backup_schedule] + + +# [START spanner_list_backup_schedules] +def list_backup_schedules(instance_id: str, database_id: str) -> None: + from google.cloud import spanner + from google.cloud.spanner_admin_database_v1.types import ( + backup_schedule as backup_schedule_pb, + ) + + client = spanner.Client() + database_admin_api = client.database_admin_api + + request = backup_schedule_pb.ListBackupSchedulesRequest( + parent=database_admin_api.database_path( + client.project, + instance_id, + database_id, + ), + ) + + for backup_schedule in database_admin_api.list_backup_schedules(request): + print(f"Backup schedule: {backup_schedule}") + + +# [END spanner_list_backup_schedules] + + +# [START spanner_get_backup_schedule] +def get_backup_schedule( + instance_id: str, + database_id: str, + schedule_id: str, +) -> None: + from google.cloud import spanner + from google.cloud.spanner_admin_database_v1.types import ( + backup_schedule as backup_schedule_pb, + ) + + client = spanner.Client() + database_admin_api = client.database_admin_api + + request = backup_schedule_pb.GetBackupScheduleRequest( + name=database_admin_api.backup_schedule_path( + client.project, + instance_id, + database_id, + schedule_id, + ), + ) + + response = database_admin_api.get_backup_schedule(request) + print(f"Backup schedule: {response}") + + +# [END spanner_get_backup_schedule] + + +# [START spanner_update_backup_schedule] +def update_backup_schedule( + instance_id: str, + database_id: str, + schedule_id: str, +) -> None: + from datetime import timedelta + from google.cloud import spanner + from google.cloud.spanner_admin_database_v1.types import ( + backup_schedule as backup_schedule_pb, + ) + from google.cloud.spanner_admin_database_v1.types import ( + CreateBackupEncryptionConfig, + ) + from google.protobuf.field_mask_pb2 import FieldMask + + client = spanner.Client() + database_admin_api = client.database_admin_api + + request = backup_schedule_pb.UpdateBackupScheduleRequest( + backup_schedule=backup_schedule_pb.BackupSchedule( + name=database_admin_api.backup_schedule_path( + client.project, + instance_id, + database_id, + schedule_id, + ), + spec=backup_schedule_pb.BackupScheduleSpec( + cron_spec=backup_schedule_pb.CrontabSpec( + text="45 15 * * *", + ), + ), + retention_duration=timedelta(hours=48), + encryption_config=CreateBackupEncryptionConfig( + encryption_type=CreateBackupEncryptionConfig.EncryptionType.USE_DATABASE_ENCRYPTION, + ), + ), + update_mask=FieldMask( + paths=[ + "spec.cron_spec.text", + "retention_duration", + "encryption_config", + ], + ), + ) + + response = database_admin_api.update_backup_schedule(request) + print(f"Updated backup schedule: {response}") + + +# [END spanner_update_backup_schedule] + + +# [START spanner_delete_backup_schedule] +def delete_backup_schedule( + instance_id: str, + database_id: str, + schedule_id: str, +) -> None: + from google.cloud import spanner + from 
google.cloud.spanner_admin_database_v1.types import ( + backup_schedule as backup_schedule_pb, + ) + + client = spanner.Client() + database_admin_api = client.database_admin_api + + request = backup_schedule_pb.DeleteBackupScheduleRequest( + name=database_admin_api.backup_schedule_path( + client.project, + instance_id, + database_id, + schedule_id, + ), + ) + + database_admin_api.delete_backup_schedule(request) + print("Deleted backup schedule") + + +# [END spanner_delete_backup_schedule] + + +class Command(Enum): + CREATE_FULL_BACKUP_SCHEDULE = "create-full-backup-schedule" + CREATE_INCREMENTAL_BACKUP_SCHEDULE = "create-incremental-backup-schedule" + LIST_BACKUP_SCHEDULES = "list-backup-schedules" + GET_BACKUP_SCHEDULE = "get-backup-schedule" + UPDATE_BACKUP_SCHEDULE = "update-backup-schedule" + DELETE_BACKUP_SCHEDULE = "delete-backup-schedule" + + def __str__(self): + return self.value + + +if __name__ == "__main__": + parser = argparse.ArgumentParser( + description=__doc__, + formatter_class=argparse.RawDescriptionHelpFormatter, + ) + parser.add_argument("--instance-id", required=True) + parser.add_argument("--database-id", required=True) + parser.add_argument("--schedule-id", required=False) + parser.add_argument( + "command", + type=Command, + choices=list(Command), + ) + args = parser.parse_args() + + if args.command == Command.CREATE_FULL_BACKUP_SCHEDULE: + create_full_backup_schedule( + args.instance_id, + args.database_id, + args.schedule_id, + ) + elif args.command == Command.CREATE_INCREMENTAL_BACKUP_SCHEDULE: + create_incremental_backup_schedule( + args.instance_id, + args.database_id, + args.schedule_id, + ) + elif args.command == Command.LIST_BACKUP_SCHEDULES: + list_backup_schedules( + args.instance_id, + args.database_id, + ) + elif args.command == Command.GET_BACKUP_SCHEDULE: + get_backup_schedule( + args.instance_id, + args.database_id, + args.schedule_id, + ) + elif args.command == Command.UPDATE_BACKUP_SCHEDULE: + update_backup_schedule( + args.instance_id, + args.database_id, + args.schedule_id, + ) + elif args.command == Command.DELETE_BACKUP_SCHEDULE: + delete_backup_schedule( + args.instance_id, + args.database_id, + args.schedule_id, + ) + else: + print(f"Unknown command: {args.command}") diff --git a/samples/samples/backup_schedule_samples_test.py b/samples/samples/backup_schedule_samples_test.py new file mode 100644 index 0000000000..6584d89701 --- /dev/null +++ b/samples/samples/backup_schedule_samples_test.py @@ -0,0 +1,163 @@ +# Copyright 2024 Google Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
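+ +# Note: these tests are order-dependent; the pytest.mark.dependency markers ensure +# the create_* schedules exist before the list, get, update and delete tests run.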
+ +import backup_schedule_samples as samples +import pytest +import uuid + + +__FULL_BACKUP_SCHEDULE_ID = "full-backup-schedule" +__INCREMENTAL_BACKUP_SCHEDULE_ID = "incremental-backup-schedule" + + +@pytest.fixture(scope="module") +def sample_name(): + return "backup_schedule" + + +@pytest.fixture(scope="module") +def database_id(): + return f"test-db-{uuid.uuid4().hex[:10]}" + + +@pytest.mark.dependency(name="create_full_backup_schedule") +def test_create_full_backup_schedule( + capsys, + sample_instance, + sample_database, +) -> None: + samples.create_full_backup_schedule( + sample_instance.instance_id, + sample_database.database_id, + __FULL_BACKUP_SCHEDULE_ID, + ) + out, _ = capsys.readouterr() + assert "Created full backup schedule" in out + assert ( + f"/instances/{sample_instance.instance_id}" + f"/databases/{sample_database.database_id}" + f"/backupSchedules/{__FULL_BACKUP_SCHEDULE_ID}" + ) in out + + +@pytest.mark.dependency(name="create_incremental_backup_schedule") +def test_create_incremental_backup_schedule( + capsys, + sample_instance, + sample_database, +) -> None: + samples.create_incremental_backup_schedule( + sample_instance.instance_id, + sample_database.database_id, + __INCREMENTAL_BACKUP_SCHEDULE_ID, + ) + out, _ = capsys.readouterr() + assert "Created incremental backup schedule" in out + assert ( + f"/instances/{sample_instance.instance_id}" + f"/databases/{sample_database.database_id}" + f"/backupSchedules/{__INCREMENTAL_BACKUP_SCHEDULE_ID}" + ) in out + + +@pytest.mark.dependency( + depends=[ + "create_full_backup_schedule", + "create_incremental_backup_schedule", + ] +) +def test_list_backup_schedules( + capsys, + sample_instance, + sample_database, +) -> None: + samples.list_backup_schedules( + sample_instance.instance_id, + sample_database.database_id, + ) + out, _ = capsys.readouterr() + assert ( + f"/instances/{sample_instance.instance_id}" + f"/databases/{sample_database.database_id}" + f"/backupSchedules/{__FULL_BACKUP_SCHEDULE_ID}" + ) in out + assert ( + f"/instances/{sample_instance.instance_id}" + f"/databases/{sample_database.database_id}" + f"/backupSchedules/{__INCREMENTAL_BACKUP_SCHEDULE_ID}" + ) in out + + +@pytest.mark.dependency(depends=["create_full_backup_schedule"]) +def test_get_backup_schedule( + capsys, + sample_instance, + sample_database, +) -> None: + samples.get_backup_schedule( + sample_instance.instance_id, + sample_database.database_id, + __FULL_BACKUP_SCHEDULE_ID, + ) + out, _ = capsys.readouterr() + assert ( + f"/instances/{sample_instance.instance_id}" + f"/databases/{sample_database.database_id}" + f"/backupSchedules/{__FULL_BACKUP_SCHEDULE_ID}" + ) in out + + +@pytest.mark.dependency(depends=["create_full_backup_schedule"]) +def test_update_backup_schedule( + capsys, + sample_instance, + sample_database, +) -> None: + samples.update_backup_schedule( + sample_instance.instance_id, + sample_database.database_id, + __FULL_BACKUP_SCHEDULE_ID, + ) + out, _ = capsys.readouterr() + assert "Updated backup schedule" in out + assert ( + f"/instances/{sample_instance.instance_id}" + f"/databases/{sample_database.database_id}" + f"/backupSchedules/{__FULL_BACKUP_SCHEDULE_ID}" + ) in out + + +@pytest.mark.dependency( + depends=[ + "create_full_backup_schedule", + "create_incremental_backup_schedule", + ] +) +def test_delete_backup_schedule( + capsys, + sample_instance, + sample_database, +) -> None: + samples.delete_backup_schedule( + sample_instance.instance_id, + sample_database.database_id, + __FULL_BACKUP_SCHEDULE_ID, + ) + 
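# Delete the incremental schedule as well so the shared database is left clean. +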
samples.delete_backup_schedule( + sample_instance.instance_id, + sample_database.database_id, + __INCREMENTAL_BACKUP_SCHEDULE_ID, + ) + out, _ = capsys.readouterr() + assert "Deleted backup schedule" in out diff --git a/samples/samples/batch_sample.py b/samples/samples/batch_sample.py new file mode 100644 index 0000000000..69913ac4b3 --- /dev/null +++ b/samples/samples/batch_sample.py @@ -0,0 +1,95 @@ +# Copyright 2018 Google Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""This application demonstrates how to do batch operations using Cloud +Spanner. + +For more information, see the README.rst under /spanner. +""" + +import argparse +import concurrent.futures +import time + +from google.cloud import spanner + + +# [START spanner_batch_client] +def run_batch_query(instance_id, database_id): + """Runs an example batch query.""" + + # Expected Table Format: + # CREATE TABLE Singers ( + # SingerId INT64 NOT NULL, + # FirstName STRING(1024), + # LastName STRING(1024), + # SingerInfo BYTES(MAX), + # ) PRIMARY KEY (SingerId); + + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + # Create the batch transaction and generate partitions + snapshot = database.batch_snapshot() + partitions = snapshot.generate_read_batches( + table="Singers", + columns=("SingerId", "FirstName", "LastName"), + keyset=spanner.KeySet(all_=True), + # A Partition object is serializable and can be used from a different process. + # The optional data_boost_enabled flag below routes these partitioned read and + # query requests to Spanner Data Boost, which runs them on compute resources + # independent of the instance's own capacity.
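+ # (Assumption, based on Cloud Spanner documentation: Data Boost usage is billed + # separately and the caller needs the spanner.databases.useDataBoost IAM permission.)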
+ data_boost_enabled=True, + ) + + # Create a pool of workers for the tasks + start = time.time() + with concurrent.futures.ThreadPoolExecutor() as executor: + futures = [executor.submit(process, snapshot, p) for p in partitions] + + for future in concurrent.futures.as_completed(futures, timeout=3600): + finish, row_ct = future.result() + elapsed = finish - start + print("Completed {} rows in {} seconds".format(row_ct, elapsed)) + + # Clean up + snapshot.close() + + +def process(snapshot, partition): + """Processes the requests of one partition in a separate worker thread.""" + print("Started processing partition.") + row_ct = 0 + for row in snapshot.process_read_batch(partition): + print("SingerId: {}, FirstName: {}, LastName: {}".format(*row)) + row_ct += 1 + return time.time(), row_ct + + +# [END spanner_batch_client] + + +if __name__ == "__main__": + parser = argparse.ArgumentParser( + description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter + ) + parser.add_argument("instance_id", help="Your Cloud Spanner instance ID.") + parser.add_argument( + "--database-id", help="Your Cloud Spanner database ID.", default="example_db" + ) + + subparsers = parser.add_subparsers(dest="command") + subparsers.add_parser("run_batch_query", help=run_batch_query.__doc__) + + args = parser.parse_args() + + if args.command == "run_batch_query": + run_batch_query(args.instance_id, args.database_id) diff --git a/samples/samples/conftest.py b/samples/samples/conftest.py new file mode 100644 index 0000000000..b34e9d16b1 --- /dev/null +++ b/samples/samples/conftest.py @@ -0,0 +1,368 @@ +# Copyright 2021 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +"""Shared pytest fixtures.""" + +import time +import uuid + +from google.api_core import exceptions +from google.cloud import spanner_admin_database_v1 +from google.cloud.spanner_admin_database_v1.types.common import DatabaseDialect +from google.cloud.spanner_v1 import backup, client, database, instance +import pytest +from test_utils import retry +from google.cloud.spanner_admin_instance_v1.types import spanner_instance_admin + +INSTANCE_CREATION_TIMEOUT = 560 # seconds + +OPERATION_TIMEOUT_SECONDS = 120 # seconds + +retry_429 = retry.RetryErrors(exceptions.ResourceExhausted, delay=15) + + +@pytest.fixture(scope="module") +def sample_name(): + """Sample testcase modules must define this fixture. + + The name is used to label the instance created by the sample, to + aid in debugging leaked instances. + """ + raise NotImplementedError("Define 'sample_name' fixture in sample test driver") + + +@pytest.fixture(scope="module") +def database_dialect(): + """Database dialect to be used for this sample. + + The dialect is used to initialize the dialect for the database. + It can either be GoogleStandardSql or PostgreSql. + """ + # By default, we consider GOOGLE_STANDARD_SQL dialect. Other specific tests + # can override this if required.
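+ # A PostgreSQL-dialect test module, for example, would override this fixture to return DatabaseDialect.POSTGRESQL.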
+ return DatabaseDialect.GOOGLE_STANDARD_SQL + + +@pytest.fixture(scope="session") +def spanner_client(): + """Shared client used across all samples in a session.""" + return client.Client() + + +def scrub_instance_ignore_not_found(to_scrub): + """Helper for func:`cleanup_old_instances`""" + try: + for backup_pb in to_scrub.list_backups(): + backup.Backup.from_pb(backup_pb, to_scrub).delete() + + retry_429(to_scrub.delete)() + except exceptions.NotFound: + pass + + +@pytest.fixture(scope="session") +def cleanup_old_instances(spanner_client): + """Delete instances, created by samples, that are older than an hour.""" + cutoff = int(time.time()) - 1 * 60 * 60 + instance_filter = "labels.cloud_spanner_samples:true" + + for instance_pb in spanner_client.list_instances(filter_=instance_filter): + inst = instance.Instance.from_pb(instance_pb, spanner_client) + + if "created" in inst.labels: + create_time = int(inst.labels["created"]) + + if create_time <= cutoff: + scrub_instance_ignore_not_found(inst) + + +@pytest.fixture(scope="module") +def instance_id(): + """Unique id for the instance used in samples.""" + return f"test-instance-{uuid.uuid4().hex[:10]}" + + +@pytest.fixture(scope="module") +def multi_region_instance_id(): + """Unique id for the multi-region instance used in samples.""" + return f"multi-instance-{uuid.uuid4().hex[:10]}" + + +@pytest.fixture(scope="module") +def instance_config(spanner_client): + return "{}/instanceConfigs/{}".format( + spanner_client.project_name, "regional-us-central1" + ) + + +@pytest.fixture(scope="module") +def multi_region_instance_config(spanner_client): + return "{}/instanceConfigs/{}".format(spanner_client.project_name, "nam3") + + +@pytest.fixture(scope="module") +def proto_descriptor_file(): + import os + + dirname = os.path.dirname(__file__) + filename = os.path.join(dirname, "testdata/descriptors.pb") + file = open(filename, "rb") + yield file.read() + file.close() + + +@pytest.fixture(scope="module") +def sample_instance( + spanner_client, + cleanup_old_instances, + instance_id, + instance_config, + sample_name, +): + operation = spanner_client.instance_admin_api.create_instance( + parent=spanner_client.project_name, + instance_id=instance_id, + instance=spanner_instance_admin.Instance( + config=instance_config, + display_name="This is a display name.", + node_count=1, + labels={ + "cloud_spanner_samples": "true", + "sample_name": sample_name, + "created": str(int(time.time())), + }, + edition=spanner_instance_admin.Instance.Edition.ENTERPRISE_PLUS, # Optional + ), + ) + operation.result(INSTANCE_CREATION_TIMEOUT) # block until completion + + sample_instance = spanner_client.instance(instance_id) + + # Eventual consistency check + retry_found = retry.RetryResult(bool) + retry_found(sample_instance.exists)() + + yield sample_instance + + for database_pb in sample_instance.list_databases(): + database.Database.from_pb(database_pb, sample_instance).drop() + + for backup_pb in sample_instance.list_backups(): + backup.Backup.from_pb(backup_pb, sample_instance).delete() + + sample_instance.delete() + + +@pytest.fixture(scope="module") +def multi_region_instance( + spanner_client, + cleanup_old_instances, + multi_region_instance_id, + multi_region_instance_config, + sample_name, +): + multi_region_instance = spanner_client.instance( + multi_region_instance_id, + multi_region_instance_config, + labels={ + "cloud_spanner_samples": "true", + "sample_name": sample_name, + "created": str(int(time.time())), + }, + ) + op = 
retry_429(multi_region_instance.create)() + op.result(INSTANCE_CREATION_TIMEOUT) # block until completion + + # Eventual consistency check + retry_found = retry.RetryResult(bool) + retry_found(multi_region_instance.exists)() + + yield multi_region_instance + + for database_pb in multi_region_instance.list_databases(): + database.Database.from_pb(database_pb, multi_region_instance).drop() + + for backup_pb in multi_region_instance.list_backups(): + backup.Backup.from_pb(backup_pb, multi_region_instance).delete() + + multi_region_instance.delete() + + +@pytest.fixture(scope="module") +def database_id(): + """Id for the database used in samples. + + Sample testcase modules can override as needed. + """ + return "my-database-id" + + +@pytest.fixture(scope="module") +def proto_columns_database( + spanner_client, + sample_instance, + proto_columns_database_id, + proto_columns_database_ddl, + database_dialect, +): + if database_dialect == DatabaseDialect.GOOGLE_STANDARD_SQL: + sample_database = sample_instance.database( + proto_columns_database_id, + ddl_statements=proto_columns_database_ddl, + ) + + if not sample_database.exists(): + operation = sample_database.create() + operation.result(OPERATION_TIMEOUT_SECONDS) + + yield sample_database + + sample_database.drop() + + +@pytest.fixture(scope="module") +def bit_reverse_sequence_database_id(): + """Id for the database used in bit reverse sequence samples. + + Sample testcase modules can override as needed. + """ + return "sequence-database-id" + + +@pytest.fixture(scope="module") +def database_ddl(): + """Sequence of DDL statements used to set up the database. + + Sample testcase modules can override as needed. + """ + return [] + + +def create_sample_database( + spanner_client, sample_instance, database_id, database_ddl, database_dialect +): + if database_dialect == DatabaseDialect.POSTGRESQL: + sample_database = sample_instance.database( + database_id, + database_dialect=DatabaseDialect.POSTGRESQL, + ) + + if not sample_database.exists(): + operation = sample_database.create() + operation.result(OPERATION_TIMEOUT_SECONDS) + + request = spanner_admin_database_v1.UpdateDatabaseDdlRequest( + database=sample_database.name, + statements=database_ddl, + ) + + operation = spanner_client.database_admin_api.update_database_ddl(request) + operation.result(OPERATION_TIMEOUT_SECONDS) + + yield sample_database + + sample_database.drop() + return + + sample_database = sample_instance.database( + database_id, + ddl_statements=database_ddl, + ) + + if not sample_database.exists(): + operation = sample_database.create() + operation.result(OPERATION_TIMEOUT_SECONDS) + + yield sample_database + + sample_database.drop() + + +@pytest.fixture(scope="module") +def sample_database( + spanner_client, sample_instance, database_id, database_ddl, database_dialect +): + yield from create_sample_database( + spanner_client, sample_instance, database_id, database_ddl, database_dialect + ) + + +@pytest.fixture(scope="module") +def sample_multi_region_database( + spanner_client, multi_region_instance, database_id, database_ddl, database_dialect +): + yield from create_sample_database( + spanner_client, + multi_region_instance, + database_id, + database_ddl, + database_dialect, + ) + + +@pytest.fixture(scope="module") +def bit_reverse_sequence_database( + spanner_client, sample_instance, bit_reverse_sequence_database_id, database_dialect +): + if database_dialect == DatabaseDialect.POSTGRESQL: + bit_reverse_sequence_database = sample_instance.database( + 
bit_reverse_sequence_database_id, + database_dialect=DatabaseDialect.POSTGRESQL, + ) + + if not bit_reverse_sequence_database.exists(): + operation = bit_reverse_sequence_database.create() + operation.result(OPERATION_TIMEOUT_SECONDS) + + yield bit_reverse_sequence_database + + bit_reverse_sequence_database.drop() + return + + bit_reverse_sequence_database = sample_instance.database( + bit_reverse_sequence_database_id + ) + + if not bit_reverse_sequence_database.exists(): + operation = bit_reverse_sequence_database.create() + operation.result(OPERATION_TIMEOUT_SECONDS) + + yield bit_reverse_sequence_database + + bit_reverse_sequence_database.drop() + + +@pytest.fixture(scope="module") +def kms_key_name(spanner_client): + return "projects/{}/locations/{}/keyRings/{}/cryptoKeys/{}".format( + spanner_client.project, + "us-central1", + "spanner-test-keyring", + "spanner-test-cmek", + ) + + +@pytest.fixture(scope="module") +def kms_key_names(spanner_client): + kms_key_names_list = [] + # this list of cloud-regions correspond to `nam3` + for cloud_region in ["us-east1", "us-east4", "us-central1"]: + kms_key_names_list.append( + "projects/{}/locations/{}/keyRings/{}/cryptoKeys/{}".format( + spanner_client.project, + cloud_region, + "spanner-test-keyring", + "spanner-test-cmek", + ) + ) + return kms_key_names_list diff --git a/samples/samples/graph_snippets.py b/samples/samples/graph_snippets.py new file mode 100644 index 0000000000..e557290b19 --- /dev/null +++ b/samples/samples/graph_snippets.py @@ -0,0 +1,407 @@ +#!/usr/bin/env python + +# Copyright 2024 Google, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""This application demonstrates how to do basic graph operations using +Cloud Spanner. + +For more information, see the README.rst under /spanner. 
+""" + +import argparse + +from google.cloud import spanner + +OPERATION_TIMEOUT_SECONDS = 240 + + +# [START spanner_create_database_with_property_graph] +def create_database_with_property_graph(instance_id, database_id): + """Creates a database, tables and a property graph for sample data.""" + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.CreateDatabaseRequest( + parent=database_admin_api.instance_path(spanner_client.project, instance_id), + create_statement=f"CREATE DATABASE `{database_id}`", + extra_statements=[ + """CREATE TABLE Person ( + id INT64 NOT NULL, + name STRING(MAX), + birthday TIMESTAMP, + country STRING(MAX), + city STRING(MAX), + ) PRIMARY KEY (id)""", + """CREATE TABLE Account ( + id INT64 NOT NULL, + create_time TIMESTAMP, + is_blocked BOOL, + nick_name STRING(MAX), + ) PRIMARY KEY (id)""", + """CREATE TABLE PersonOwnAccount ( + id INT64 NOT NULL, + account_id INT64 NOT NULL, + create_time TIMESTAMP, + FOREIGN KEY (account_id) + REFERENCES Account (id) + ) PRIMARY KEY (id, account_id), + INTERLEAVE IN PARENT Person ON DELETE CASCADE""", + """CREATE TABLE AccountTransferAccount ( + id INT64 NOT NULL, + to_id INT64 NOT NULL, + amount FLOAT64, + create_time TIMESTAMP NOT NULL, + order_number STRING(MAX), + FOREIGN KEY (to_id) REFERENCES Account (id) + ) PRIMARY KEY (id, to_id, create_time), + INTERLEAVE IN PARENT Account ON DELETE CASCADE""", + """CREATE OR REPLACE PROPERTY GRAPH FinGraph + NODE TABLES (Account, Person) + EDGE TABLES ( + PersonOwnAccount + SOURCE KEY(id) REFERENCES Person(id) + DESTINATION KEY(account_id) REFERENCES Account(id) + LABEL Owns, + AccountTransferAccount + SOURCE KEY(id) REFERENCES Account(id) + DESTINATION KEY(to_id) REFERENCES Account(id) + LABEL Transfers)""", + ], + ) + + operation = database_admin_api.create_database(request=request) + + print("Waiting for operation to complete...") + database = operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Created database {} on instance {}".format( + database.name, + database_admin_api.instance_path(spanner_client.project, instance_id), + ) + ) + + +# [END spanner_create_database_with_property_graph] + + +# [START spanner_insert_graph_data] +def insert_data(instance_id, database_id): + """Inserts sample data into the given database. + + The database and tables must already exist and can be created using + `create_database_with_property_graph`. 
+ """ + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + with database.batch() as batch: + batch.insert( + table="Account", + columns=("id", "create_time", "is_blocked", "nick_name"), + values=[ + (7, "2020-01-10T06:22:20.12Z", False, "Vacation Fund"), + (16, "2020-01-27T17:55:09.12Z", True, "Vacation Fund"), + (20, "2020-02-18T05:44:20.12Z", False, "Rainy Day Fund"), + ], + ) + + batch.insert( + table="Person", + columns=("id", "name", "birthday", "country", "city"), + values=[ + (1, "Alex", "1991-12-21T00:00:00.12Z", "Australia", " Adelaide"), + (2, "Dana", "1980-10-31T00:00:00.12Z", "Czech_Republic", "Moravia"), + (3, "Lee", "1986-12-07T00:00:00.12Z", "India", "Kollam"), + ], + ) + + batch.insert( + table="AccountTransferAccount", + columns=("id", "to_id", "amount", "create_time", "order_number"), + values=[ + (7, 16, 300.0, "2020-08-29T15:28:58.12Z", "304330008004315"), + (7, 16, 100.0, "2020-10-04T16:55:05.12Z", "304120005529714"), + (16, 20, 300.0, "2020-09-25T02:36:14.12Z", "103650009791820"), + (20, 7, 500.0, "2020-10-04T16:55:05.12Z", "304120005529714"), + (20, 16, 200.0, "2020-10-17T03:59:40.12Z", "302290001255747"), + ], + ) + + batch.insert( + table="PersonOwnAccount", + columns=("id", "account_id", "create_time"), + values=[ + (1, 7, "2020-01-10T06:22:20.12Z"), + (2, 20, "2020-01-27T17:55:09.12Z"), + (3, 16, "2020-02-18T05:44:20.12Z"), + ], + ) + + print("Inserted data.") + + +# [END spanner_insert_graph_data] + + +# [START spanner_insert_graph_data_with_dml] +def insert_data_with_dml(instance_id, database_id): + """Inserts sample data into the given database using a DML statement.""" + + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + def insert_accounts(transaction): + row_ct = transaction.execute_update( + "INSERT INTO Account (id, create_time, is_blocked) " + " VALUES" + " (1, CAST('2000-08-10 08:18:48.463959-07:52' AS TIMESTAMP), false)," + " (2, CAST('2000-08-12 07:13:16.463959-03:41' AS TIMESTAMP), true)" + ) + + print("{} record(s) inserted into Account.".format(row_ct)) + + def insert_transfers(transaction): + row_ct = transaction.execute_update( + "INSERT INTO AccountTransferAccount (id, to_id, create_time, amount) " + " VALUES" + " (1, 2, CAST('2000-09-11 03:11:18.463959-06:36' AS TIMESTAMP), 100)," + " (1, 1, CAST('2000-09-12 04:09:34.463959-05:12' AS TIMESTAMP), 200) " + ) + + print("{} record(s) inserted into AccountTransferAccount.".format(row_ct)) + + database.run_in_transaction(insert_accounts) + database.run_in_transaction(insert_transfers) + + +# [END spanner_insert_graph_data_with_dml] + + +# [START spanner_update_graph_data_with_dml] +def update_data_with_dml(instance_id, database_id): + """Updates sample data from the database using a DML statement.""" + + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + def update_accounts(transaction): + row_ct = transaction.execute_update( + "UPDATE Account SET is_blocked = false WHERE id = 2" + ) + + print("{} Account record(s) updated.".format(row_ct)) + + def update_transfers(transaction): + row_ct = transaction.execute_update( + "UPDATE AccountTransferAccount SET amount = 300 WHERE id = 1 AND to_id = 2" + ) + + print("{} AccountTransferAccount record(s) updated.".format(row_ct)) + + database.run_in_transaction(update_accounts) + 
database.run_in_transaction(update_transfers) + + +# [END spanner_update_graph_data_with_dml] + + +# [START spanner_update_graph_data_with_graph_query_in_dml] +def update_data_with_graph_query_in_dml(instance_id, database_id): + """Updates sample data from the database using a DML statement.""" + + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + def update_accounts(transaction): + row_ct = transaction.execute_update( + "UPDATE Account SET is_blocked = true " + "WHERE id IN {" + " GRAPH FinGraph" + " MATCH (a:Account WHERE a.id = 1)-[:TRANSFERS]->{1,2}(b:Account)" + " RETURN b.id}" + ) + + print("{} Account record(s) updated.".format(row_ct)) + + database.run_in_transaction(update_accounts) + + +# [END spanner_update_graph_data_with_graph_query_in_dml] + + +# [START spanner_query_graph_data] +def query_data(instance_id, database_id): + """Queries sample data from the database using GQL.""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + """Graph FinGraph + MATCH (a:Person)-[o:Owns]->()-[t:Transfers]->()<-[p:Owns]-(b:Person) + RETURN a.name AS sender, b.name AS receiver, t.amount, t.create_time AS transfer_at""" + ) + + for row in results: + print("sender: {}, receiver: {}, amount: {}, transfer_at: {}".format(*row)) + + +# [END spanner_query_graph_data] + + +# [START spanner_query_graph_data_with_parameter] +def query_data_with_parameter(instance_id, database_id): + """Queries sample data from the database using SQL with a parameter.""" + + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + """Graph FinGraph + MATCH (a:Person)-[o:Owns]->()-[t:Transfers]->()<-[p:Owns]-(b:Person) + WHERE t.amount >= @min + RETURN a.name AS sender, b.name AS receiver, t.amount, t.create_time AS transfer_at""", + params={"min": 500}, + param_types={"min": spanner.param_types.INT64}, + ) + + for row in results: + print("sender: {}, receiver: {}, amount: {}, transfer_at: {}".format(*row)) + + +# [END spanner_query_graph_data_with_parameter] + + +# [START spanner_delete_graph_data_with_dml] +def delete_data_with_dml(instance_id, database_id): + """Deletes sample data from the database using a DML statement.""" + + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + def delete_transfers(transaction): + row_ct = transaction.execute_update( + "DELETE FROM AccountTransferAccount WHERE id = 1 AND to_id = 2" + ) + + print("{} AccountTransferAccount record(s) deleted.".format(row_ct)) + + def delete_accounts(transaction): + row_ct = transaction.execute_update("DELETE FROM Account WHERE id = 2") + + print("{} Account record(s) deleted.".format(row_ct)) + + database.run_in_transaction(delete_transfers) + database.run_in_transaction(delete_accounts) + + +# [END spanner_delete_graph_data_with_dml] + + +# [START spanner_delete_graph_data] +def delete_data(instance_id, database_id): + """Deletes sample data from the given database. + + The database, table, and data must already exist and can be created using + `create_database` and `insert_data`. 
+ """ + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + # Delete individual rows + ownerships_to_delete = spanner.KeySet(keys=[[1, 7], [2, 20]]) + + # Delete a range of rows where the column key is >=1 and <8 + transfers_range = spanner.KeyRange(start_closed=[1], end_open=[8]) + transfers_to_delete = spanner.KeySet(ranges=[transfers_range]) + + # Delete Account/Person rows, which will also delete the remaining + # AccountTransferAccount and PersonOwnAccount rows because + # AccountTransferAccount and PersonOwnAccount are defined with + # ON DELETE CASCADE + remaining_nodes = spanner.KeySet(all_=True) + + with database.batch() as batch: + batch.delete("PersonOwnAccount", ownerships_to_delete) + batch.delete("AccountTransferAccount", transfers_to_delete) + batch.delete("Account", remaining_nodes) + batch.delete("Person", remaining_nodes) + + print("Deleted data.") + + +# [END spanner_delete_graph_data] + + +if __name__ == "__main__": # noqa: C901 + parser = argparse.ArgumentParser( + description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter + ) + parser.add_argument("instance_id", help="Your Cloud Spanner instance ID.") + parser.add_argument( + "--database-id", help="Your Cloud Spanner database ID.", default="example_db" + ) + + subparsers = parser.add_subparsers(dest="command") + subparsers.add_parser( + "create_database_with_property_graph", + help=create_database_with_property_graph.__doc__, + ) + subparsers.add_parser("insert_data", help=insert_data.__doc__) + subparsers.add_parser("insert_data_with_dml", help=insert_data_with_dml.__doc__) + subparsers.add_parser("update_data_with_dml", help=update_data_with_dml.__doc__) + subparsers.add_parser( + "update_data_with_graph_query_in_dml", + help=update_data_with_graph_query_in_dml.__doc__, + ) + subparsers.add_parser("query_data", help=query_data.__doc__) + subparsers.add_parser( + "query_data_with_parameter", help=query_data_with_parameter.__doc__ + ) + subparsers.add_parser("delete_data", help=delete_data.__doc__) + subparsers.add_parser("delete_data_with_dml", help=delete_data_with_dml.__doc__) + + args = parser.parse_args() + + if args.command == "create_database_with_property_graph": + create_database_with_property_graph(args.instance_id, args.database_id) + elif args.command == "insert_data": + insert_data(args.instance_id, args.database_id) + elif args.command == "insert_data_with_dml": + insert_data_with_dml(args.instance_id, args.database_id) + elif args.command == "update_data_with_dml": + update_data_with_dml(args.instance_id, args.database_id) + elif args.command == "update_data_with_graph_query_in_dml": + update_data_with_graph_query_in_dml(args.instance_id, args.database_id) + elif args.command == "query_data": + query_data(args.instance_id, args.database_id) + elif args.command == "query_data_with_parameter": + query_data_with_parameter(args.instance_id, args.database_id) + elif args.command == "delete_data_with_dml": + delete_data_with_dml(args.instance_id, args.database_id) + elif args.command == "delete_data": + delete_data(args.instance_id, args.database_id) diff --git a/samples/samples/graph_snippets_test.py b/samples/samples/graph_snippets_test.py new file mode 100644 index 0000000000..bd49260007 --- /dev/null +++ b/samples/samples/graph_snippets_test.py @@ -0,0 +1,213 @@ +# Copyright 2024 Google, Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# import time +import uuid +import pytest + +from google.api_core import exceptions + +from google.cloud.spanner_admin_database_v1.types.common import DatabaseDialect +from test_utils.retry import RetryErrors + +import graph_snippets + +retry_429 = RetryErrors(exceptions.ResourceExhausted, delay=15) + +CREATE_TABLE_PERSON = """\ +CREATE TABLE Person ( + id INT64 NOT NULL, + name STRING(MAX), + birthday TIMESTAMP, + country STRING(MAX), + city STRING(MAX), +) PRIMARY KEY (id) +""" + +CREATE_TABLE_ACCOUNT = """\ + CREATE TABLE Account ( + id INT64 NOT NULL, + create_time TIMESTAMP, + is_blocked BOOL, + nick_name STRING(MAX), + ) PRIMARY KEY (id) +""" + +CREATE_TABLE_PERSON_OWN_ACCOUNT = """\ +CREATE TABLE PersonOwnAccount ( + id INT64 NOT NULL, + account_id INT64 NOT NULL, + create_time TIMESTAMP, + FOREIGN KEY (account_id) + REFERENCES Account (id) + ) PRIMARY KEY (id, account_id), + INTERLEAVE IN PARENT Person ON DELETE CASCADE +""" + +CREATE_TABLE_ACCOUNT_TRANSFER_ACCOUNT = """\ +CREATE TABLE AccountTransferAccount ( + id INT64 NOT NULL, + to_id INT64 NOT NULL, + amount FLOAT64, + create_time TIMESTAMP NOT NULL, + order_number STRING(MAX), + FOREIGN KEY (to_id) REFERENCES Account (id) + ) PRIMARY KEY (id, to_id, create_time), + INTERLEAVE IN PARENT Account ON DELETE CASCADE +""" + +CREATE_PROPERTY_GRAPH = """ +CREATE OR REPLACE PROPERTY GRAPH FinGraph + NODE TABLES (Account, Person) + EDGE TABLES ( + PersonOwnAccount + SOURCE KEY(id) REFERENCES Person(id) + DESTINATION KEY(account_id) REFERENCES Account(id) + LABEL Owns, + AccountTransferAccount + SOURCE KEY(id) REFERENCES Account(id) + DESTINATION KEY(to_id) REFERENCES Account(id) + LABEL Transfers) +""" + + +@pytest.fixture(scope="module") +def sample_name(): + return "snippets" + + +@pytest.fixture(scope="module") +def database_dialect(): + """Spanner dialect to be used for this sample. + + The dialect is used to initialize the dialect for the database. + It can either be GoogleStandardSql or PostgreSql. + """ + return DatabaseDialect.GOOGLE_STANDARD_SQL + + +@pytest.fixture(scope="module") +def database_id(): + return f"test-db-{uuid.uuid4().hex[:10]}" + + +@pytest.fixture(scope="module") +def create_database_id(): + return f"create-db-{uuid.uuid4().hex[:10]}" + + +@pytest.fixture(scope="module") +def database_ddl(): + """Sequence of DDL statements used to set up the database. + + Sample testcase modules can override as needed. 
+ """ + return [ + CREATE_TABLE_PERSON, + CREATE_TABLE_ACCOUNT, + CREATE_TABLE_PERSON_OWN_ACCOUNT, + CREATE_TABLE_ACCOUNT_TRANSFER_ACCOUNT, + CREATE_PROPERTY_GRAPH, + ] + + +def test_create_database_explicit(sample_instance, create_database_id): + graph_snippets.create_database_with_property_graph( + sample_instance.instance_id, create_database_id + ) + database = sample_instance.database(create_database_id) + database.drop() + + +@pytest.mark.dependency(name="insert_data") +def test_insert_data(capsys, instance_id, sample_database): + graph_snippets.insert_data(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Inserted data" in out + + +@pytest.mark.dependency(depends=["insert_data"]) +def test_query_data(capsys, instance_id, sample_database): + graph_snippets.query_data(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert ( + "sender: Dana, receiver: Alex, amount: 500.0, transfer_at: 2020-10-04 16:55:05.120000+00:00" + in out + ) + assert ( + "sender: Lee, receiver: Dana, amount: 300.0, transfer_at: 2020-09-25 02:36:14.120000+00:00" + in out + ) + assert ( + "sender: Alex, receiver: Lee, amount: 300.0, transfer_at: 2020-08-29 15:28:58.120000+00:00" + in out + ) + assert ( + "sender: Alex, receiver: Lee, amount: 100.0, transfer_at: 2020-10-04 16:55:05.120000+00:00" + in out + ) + assert ( + "sender: Dana, receiver: Lee, amount: 200.0, transfer_at: 2020-10-17 03:59:40.120000+00:00" + in out + ) + + +@pytest.mark.dependency(depends=["insert_data"]) +def test_query_data_with_parameter(capsys, instance_id, sample_database): + graph_snippets.query_data_with_parameter(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert ( + "sender: Dana, receiver: Alex, amount: 500.0, transfer_at: 2020-10-04 16:55:05.120000+00:00" + in out + ) + + +@pytest.mark.dependency(name="insert_data_with_dml", depends=["insert_data"]) +def test_insert_data_with_dml(capsys, instance_id, sample_database): + graph_snippets.insert_data_with_dml(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "2 record(s) inserted into Account." in out + assert "2 record(s) inserted into AccountTransferAccount." in out + + +@pytest.mark.dependency(name="update_data_with_dml", depends=["insert_data_with_dml"]) +def test_update_data_with_dml(capsys, instance_id, sample_database): + graph_snippets.update_data_with_dml(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "1 Account record(s) updated." in out + assert "1 AccountTransferAccount record(s) updated." in out + + +@pytest.mark.dependency(depends=["update_data_with_dml"]) +def test_update_data_with_graph_query_in_dml(capsys, instance_id, sample_database): + graph_snippets.update_data_with_graph_query_in_dml( + instance_id, sample_database.database_id + ) + out, _ = capsys.readouterr() + assert "2 Account record(s) updated." in out + + +@pytest.mark.dependency(depends=["update_data_with_dml"]) +def test_delete_data_with_graph_query_in_dml(capsys, instance_id, sample_database): + graph_snippets.delete_data_with_dml(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "1 AccountTransferAccount record(s) deleted." in out + assert "1 Account record(s) deleted." in out + + +@pytest.mark.dependency(depends=["insert_data"]) +def test_delete_data(capsys, instance_id, sample_database): + graph_snippets.delete_data(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Deleted data." 
in out diff --git a/samples/samples/noxfile.py b/samples/samples/noxfile.py new file mode 100644 index 0000000000..97dc6241e7 --- /dev/null +++ b/samples/samples/noxfile.py @@ -0,0 +1,292 @@ +# Copyright 2019 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from __future__ import print_function + +import glob +import os +from pathlib import Path +import sys +from typing import Callable, Dict, Optional + +import nox + + +# WARNING - WARNING - WARNING - WARNING - WARNING +# WARNING - WARNING - WARNING - WARNING - WARNING +# DO NOT EDIT THIS FILE EVER! +# WARNING - WARNING - WARNING - WARNING - WARNING +# WARNING - WARNING - WARNING - WARNING - WARNING + +BLACK_VERSION = "black==23.7.0" +ISORT_VERSION = "isort==5.10.1" + +# Copy `noxfile_config.py` to your directory and modify it instead. + +# `TEST_CONFIG` dict is a configuration hook that allows users to +# modify the test configurations. The values here should be in sync +# with `noxfile_config.py`. Users will copy `noxfile_config.py` into +# their directory and modify it. + +TEST_CONFIG = { + # You can opt out from the test for specific Python versions. + "ignored_versions": [], + # Old samples are opted out of enforcing Python type hints + # All new samples should feature them + "enforce_type_hints": False, + # An envvar key for determining the project id to use. Change it + # to 'BUILD_SPECIFIC_GCLOUD_PROJECT' if you want to opt in using a + # build specific Cloud project. You can also use your own string + # to use your own Cloud project. + "gcloud_project_env": "GOOGLE_CLOUD_PROJECT", + # 'gcloud_project_env': 'BUILD_SPECIFIC_GCLOUD_PROJECT', + # If you need to use a specific version of pip, + # change pip_version_override to the string representation + # of the version number, for example, "20.2.4" + "pip_version_override": None, + # A dictionary you want to inject into your test. Don't put any + # secrets here. These values will override predefined values. + "envs": {}, +} + + +try: + # Ensure we can import noxfile_config in the project's directory. + sys.path.append(".") + from noxfile_config import TEST_CONFIG_OVERRIDE +except ImportError as e: + print("No user noxfile_config found: detail: {}".format(e)) + TEST_CONFIG_OVERRIDE = {} + +# Update the TEST_CONFIG with the user supplied values. +TEST_CONFIG.update(TEST_CONFIG_OVERRIDE) + + +def get_pytest_env_vars() -> Dict[str, str]: + """Returns a dict for pytest invocation.""" + ret = {} + + # Override the GCLOUD_PROJECT and the alias. + env_key = TEST_CONFIG["gcloud_project_env"] + # This should error out if not set. + ret["GOOGLE_CLOUD_PROJECT"] = os.environ[env_key] + + # Apply user supplied envs. + ret.update(TEST_CONFIG["envs"]) + return ret + + +# DO NOT EDIT - automatically generated. +# All versions used to test samples. +ALL_VERSIONS = ["3.9", "3.10", "3.11", "3.12", "3.13"] + +# Any default versions that should be ignored. 
+IGNORED_VERSIONS = TEST_CONFIG["ignored_versions"] + +TESTED_VERSIONS = sorted([v for v in ALL_VERSIONS if v not in IGNORED_VERSIONS]) + +INSTALL_LIBRARY_FROM_SOURCE = os.environ.get("INSTALL_LIBRARY_FROM_SOURCE", False) in ( + "True", + "true", +) + +# Error if a python version is missing +nox.options.error_on_missing_interpreters = True + +# +# Style Checks +# + + +# Linting with flake8. +# +# We ignore the following rules: +# E203: whitespace before ‘:’ +# E266: too many leading ‘#’ for block comment +# E501: line too long +# I202: Additional newline in a section of imports +# +# We also need to specify the rules which are ignored by default: +# ['E226', 'W504', 'E126', 'E123', 'W503', 'E24', 'E704', 'E121'] +FLAKE8_COMMON_ARGS = [ + "--show-source", + "--builtin=gettext", + "--max-complexity=20", + "--exclude=.nox,.cache,env,lib,generated_pb2,*_pb2.py,*_pb2_grpc.py", + "--ignore=E121,E123,E126,E203,E226,E24,E266,E501,E704,W503,W504,I202", + "--max-line-length=88", +] + + +@nox.session +def lint(session: nox.sessions.Session) -> None: + if not TEST_CONFIG["enforce_type_hints"]: + session.install("flake8") + else: + session.install("flake8", "flake8-annotations") + + args = FLAKE8_COMMON_ARGS + [ + ".", + ] + session.run("flake8", *args) + + +# +# Black +# + + +@nox.session +def blacken(session: nox.sessions.Session) -> None: + """Run black. Format code to uniform standard.""" + session.install(BLACK_VERSION) + python_files = [path for path in os.listdir(".") if path.endswith(".py")] + + session.run("black", *python_files) + + +# +# format = isort + black +# + +@nox.session +def format(session: nox.sessions.Session) -> None: + """ + Run isort to sort imports. Then run black + to format code to uniform standard. + """ + session.install(BLACK_VERSION, ISORT_VERSION) + python_files = [path for path in os.listdir(".") if path.endswith(".py")] + + # Use the --fss option to sort imports using strict alphabetical order. 
+ # See https://pycqa.github.io/isort/docs/configuration/options.html#force-sort-within-sections + session.run("isort", "--fss", *python_files) + session.run("black", *python_files) + + +# +# Sample Tests +# + + +PYTEST_COMMON_ARGS = ["--junitxml=sponge_log.xml"] + + +def _session_tests( + session: nox.sessions.Session, post_install: Callable = None +) -> None: + # check for presence of tests + test_list = glob.glob("**/*_test.py", recursive=True) + glob.glob("**/test_*.py", recursive=True) + test_list.extend(glob.glob("**/tests", recursive=True)) + + if len(test_list) == 0: + print("No tests found, skipping directory.") + return + + if TEST_CONFIG["pip_version_override"]: + pip_version = TEST_CONFIG["pip_version_override"] + session.install(f"pip=={pip_version}") + """Runs py.test for a particular project.""" + concurrent_args = [] + if os.path.exists("requirements.txt"): + if os.path.exists("constraints.txt"): + session.install("-r", "requirements.txt", "-c", "constraints.txt") + else: + session.install("-r", "requirements.txt") + with open("requirements.txt") as rfile: + packages = rfile.read() + + if os.path.exists("requirements-test.txt"): + if os.path.exists("constraints-test.txt"): + session.install( + "-r", "requirements-test.txt", "-c", "constraints-test.txt" + ) + else: + session.install("-r", "requirements-test.txt") + with open("requirements-test.txt") as rtfile: + packages += rtfile.read() + + if INSTALL_LIBRARY_FROM_SOURCE: + session.install("-e", _get_repo_root()) + + if post_install: + post_install(session) + + if "pytest-parallel" in packages: + concurrent_args.extend(['--workers', 'auto', '--tests-per-worker', 'auto']) + elif "pytest-xdist" in packages: + concurrent_args.extend(['-n', 'auto']) + + session.run( + "pytest", + *(PYTEST_COMMON_ARGS + session.posargs + concurrent_args), + # Pytest will return 5 when no tests are collected. This can happen + # on travis where slow and flaky tests are excluded. + # See http://doc.pytest.org/en/latest/_modules/_pytest/main.html + success_codes=[0, 5], + env=get_pytest_env_vars(), + ) + + +@nox.session(python=ALL_VERSIONS) +def py(session: nox.sessions.Session) -> None: + """Runs py.test for a sample using the specified version of Python.""" + if session.python in TESTED_VERSIONS: + _session_tests(session) + else: + session.skip( + "SKIPPED: {} tests are disabled for this sample.".format(session.python) + ) + + +# +# Readmegen +# + + +def _get_repo_root() -> Optional[str]: + """ Returns the root folder of the project. """ + # Get root of this repository. Assume we don't have directories nested deeper than 10 items. 
+ p = Path(os.getcwd()) + for i in range(10): + if p is None: + break + if Path(p / ".git").exists(): + return str(p) + # .git is not available in repos cloned via Cloud Build + # setup.py is always in the library's root, so use that instead + # https://github.com/googleapis/synthtool/issues/792 + if Path(p / "setup.py").exists(): + return str(p) + p = p.parent + raise Exception("Unable to detect repository root.") + + +GENERATED_READMES = sorted([x for x in Path(".").rglob("*.rst.in")]) + + +@nox.session +@nox.parametrize("path", GENERATED_READMES) +def readmegen(session: nox.sessions.Session, path: str) -> None: + """(Re-)generates the readme for a sample.""" + session.install("jinja2", "pyyaml") + dir_ = os.path.dirname(path) + + if os.path.exists(os.path.join(dir_, "requirements.txt")): + session.install("-r", os.path.join(dir_, "requirements.txt")) + + in_file = os.path.join(dir_, "README.rst.in") + session.run( + "python", _get_repo_root() + "/scripts/readme-gen/readme_gen.py", in_file + ) diff --git a/samples/samples/pg_snippets.py b/samples/samples/pg_snippets.py new file mode 100644 index 0000000000..432d68a8ce --- /dev/null +++ b/samples/samples/pg_snippets.py @@ -0,0 +1,1925 @@ +#!/usr/bin/env python + +# Copyright 2022 Google, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""This application demonstrates how to do basic operations using Cloud +Spanner PostgreSql dialect. + +For more information, see the README.rst under /spanner. 
+""" +import argparse +import base64 +import datetime +import decimal +import time + +from google.cloud import spanner, spanner_admin_database_v1 +from google.cloud.spanner_admin_database_v1.types.common import DatabaseDialect +from google.cloud.spanner_v1 import param_types +from google.cloud.spanner_v1.data_types import JsonObject + +OPERATION_TIMEOUT_SECONDS = 240 + + +# [START spanner_postgresql_create_instance] +def create_instance(instance_id): + """Creates an instance.""" + spanner_client = spanner.Client() + + config_name = "{}/instanceConfigs/regional-us-central1".format( + spanner_client.project_name + ) + + instance = spanner_client.instance( + instance_id, + configuration_name=config_name, + display_name="This is a display name.", + node_count=1, + labels={ + "cloud_spanner_samples": "true", + "sample_name": "snippets-create_instance-explicit", + "created": str(int(time.time())), + }, + ) + + operation = instance.create() + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print("Created instance {}".format(instance_id)) + + +# [END spanner_postgresql_create_instance] + + +# [START spanner_postgresql_create_database] +def create_database(instance_id, database_id): + """Creates a PostgreSql database and tables for sample data.""" + + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.CreateDatabaseRequest( + parent=database_admin_api.instance_path(spanner_client.project, instance_id), + create_statement=f'CREATE DATABASE "{database_id}"', + database_dialect=DatabaseDialect.POSTGRESQL, + ) + + operation = database_admin_api.create_database(request=request) + + print("Waiting for operation to complete...") + database = operation.result(OPERATION_TIMEOUT_SECONDS) + + create_table_using_ddl(database.name) + print("Created database {} on instance {}".format(database_id, instance_id)) + + +def create_table_using_ddl(database_name): + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + request = spanner_database_admin.UpdateDatabaseDdlRequest( + database=database_name, + statements=[ + """CREATE TABLE Singers ( + SingerId bigint NOT NULL, + FirstName character varying(1024), + LastName character varying(1024), + SingerInfo bytea, + FullName character varying(2048) + GENERATED ALWAYS AS (FirstName || ' ' || LastName) STORED, + PRIMARY KEY (SingerId) + )""", + """CREATE TABLE Albums ( + SingerId bigint NOT NULL, + AlbumId bigint NOT NULL, + AlbumTitle character varying(1024), + PRIMARY KEY (SingerId, AlbumId) + ) INTERLEAVE IN PARENT Singers ON DELETE CASCADE""", + ], + ) + operation = spanner_client.database_admin_api.update_database_ddl(request) + operation.result(OPERATION_TIMEOUT_SECONDS) + + +# [END spanner_postgresql_create_database] + + +# [START spanner_postgresql_insert_data] +def insert_data(instance_id, database_id): + """Inserts sample data into the given database. + + The database and table must already exist and can be created using + `create_database`. 
+ """ + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + with database.batch() as batch: + batch.insert( + table="Singers", + columns=("SingerId", "FirstName", "LastName"), + values=[ + (1, "Marc", "Richards"), + (2, "Catalina", "Smith"), + (3, "Alice", "Trentor"), + (4, "Lea", "Martin"), + (5, "David", "Lomond"), + ], + ) + + batch.insert( + table="Albums", + columns=("SingerId", "AlbumId", "AlbumTitle"), + values=[ + (1, 1, "Total Junk"), + (1, 2, "Go, Go, Go"), + (2, 1, "Green"), + (2, 2, "Forever Hold Your Peace"), + (2, 3, "Terrified"), + ], + ) + + print("Inserted data.") + + +# [END spanner_postgresql_insert_data] + + +# [START spanner_postgresql_delete_data] +def delete_data(instance_id, database_id): + """Deletes sample data from the given database. + + The database, table, and data must already exist and can be created using + `create_database` and `insert_data`. + """ + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + # Delete individual rows + albums_to_delete = spanner.KeySet(keys=[[2, 1], [2, 3]]) + + # Delete a range of rows where the column key is >=3 and <5 + singers_range = spanner.KeyRange(start_closed=[3], end_open=[5]) + singers_to_delete = spanner.KeySet(ranges=[singers_range]) + + # Delete remaining Singers rows, which will also delete the remaining + # Albums rows because Albums was defined with ON DELETE CASCADE + remaining_singers = spanner.KeySet(all_=True) + + with database.batch() as batch: + batch.delete("Albums", albums_to_delete) + batch.delete("Singers", singers_to_delete) + batch.delete("Singers", remaining_singers) + + print("Deleted data.") + + +# [END spanner_postgresql_delete_data] + + +# [START spanner_postgresql_query_data] +def query_data(instance_id, database_id): + """Queries sample data from the database using SQL.""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT SingerId, AlbumId, AlbumTitle FROM Albums" + ) + + for row in results: + print("SingerId: {}, AlbumId: {}, AlbumTitle: {}".format(*row)) + + +# [END spanner_postgresql_query_data] + + +# [START spanner_postgresql_read_data] +def read_data(instance_id, database_id): + """Reads sample data from the database.""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + with database.snapshot() as snapshot: + keyset = spanner.KeySet(all_=True) + results = snapshot.read( + table="Albums", columns=("SingerId", "AlbumId", "AlbumTitle"), keyset=keyset + ) + + for row in results: + print("SingerId: {}, AlbumId: {}, AlbumTitle: {}".format(*row)) + + +# [END spanner_postgresql_read_data] + + +# [START spanner_postgresql_add_column] +def add_column(instance_id, database_id): + """Adds a new column to the Albums table in the example database.""" + + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.UpdateDatabaseDdlRequest( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + statements=["ALTER TABLE Albums ADD COLUMN MarketingBudget BIGINT"], + ) + operation = 
database_admin_api.update_database_ddl(request)
+
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+    print("Added the MarketingBudget column.")
+
+
+# [END spanner_postgresql_add_column]
+
+
+# [START spanner_postgresql_update_data]
+def update_data(instance_id, database_id):
+    """Updates sample data in the database.
+
+    This updates the `MarketingBudget` column which must be created before
+    running this sample. You can add the column by running the `add_column`
+    sample or by running this DDL statement against your database:
+
+        ALTER TABLE Albums ADD COLUMN MarketingBudget BIGINT
+
+    """
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    with database.batch() as batch:
+        batch.update(
+            table="Albums",
+            columns=("SingerId", "AlbumId", "MarketingBudget"),
+            values=[(1, 1, 100000), (2, 2, 500000)],
+        )
+
+    print("Updated data.")
+
+
+# [END spanner_postgresql_update_data]
+
+
+# [START spanner_postgresql_read_write_transaction]
+def read_write_transaction(instance_id, database_id):
+    """Performs a read-write transaction to update two sample records in the
+    database.
+
+    This will transfer 200,000 from the `MarketingBudget` field for the second
+    Album to the first Album. If the `MarketingBudget` is too low, it will
+    raise an exception.
+
+    Before running this sample, you will need to run the `update_data` sample
+    to populate the fields.
+    """
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    def update_albums(transaction):
+        # Read the second album budget.
+        second_album_keyset = spanner.KeySet(keys=[(2, 2)])
+        second_album_result = transaction.read(
+            table="Albums",
+            columns=("MarketingBudget",),
+            keyset=second_album_keyset,
+            limit=1,
+        )
+        second_album_row = list(second_album_result)[0]
+        second_album_budget = second_album_row[0]
+
+        transfer_amount = 200000
+
+        if second_album_budget < transfer_amount:
+            # Raising an exception will automatically roll back the
+            # transaction.
+            raise ValueError("The second album doesn't have enough funds to transfer")
+
+        # Read the first album's budget.
+        first_album_keyset = spanner.KeySet(keys=[(1, 1)])
+        first_album_result = transaction.read(
+            table="Albums",
+            columns=("MarketingBudget",),
+            keyset=first_album_keyset,
+            limit=1,
+        )
+        first_album_row = list(first_album_result)[0]
+        first_album_budget = first_album_row[0]
+
+        # Update the budgets.
+        second_album_budget -= transfer_amount
+        first_album_budget += transfer_amount
+        print(
+            "Setting first album's budget to {} and the second album's "
+            "budget to {}.".format(first_album_budget, second_album_budget)
+        )
+
+        # Update the rows.
+        transaction.update(
+            table="Albums",
+            columns=("SingerId", "AlbumId", "MarketingBudget"),
+            values=[(1, 1, first_album_budget), (2, 2, second_album_budget)],
+        )
+
+    database.run_in_transaction(update_albums)
+
+    print("Transaction complete.")
+
+
+# [END spanner_postgresql_read_write_transaction]
+
+
+# [START spanner_postgresql_query_data_with_new_column]
+def query_data_with_new_column(instance_id, database_id):
+    """Queries sample data from the database using SQL.
+
+    This sample uses the `MarketingBudget` column.
You can add the column
+    by running the `add_column` sample or by running this DDL statement against
+    your database:
+
+        ALTER TABLE Albums ADD COLUMN MarketingBudget BIGINT
+    """
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    with database.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT SingerId, AlbumId, MarketingBudget FROM Albums"
+        )
+
+        for row in results:
+            print("SingerId: {}, AlbumId: {}, MarketingBudget: {}".format(*row))
+
+
+# [END spanner_postgresql_query_data_with_new_column]
+
+
+# [START spanner_postgresql_create_index]
+def add_index(instance_id, database_id):
+    """Adds a simple index to the example database."""
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    operation = database.update_ddl(
+        ["CREATE INDEX AlbumsByAlbumTitle ON Albums(AlbumTitle)"]
+    )
+
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+    print("Added the AlbumsByAlbumTitle index.")
+
+
+# [END spanner_postgresql_create_index]
+
+
+# [START spanner_postgresql_read_data_with_index]
+def read_data_with_index(instance_id, database_id):
+    """Reads sample data from the database using an index.
+
+    The index must exist before running this sample. You can add the index
+    by running the `add_index` sample or by running this DDL statement against
+    your database:
+
+        CREATE INDEX AlbumsByAlbumTitle ON Albums(AlbumTitle)
+
+    """
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    with database.snapshot() as snapshot:
+        keyset = spanner.KeySet(all_=True)
+        results = snapshot.read(
+            table="Albums",
+            columns=("AlbumId", "AlbumTitle"),
+            keyset=keyset,
+            index="AlbumsByAlbumTitle",
+        )
+
+        for row in results:
+            print("AlbumId: {}, AlbumTitle: {}".format(*row))
+
+
+# [END spanner_postgresql_read_data_with_index]
+
+
+# [START spanner_postgresql_create_storing_index]
+def add_storing_index(instance_id, database_id):
+    """Adds a storing index to the example database."""
+
+    from google.cloud.spanner_admin_database_v1.types import spanner_database_admin
+
+    spanner_client = spanner.Client()
+    database_admin_api = spanner_client.database_admin_api
+
+    request = spanner_database_admin.UpdateDatabaseDdlRequest(
+        database=database_admin_api.database_path(
+            spanner_client.project, instance_id, database_id
+        ),
+        statements=[
+            "CREATE INDEX AlbumsByAlbumTitle2 ON Albums(AlbumTitle) "
+            "INCLUDE (MarketingBudget)"
+        ],
+    )
+
+    operation = database_admin_api.update_database_ddl(request)
+
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+    print("Added the AlbumsByAlbumTitle2 index.")
+
+
+# [END spanner_postgresql_create_storing_index]
+
+
+# [START spanner_postgresql_read_data_with_storing_index]
+def read_data_with_storing_index(instance_id, database_id):
+    """Reads sample data from the database using an index with a storing
+    clause.
+
+    The index must exist before running this sample.
You can add the index
+    by running the `add_storing_index` sample or by running this DDL statement
+    against your database:
+
+        CREATE INDEX AlbumsByAlbumTitle2 ON Albums(AlbumTitle)
+        INCLUDE (MarketingBudget)
+
+    """
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    with database.snapshot() as snapshot:
+        keyset = spanner.KeySet(all_=True)
+        results = snapshot.read(
+            table="Albums",
+            columns=("AlbumId", "AlbumTitle", "MarketingBudget"),
+            keyset=keyset,
+            index="AlbumsByAlbumTitle2",
+        )
+
+        for row in results:
+            print("AlbumId: {}, AlbumTitle: {}, MarketingBudget: {}".format(*row))
+
+
+# [END spanner_postgresql_read_data_with_storing_index]
+
+
+# [START spanner_postgresql_read_only_transaction]
+def read_only_transaction(instance_id, database_id):
+    """Reads data inside of a read-only transaction.
+
+    Within the read-only transaction, or "snapshot", the application sees a
+    consistent view of the database at a particular timestamp.
+    """
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    with database.snapshot(multi_use=True) as snapshot:
+        # Read using SQL.
+        results = snapshot.execute_sql(
+            "SELECT SingerId, AlbumId, AlbumTitle FROM Albums"
+        )
+
+        print("Results from first read:")
+        for row in results:
+            print("SingerId: {}, AlbumId: {}, AlbumTitle: {}".format(*row))
+
+        # Perform another read using the `read` method. Even if the data
+        # is updated in-between the reads, the snapshot ensures that both
+        # return the same data.
+        keyset = spanner.KeySet(all_=True)
+        results = snapshot.read(
+            table="Albums", columns=("SingerId", "AlbumId", "AlbumTitle"), keyset=keyset
+        )
+
+        print("Results from second read:")
+        for row in results:
+            print("SingerId: {}, AlbumId: {}, AlbumTitle: {}".format(*row))
+
+
+# [END spanner_postgresql_read_only_transaction]
+
+
+def insert_with_dml(instance_id, database_id):
+    """Inserts data with a DML statement into the database."""
+    # [START spanner_postgresql_dml_getting_started_insert]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    def insert_singers(transaction):
+        row_ct = transaction.execute_update(
+            "INSERT INTO Singers (SingerId, FirstName, LastName) VALUES "
+            "(12, 'Melissa', 'Garcia'), "
+            "(13, 'Russell', 'Morales'), "
+            "(14, 'Jacqueline', 'Long'), "
+            "(15, 'Dylan', 'Shaw')"
+        )
+        print("{} record(s) inserted.".format(row_ct))
+
+    database.run_in_transaction(insert_singers)
+    # [END spanner_postgresql_dml_getting_started_insert]
+
+
+def insert_with_dml_returning(instance_id, database_id):
+    """Inserts sample data into the given database using a DML statement having a RETURNING clause."""
+    # [START spanner_postgresql_dml_insert_returning]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    # Insert records into the SINGERS table and return the
+    # generated column FullName of the inserted records using
+    # 'RETURNING FullName'.
+    # It is also possible to return all columns of all the
+    # inserted records by using 'RETURNING *'.
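+    # The returned result set is streamed: its DML statistics (such as
+    # `stats.row_count_exact`) are only populated once the stream has been
+    # fully consumed, which is why the rows are iterated before the record
+    # count is printed below.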
+ def insert_singers(transaction): + results = transaction.execute_sql( + "INSERT INTO Singers (SingerId, FirstName, LastName) VALUES " + "(21, 'Luann', 'Chizoba'), " + "(22, 'Denis', 'Patricio'), " + "(23, 'Felxi', 'Ronan'), " + "(24, 'Dominik', 'Martyna') " + "RETURNING FullName" + ) + for result in results: + print("FullName: {}".format(*result)) + print("{} record(s) inserted.".format(results.stats.row_count_exact)) + + database.run_in_transaction(insert_singers) + # [END spanner_postgresql_dml_insert_returning] + + +def query_data_with_parameter(instance_id, database_id): + """Queries sample data from the database using SQL with a parameter.""" + # [START spanner_postgresql_query_with_parameter] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT SingerId, FirstName, LastName FROM Singers " "WHERE LastName = $1", + params={"p1": "Garcia"}, + param_types={"p1": spanner.param_types.STRING}, + ) + + for row in results: + print("SingerId: {}, FirstName: {}, LastName: {}".format(*row)) + # [END spanner_postgresql_query_with_parameter] + + +def write_with_dml_transaction(instance_id, database_id): + """Transfers part of a marketing budget from one album to another.""" + # [START spanner_postgresql_dml_getting_started_update] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + def transfer_budget(transaction): + # Transfer marketing budget from one album to another. Performed in a + # single transaction to ensure that the transfer is atomic. + second_album_result = transaction.execute_sql( + "SELECT MarketingBudget from Albums " "WHERE SingerId = 2 and AlbumId = 2" + ) + second_album_row = list(second_album_result)[0] + second_album_budget = second_album_row[0] + + transfer_amount = 200000 + + # Transaction will only be committed if this condition still holds at + # the time of commit. Otherwise it will be aborted and the callable + # will be rerun by the client library + if second_album_budget >= transfer_amount: + first_album_result = transaction.execute_sql( + "SELECT MarketingBudget from Albums " + "WHERE SingerId = 1 and AlbumId = 1" + ) + first_album_row = list(first_album_result)[0] + first_album_budget = first_album_row[0] + + second_album_budget -= transfer_amount + first_album_budget += transfer_amount + + # Update first album + transaction.execute_update( + "UPDATE Albums " + "SET MarketingBudget = $1 " + "WHERE SingerId = 1 and AlbumId = 1", + params={"p1": first_album_budget}, + param_types={"p1": spanner.param_types.INT64}, + ) + + # Update second album + transaction.execute_update( + "UPDATE Albums " + "SET MarketingBudget = $1 " + "WHERE SingerId = 2 and AlbumId = 2", + params={"p1": second_album_budget}, + param_types={"p1": spanner.param_types.INT64}, + ) + + print( + "Transferred {} from Album2's budget to Album1's".format( + transfer_amount + ) + ) + + database.run_in_transaction(transfer_budget) + # [END spanner_postgresql_dml_getting_started_update] + + +# [START spanner_postgresql_read_stale_data] +def read_stale_data(instance_id, database_id): + """Reads sample data from the database. 
The data is exactly 15 seconds
+    stale."""
+    import datetime
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+    staleness = datetime.timedelta(seconds=15)
+
+    with database.snapshot(exact_staleness=staleness) as snapshot:
+        keyset = spanner.KeySet(all_=True)
+        results = snapshot.read(
+            table="Albums",
+            columns=("SingerId", "AlbumId", "MarketingBudget"),
+            keyset=keyset,
+        )
+
+        for row in results:
+            print("SingerId: {}, AlbumId: {}, MarketingBudget: {}".format(*row))
+
+
+# [END spanner_postgresql_read_stale_data]
+
+
+# [START spanner_postgresql_update_data_with_timestamp_column]
+def update_data_with_timestamp(instance_id, database_id):
+    """Updates the Albums table in the database with the COMMIT_TIMESTAMP
+    column.
+
+    This updates the `MarketingBudget` column which must be created before
+    running this sample. You can add the column by running the `add_column`
+    sample or by running this DDL statement against your database:
+
+        ALTER TABLE Albums ADD COLUMN MarketingBudget BIGINT
+
+    In addition, this update expects the LastUpdateTime column added by
+    applying this DDL statement against your database:
+
+        ALTER TABLE Albums ADD COLUMN LastUpdateTime SPANNER.COMMIT_TIMESTAMP
+    """
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+
+    database = instance.database(database_id)
+
+    with database.batch() as batch:
+        batch.update(
+            table="Albums",
+            columns=("SingerId", "AlbumId", "MarketingBudget", "LastUpdateTime"),
+            values=[
+                (1, 1, 1000000, spanner.COMMIT_TIMESTAMP),
+                (2, 2, 750000, spanner.COMMIT_TIMESTAMP),
+            ],
+        )
+
+    print("Updated data.")
+
+
+# [END spanner_postgresql_update_data_with_timestamp_column]
+
+
+# [START spanner_postgresql_add_timestamp_column]
+def add_timestamp_column(instance_id, database_id):
+    """Adds a new TIMESTAMP column to the Albums table in the example database."""
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+
+    database = instance.database(database_id)
+
+    operation = database.update_ddl(
+        ["ALTER TABLE Albums ADD COLUMN LastUpdateTime SPANNER.COMMIT_TIMESTAMP"]
+    )
+
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+    print(
+        'Altered table "Albums" on database {} on instance {}.'.format(
+            database_id, instance_id
+        )
+    )
+
+
+# [END spanner_postgresql_add_timestamp_column]
+
+
+# [START spanner_postgresql_query_data_with_timestamp_column]
+def query_data_with_timestamp(instance_id, database_id):
+    """Queries sample data from the database using SQL.
+
+    This query orders results by the `LastUpdateTime` column, which must be
+    added before running this sample.
You can add the column by running the
+    `add_timestamp_column` sample or by running this DDL statement
+    against your database:
+
+        ALTER TABLE Albums ADD COLUMN LastUpdateTime SPANNER.COMMIT_TIMESTAMP
+
+    """
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+
+    database = instance.database(database_id)
+
+    with database.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT SingerId, AlbumId, MarketingBudget FROM Albums "
+            "ORDER BY LastUpdateTime DESC"
+        )
+
+        for row in results:
+            print("SingerId: {}, AlbumId: {}, MarketingBudget: {}".format(*row))
+
+
+# [END spanner_postgresql_query_data_with_timestamp_column]
+
+
+# [START spanner_postgresql_create_table_with_timestamp_column]
+def create_table_with_timestamp(instance_id, database_id):
+    """Creates a table with a COMMIT_TIMESTAMP column."""
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    request = spanner_admin_database_v1.UpdateDatabaseDdlRequest(
+        database=database.name,
+        statements=[
+            """CREATE TABLE Performances (
+            SingerId BIGINT NOT NULL,
+            VenueId BIGINT NOT NULL,
+            EventDate Date,
+            Revenue BIGINT,
+            LastUpdateTime SPANNER.COMMIT_TIMESTAMP NOT NULL,
+            PRIMARY KEY (SingerId, VenueId, EventDate))
+            INTERLEAVE IN PARENT Singers ON DELETE CASCADE"""
+        ],
+    )
+    operation = spanner_client.database_admin_api.update_database_ddl(request)
+
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+    print(
+        "Created Performances table on database {} on instance {}".format(
+            database_id, instance_id
+        )
+    )
+
+
+# [END spanner_postgresql_create_table_with_timestamp_column]
+
+
+# [START spanner_postgresql_insert_data_with_timestamp_column]
+def insert_data_with_timestamp(instance_id, database_id):
+    """Inserts data with a COMMIT_TIMESTAMP field into a table."""
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+
+    database = instance.database(database_id)
+
+    with database.batch() as batch:
+        batch.insert(
+            table="Performances",
+            columns=("SingerId", "VenueId", "EventDate", "Revenue", "LastUpdateTime"),
+            values=[
+                (1, 4, "2017-10-05", 11000, spanner.COMMIT_TIMESTAMP),
+                (1, 19, "2017-11-02", 15000, spanner.COMMIT_TIMESTAMP),
+                (2, 42, "2017-12-23", 7000, spanner.COMMIT_TIMESTAMP),
+            ],
+        )
+
+    print("Inserted data.")
+
+
+# [END spanner_postgresql_insert_data_with_timestamp_column]
+
+
+def insert_data_with_dml(instance_id, database_id):
+    """Inserts sample data into the given database using a DML statement."""
+    # [START spanner_postgresql_dml_standard_insert]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    def insert_singers(transaction):
+        row_ct = transaction.execute_update(
+            "INSERT INTO Singers (SingerId, FirstName, LastName) "
+            " VALUES (10, 'Virginia', 'Watson')"
+        )
+
+        print("{} record(s) inserted.".format(row_ct))
+
+    database.run_in_transaction(insert_singers)
+    # [END spanner_postgresql_dml_standard_insert]
+
+
+def update_data_with_dml(instance_id, database_id):
+    """Updates sample data from the database using a DML statement."""
+    # [START spanner_postgresql_dml_standard_update]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    spanner_client = spanner.Client()
+
instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + def update_albums(transaction): + row_ct = transaction.execute_update( + "UPDATE Albums " + "SET MarketingBudget = MarketingBudget * 2 " + "WHERE SingerId = 1 and AlbumId = 1" + ) + + print("{} record(s) updated.".format(row_ct)) + + database.run_in_transaction(update_albums) + # [END spanner_postgresql_dml_standard_update] + + +def update_data_with_dml_returning(instance_id, database_id): + """Updates sample data from the database using a DML statement having a RETURNING clause.""" + # [START spanner_postgresql_dml_update_returning] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + # Update MarketingBudget column for records satisfying + # a particular condition and returns the modified + # MarketingBudget column of the updated records using + # 'RETURNING MarketingBudget'. + # It is also possible to return all columns of all the + # updated records by using 'RETURNING *'. + def update_albums(transaction): + results = transaction.execute_sql( + "UPDATE Albums " + "SET MarketingBudget = MarketingBudget * 2 " + "WHERE SingerId = 1 and AlbumId = 1 " + "RETURNING MarketingBudget" + ) + for result in results: + print("MarketingBudget: {}".format(*result)) + print("{} record(s) updated.".format(results.stats.row_count_exact)) + + database.run_in_transaction(update_albums) + # [END spanner_postgresql_dml_update_returning] + + +def delete_data_with_dml(instance_id, database_id): + """Deletes sample data from the database using a DML statement.""" + # [START spanner_postgresql_dml_standard_delete] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + def delete_singers(transaction): + row_ct = transaction.execute_update( + "DELETE FROM Singers WHERE FirstName = 'Alice'" + ) + + print("{} record(s) deleted.".format(row_ct)) + + database.run_in_transaction(delete_singers) + # [END spanner_postgresql_dml_standard_delete] + + +def delete_data_with_dml_returning(instance_id, database_id): + """Deletes sample data from the database using a DML statement having a RETURNING clause.""" + # [START spanner_postgresql_dml_delete_returning] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + # Delete records from SINGERS table satisfying a + # particular condition and returns the SingerId + # and FullName column of the deleted records using + # 'RETURNING SingerId, FullName'. + # It is also possible to return all columns of all the + # deleted records by using 'RETURNING *'. 
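+    # As in the other RETURNING samples, `results.stats.row_count_exact` is
+    # only populated after the result stream has been fully consumed by the
+    # loop below.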
+ def delete_singers(transaction): + results = transaction.execute_sql( + "DELETE FROM Singers WHERE FirstName = 'David' " + "RETURNING SingerId, FullName" + ) + for result in results: + print("SingerId: {}, FullName: {}".format(*result)) + print("{} record(s) deleted.".format(results.stats.row_count_exact)) + + database.run_in_transaction(delete_singers) + # [END spanner_postgresql_dml_delete_returning] + + +def dml_write_read_transaction(instance_id, database_id): + """First inserts data then reads it from within a transaction using DML.""" + # [START spanner_postgresql_dml_write_then_read] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + def write_then_read(transaction): + # Insert record. + row_ct = transaction.execute_update( + "INSERT INTO Singers (SingerId, FirstName, LastName) " + " VALUES (11, 'Timothy', 'Campbell')" + ) + print("{} record(s) inserted.".format(row_ct)) + + # Read newly inserted record. + results = transaction.execute_sql( + "SELECT FirstName, LastName FROM Singers WHERE SingerId = 11" + ) + for result in results: + print("FirstName: {}, LastName: {}".format(*result)) + + database.run_in_transaction(write_then_read) + # [END spanner_postgresql_dml_write_then_read] + + +def update_data_with_partitioned_dml(instance_id, database_id): + """Update sample data with a partitioned DML statement.""" + # [START spanner_postgresql_dml_partitioned_update] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + row_ct = database.execute_partitioned_dml( + "UPDATE Albums SET MarketingBudget = 100000 WHERE SingerId > 1" + ) + + print("{} records updated.".format(row_ct)) + # [END spanner_postgresql_dml_partitioned_update] + + +def delete_data_with_partitioned_dml(instance_id, database_id): + """Delete sample data with a partitioned DML statement.""" + # [START spanner_postgresql_dml_partitioned_delete] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + row_ct = database.execute_partitioned_dml("DELETE FROM Singers WHERE SingerId > 10") + + print("{} record(s) deleted.".format(row_ct)) + # [END spanner_postgresql_dml_partitioned_delete] + + +def update_with_batch_dml(instance_id, database_id): + """Updates sample data in the database using Batch DML.""" + # [START spanner_postgresql_dml_batch_update] + from google.rpc.code_pb2 import OK + + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + insert_statement = ( + "INSERT INTO Albums " + "(SingerId, AlbumId, AlbumTitle, MarketingBudget) " + "VALUES (1, 3, 'Test Album Title', 10000)" + ) + + update_statement = ( + "UPDATE Albums " + "SET MarketingBudget = MarketingBudget * 2 " + "WHERE SingerId = 1 and AlbumId = 3" + ) + + def update_albums(transaction): + status, row_cts = transaction.batch_update([insert_statement, update_statement]) + + if status.code != OK: + # Do handling here. + # Note: the exception will still be raised when + # `commit` is called by `run_in_transaction`. 
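+            # `row_cts` still contains an update count for each statement
+            # that executed successfully before the failure, so it can be
+            # inspected here to see how far the batch progressed.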
+ return + + print("Executed {} SQL statements using Batch DML.".format(len(row_cts))) + + database.run_in_transaction(update_albums) + # [END spanner_postgresql_dml_batch_update] + + +def create_table_with_datatypes(instance_id, database_id): + """Creates a table with supported datatypes.""" + # [START spanner_postgresql_create_table_with_datatypes] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.UpdateDatabaseDdlRequest( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + statements=[ + """CREATE TABLE Venues ( + VenueId BIGINT NOT NULL, + VenueName character varying(100), + VenueInfo BYTEA, + Capacity BIGINT, + OutdoorVenue BOOL, + PopularityScore FLOAT8, + Revenue NUMERIC, + LastUpdateTime SPANNER.COMMIT_TIMESTAMP NOT NULL, + PRIMARY KEY (VenueId))""" + ], + ) + operation = database_admin_api.update_database_ddl(request) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Created Venues table on database {} on instance {}".format( + database_id, instance_id + ) + ) + # [END spanner_postgresql_create_table_with_datatypes] + + +def insert_datatypes_data(instance_id, database_id): + """Inserts data with supported datatypes into a table.""" + # [START spanner_postgresql_insert_datatypes_data] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + exampleBytes1 = base64.b64encode("Hello World 1".encode()) + exampleBytes2 = base64.b64encode("Hello World 2".encode()) + exampleBytes3 = base64.b64encode("Hello World 3".encode()) + with database.batch() as batch: + batch.insert( + table="Venues", + columns=( + "VenueId", + "VenueName", + "VenueInfo", + "Capacity", + "OutdoorVenue", + "PopularityScore", + "Revenue", + "LastUpdateTime", + ), + values=[ + ( + 4, + "Venue 4", + exampleBytes1, + 1800, + False, + 0.85543, + decimal.Decimal("215100.10"), + spanner.COMMIT_TIMESTAMP, + ), + ( + 19, + "Venue 19", + exampleBytes2, + 6300, + True, + 0.98716, + decimal.Decimal("1200100.00"), + spanner.COMMIT_TIMESTAMP, + ), + ( + 42, + "Venue 42", + exampleBytes3, + 3000, + False, + 0.72598, + decimal.Decimal("390650.99"), + spanner.COMMIT_TIMESTAMP, + ), + ], + ) + + print("Inserted data.") + # [END spanner_postgresql_insert_datatypes_data] + + +def query_data_with_bool(instance_id, database_id): + """Queries sample data using SQL with a BOOL parameter.""" + # [START spanner_postgresql_query_with_bool_parameter] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + exampleBool = True + param = {"p1": exampleBool} + param_type = {"p1": param_types.BOOL} + + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT VenueId, VenueName, OutdoorVenue FROM Venues " + "WHERE OutdoorVenue = $1", + params=param, + param_types=param_type, + ) + + for row in results: + print("VenueId: {}, VenueName: {}, OutdoorVenue: {}".format(*row)) + # [END spanner_postgresql_query_with_bool_parameter] + + +def 
query_data_with_bytes(instance_id, database_id): + """Queries sample data using SQL with a BYTES parameter.""" + # [START spanner_postgresql_query_with_bytes_parameter] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + exampleBytes = base64.b64encode("Hello World 1".encode()) + param = {"p1": exampleBytes} + param_type = {"p1": param_types.BYTES} + + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT VenueId, VenueName FROM Venues " "WHERE VenueInfo = $1", + params=param, + param_types=param_type, + ) + + for row in results: + print("VenueId: {}, VenueName: {}".format(*row)) + # [END spanner_postgresql_query_with_bytes_parameter] + + +def query_data_with_float(instance_id, database_id): + """Queries sample data using SQL with a FLOAT8 parameter.""" + # [START spanner_postgresql_query_with_float_parameter] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + exampleFloat = 0.8 + param = {"p1": exampleFloat} + param_type = {"p1": param_types.FLOAT64} + + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT VenueId, VenueName, PopularityScore FROM Venues " + "WHERE PopularityScore > $1", + params=param, + param_types=param_type, + ) + + for row in results: + print("VenueId: {}, VenueName: {}, PopularityScore: {}".format(*row)) + # [END spanner_postgresql_query_with_float_parameter] + + +def query_data_with_int(instance_id, database_id): + """Queries sample data using SQL with a BIGINT parameter.""" + # [START spanner_postgresql_query_with_int_parameter] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + exampleInt = 3000 + param = {"p1": exampleInt} + param_type = {"p1": param_types.INT64} + + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT VenueId, VenueName, Capacity FROM Venues " "WHERE Capacity >= $1", + params=param, + param_types=param_type, + ) + + for row in results: + print("VenueId: {}, VenueName: {}, Capacity: {}".format(*row)) + # [END spanner_postgresql_query_with_int_parameter] + + +def query_data_with_string(instance_id, database_id): + """Queries sample data using SQL with a STRING parameter.""" + # [START spanner_postgresql_query_with_string_parameter] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + exampleString = "Venue 42" + param = {"p1": exampleString} + param_type = {"p1": param_types.STRING} + + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT VenueId, VenueName FROM Venues " "WHERE VenueName = $1", + params=param, + param_types=param_type, + ) + + for row in results: + print("VenueId: {}, VenueName: {}".format(*row)) + # [END spanner_postgresql_query_with_string_parameter] + + +def query_data_with_timestamp_parameter(instance_id, database_id): + """Queries sample data using SQL with a TIMESTAMPTZ parameter.""" + # [START spanner_postgresql_query_with_timestamp_parameter] + # instance_id = 
"your-spanner-instance" + # database_id = "your-spanner-db-id" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + example_timestamp = datetime.datetime.utcnow().isoformat() + "Z" + # [END spanner_postgresql_query_with_timestamp_parameter] + # Avoid time drift on the local machine. + # https://github.com/GoogleCloudPlatform/python-docs-samples/issues/4197. + example_timestamp = ( + datetime.datetime.utcnow() + datetime.timedelta(days=1) + ).isoformat() + "Z" + # [START spanner_postgresql_query_with_timestamp_parameter] + param = {"p1": example_timestamp} + param_type = {"p1": param_types.TIMESTAMP} + + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT VenueId, VenueName, LastUpdateTime FROM Venues " + "WHERE LastUpdateTime < $1", + params=param, + param_types=param_type, + ) + + for row in results: + print("VenueId: {}, VenueName: {}, LastUpdateTime: {}".format(*row)) + # [END spanner_postgresql_query_with_timestamp_parameter] + + +# [START spanner_postgresql_update_data_with_numeric_column] +def update_data_with_numeric(instance_id, database_id): + """Updates Venues tables in the database with the NUMERIC + column. + + This updates the `Revenue` column which must be created before + running this sample. You can add the column by running the + `add_numeric_column` sample or by running this DDL statement + against your database: + + ALTER TABLE Venues ADD COLUMN Revenue NUMERIC + """ + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + + database = instance.database(database_id) + + with database.batch() as batch: + batch.update( + table="Venues", + columns=("VenueId", "Revenue"), + values=[ + (4, decimal.Decimal("35000")), + (19, decimal.Decimal("104500")), + (42, decimal.Decimal("99999999999999999999999999999.99")), + ], + ) + + print("Updated data.") + + +# [END spanner_postgresql_update_data_with_numeric_column] + + +def query_data_with_numeric_parameter(instance_id, database_id): + """Queries sample data using SQL with a NUMERIC parameter.""" + # [START spanner_postgresql_query_with_numeric_parameter] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + example_numeric = decimal.Decimal("300000") + param = {"p1": example_numeric} + param_type = {"p1": param_types.PG_NUMERIC} + + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT VenueId, Revenue FROM Venues WHERE Revenue < $1", + params=param, + param_types=param_type, + ) + + for row in results: + print("VenueId: {}, Revenue: {}".format(*row)) + # [END spanner_postgresql_query_with_numeric_parameter] + + +def create_client_with_query_options(instance_id, database_id): + """Create a client with query options.""" + # [START spanner_postgresql_create_client_with_query_options] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + spanner_client = spanner.Client( + query_options={ + "optimizer_version": "1", + "optimizer_statistics_package": "latest", + } + ) + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT VenueId, VenueName, LastUpdateTime FROM Venues" + ) + + for row in results: + print("VenueId: {}, VenueName: {}, LastUpdateTime: 
{}".format(*row)) + # [END spanner_postgresql_create_client_with_query_options] + + +def query_data_with_query_options(instance_id, database_id): + """Queries sample data using SQL with query options.""" + # [START spanner_postgresql_query_with_query_options] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT VenueId, VenueName, LastUpdateTime FROM Venues", + query_options={ + "optimizer_version": "1", + "optimizer_statistics_package": "latest", + }, + ) + + for row in results: + print("VenueId: {}, VenueName: {}, LastUpdateTime: {}".format(*row)) + # [END spanner_postgresql_query_with_query_options] + + +# [START spanner_postgresql_jsonb_add_column] +def add_jsonb_column(instance_id, database_id): + """ + Alters Venues tables in the database adding a JSONB column. + You can create the table by running the `create_table_with_datatypes` + sample or by running this DDL statement against your database: + CREATE TABLE Venues ( + VenueId BIGINT NOT NULL, + VenueName character varying(100), + VenueInfo BYTEA, + Capacity BIGINT, + OutdoorVenue BOOL, + PopularityScore FLOAT8, + Revenue NUMERIC, + LastUpdateTime SPANNER.COMMIT_TIMESTAMP NOT NULL, + PRIMARY KEY (VenueId)) + """ + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.UpdateDatabaseDdlRequest( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + statements=["ALTER TABLE Venues ADD COLUMN VenueDetails JSONB"], + ) + + operation = database_admin_api.update_database_ddl(request) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + 'Altered table "Venues" on database {} on instance {}.'.format( + database_id, instance_id + ) + ) + + +# [END spanner_postgresql_jsonb_add_column] + + +# [START spanner_postgresql_jsonb_update_data] +def update_data_with_jsonb(instance_id, database_id): + """Updates Venues tables in the database with the JSONB + column. + This updates the `VenueDetails` column which must be created before + running this sample. You can add the column by running the + `add_jsonb_column` sample or by running this DDL statement + against your database: + ALTER TABLE Venues ADD COLUMN VenueDetails JSONB + """ + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + """ + PG JSONB takes the last value in the case of duplicate keys. + PG JSONB sorts first by key length and then lexicographically with + equivalent key length. 
+ """ + + with database.batch() as batch: + batch.update( + table="Venues", + columns=("VenueId", "VenueDetails"), + values=[ + ( + 4, + JsonObject( + [ + JsonObject({"name": None, "open": True}), + JsonObject({"name": "room 2", "open": False}), + ] + ), + ), + (19, JsonObject(rating=9, open=True)), + ( + 42, + JsonObject( + { + "name": None, + "open": {"Monday": True, "Tuesday": False}, + "tags": ["large", "airy"], + } + ), + ), + ], + ) + + print("Updated data.") + + +# [END spanner_postgresql_jsonb_update_data] + + +# [START spanner_postgresql_jsonb_query_parameter] +def query_data_with_jsonb_parameter(instance_id, database_id): + """Queries sample data using SQL with a JSONB parameter.""" + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + param = {"p1": 2} + param_type = {"p1": param_types.INT64} + + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT venueid, venuedetails FROM Venues" + + " WHERE CAST(venuedetails ->> 'rating' AS INTEGER) > $1", + params=param, + param_types=param_type, + ) + + for row in results: + print("VenueId: {}, VenueDetails: {}".format(*row)) + + +# [END spanner_postgresql_jsonb_query_parameter] + + +# [START spanner_postgresql_create_sequence] +def create_sequence(instance_id, database_id): + """Creates the Sequence and insert data""" + + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.UpdateDatabaseDdlRequest( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + statements=[ + "CREATE SEQUENCE Seq BIT_REVERSED_POSITIVE", + """CREATE TABLE Customers ( + CustomerId BIGINT DEFAULT nextval('Seq'), + CustomerName character varying(1024), + PRIMARY KEY (CustomerId) + )""", + ], + ) + operation = database_admin_api.update_database_ddl(request) + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Created Seq sequence and Customers table, where the key column CustomerId uses the sequence as a default value on database {} on instance {}".format( + database_id, instance_id + ) + ) + + def insert_customers(transaction): + results = transaction.execute_sql( + "INSERT INTO Customers (CustomerName) VALUES " + "('Alice'), " + "('David'), " + "('Marc') " + "RETURNING CustomerId" + ) + for result in results: + print("Inserted customer record with Customer Id: {}".format(*result)) + print( + "Number of customer records inserted is {}".format( + results.stats.row_count_exact + ) + ) + + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + database.run_in_transaction(insert_customers) + + +# [END spanner_postgresql_create_sequence] + + +# [START spanner_postgresql_alter_sequence] +def alter_sequence(instance_id, database_id): + """Alters the Sequence and insert data""" + + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.UpdateDatabaseDdlRequest( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + statements=["ALTER SEQUENCE Seq SKIP RANGE 1000 5000000"], + ) + operation 
= database_admin_api.update_database_ddl(request)
+
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+    print(
+        "Altered Seq sequence to skip an inclusive range between 1000 and 5000000 on database {} on instance {}".format(
+            database_id, instance_id
+        )
+    )
+
+    def insert_customers(transaction):
+        results = transaction.execute_sql(
+            "INSERT INTO Customers (CustomerName) VALUES "
+            "('Lea'), "
+            "('Cataline'), "
+            "('Smith') "
+            "RETURNING CustomerId"
+        )
+        for result in results:
+            print("Inserted customer record with Customer Id: {}".format(*result))
+        print(
+            "Number of customer records inserted is {}".format(
+                results.stats.row_count_exact
+            )
+        )
+
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    database.run_in_transaction(insert_customers)
+
+
+# [END spanner_postgresql_alter_sequence]
+
+
+# [START spanner_postgresql_drop_sequence]
+def drop_sequence(instance_id, database_id):
+    """Drops the sequence."""
+
+    from google.cloud.spanner_admin_database_v1.types import spanner_database_admin
+
+    spanner_client = spanner.Client()
+    database_admin_api = spanner_client.database_admin_api
+
+    request = spanner_database_admin.UpdateDatabaseDdlRequest(
+        database=database_admin_api.database_path(
+            spanner_client.project, instance_id, database_id
+        ),
+        statements=[
+            "ALTER TABLE Customers ALTER COLUMN CustomerId DROP DEFAULT",
+            "DROP SEQUENCE Seq",
+        ],
+    )
+    operation = database_admin_api.update_database_ddl(request)
+
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+    print(
+        "Altered Customers table to drop DEFAULT from CustomerId column and dropped the Seq sequence on database {} on instance {}".format(
+            database_id, instance_id
+        )
+    )
+
+
+# [END spanner_postgresql_drop_sequence]
+
+if __name__ == "__main__":  # noqa: C901
+    parser = argparse.ArgumentParser(
+        description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter
+    )
+    parser.add_argument("instance_id", help="Your Cloud Spanner instance ID.")
+    parser.add_argument(
+        "--database-id", help="Your Cloud Spanner database ID.", default="example_db"
+    )
+
+    subparsers = parser.add_subparsers(dest="command")
+    subparsers.add_parser("create_instance", help=create_instance.__doc__)
+    subparsers.add_parser("create_database", help=create_database.__doc__)
+    subparsers.add_parser("insert_data", help=insert_data.__doc__)
+    subparsers.add_parser("delete_data", help=delete_data.__doc__)
+    subparsers.add_parser("query_data", help=query_data.__doc__)
+    subparsers.add_parser("read_data", help=read_data.__doc__)
+    subparsers.add_parser("read_stale_data", help=read_stale_data.__doc__)
+    subparsers.add_parser("add_column", help=add_column.__doc__)
+    subparsers.add_parser("update_data", help=update_data.__doc__)
+    subparsers.add_parser(
+        "query_data_with_new_column", help=query_data_with_new_column.__doc__
+    )
+    subparsers.add_parser("read_write_transaction", help=read_write_transaction.__doc__)
+    subparsers.add_parser("read_only_transaction", help=read_only_transaction.__doc__)
+    subparsers.add_parser("add_index", help=add_index.__doc__)
+    subparsers.add_parser("read_data_with_index", help=read_data_with_index.__doc__)
+    subparsers.add_parser("add_storing_index", help=add_storing_index.__doc__)
+    subparsers.add_parser(
+        "read_data_with_storing_index", help=read_data_with_storing_index.__doc__
+    )
+    subparsers.add_parser(
+        "create_table_with_timestamp",
help=create_table_with_timestamp.__doc__
+    )
+    subparsers.add_parser(
+        "insert_data_with_timestamp", help=insert_data_with_timestamp.__doc__
+    )
+    subparsers.add_parser("add_timestamp_column", help=add_timestamp_column.__doc__)
+    subparsers.add_parser(
+        "update_data_with_timestamp", help=update_data_with_timestamp.__doc__
+    )
+    subparsers.add_parser(
+        "query_data_with_timestamp", help=query_data_with_timestamp.__doc__
+    )
+    subparsers.add_parser("insert_data_with_dml", help=insert_data_with_dml.__doc__)
+    subparsers.add_parser("update_data_with_dml", help=update_data_with_dml.__doc__)
+    subparsers.add_parser(
+        "update_data_with_dml_returning", help=update_data_with_dml_returning.__doc__
+    )
+    subparsers.add_parser("delete_data_with_dml", help=delete_data_with_dml.__doc__)
+    subparsers.add_parser(
+        "delete_data_with_dml_returning", help=delete_data_with_dml_returning.__doc__
+    )
+    subparsers.add_parser(
+        "dml_write_read_transaction", help=dml_write_read_transaction.__doc__
+    )
+    subparsers.add_parser("insert_with_dml", help=insert_with_dml.__doc__)
+    subparsers.add_parser(
+        "insert_with_dml_returning", help=insert_with_dml_returning.__doc__
+    )
+    subparsers.add_parser(
+        "query_data_with_parameter", help=query_data_with_parameter.__doc__
+    )
+    subparsers.add_parser(
+        "write_with_dml_transaction", help=write_with_dml_transaction.__doc__
+    )
+    subparsers.add_parser(
+        "update_data_with_partitioned_dml",
+        help=update_data_with_partitioned_dml.__doc__,
+    )
+    subparsers.add_parser(
+        "delete_data_with_partitioned_dml",
+        help=delete_data_with_partitioned_dml.__doc__,
+    )
+    subparsers.add_parser("update_with_batch_dml", help=update_with_batch_dml.__doc__)
+    subparsers.add_parser(
+        "create_table_with_datatypes", help=create_table_with_datatypes.__doc__
+    )
+    subparsers.add_parser("insert_datatypes_data", help=insert_datatypes_data.__doc__)
+    subparsers.add_parser("query_data_with_bool", help=query_data_with_bool.__doc__)
+    subparsers.add_parser("query_data_with_bytes", help=query_data_with_bytes.__doc__)
+    subparsers.add_parser("query_data_with_float", help=query_data_with_float.__doc__)
+    subparsers.add_parser("query_data_with_int", help=query_data_with_int.__doc__)
+    subparsers.add_parser("query_data_with_string", help=query_data_with_string.__doc__)
+    subparsers.add_parser(
+        "query_data_with_timestamp_parameter",
+        help=query_data_with_timestamp_parameter.__doc__,
+    )
+    subparsers.add_parser(
+        "update_data_with_numeric",
+        help=update_data_with_numeric.__doc__,
+    )
+    subparsers.add_parser(
+        "query_data_with_numeric_parameter",
+        help=query_data_with_numeric_parameter.__doc__,
+    )
+    subparsers.add_parser(
+        "query_data_with_query_options", help=query_data_with_query_options.__doc__
+    )
+    subparsers.add_parser(
+        "create_client_with_query_options",
+        help=create_client_with_query_options.__doc__,
+    )
+    subparsers.add_parser("create_sequence", help=create_sequence.__doc__)
+    subparsers.add_parser("alter_sequence", help=alter_sequence.__doc__)
+    subparsers.add_parser("drop_sequence", help=drop_sequence.__doc__)
+
+    args = parser.parse_args()
+
+    if args.command == "create_instance":
+        create_instance(args.instance_id)
+    elif args.command == "create_database":
+        create_database(args.instance_id, args.database_id)
+    elif args.command == "insert_data":
+        insert_data(args.instance_id, args.database_id)
+    elif args.command == "delete_data":
+        delete_data(args.instance_id, args.database_id)
+    elif args.command == "query_data":
+        query_data(args.instance_id, args.database_id)
+    elif args.command ==
"read_data": + read_data(args.instance_id, args.database_id) + elif args.command == "read_stale_data": + read_stale_data(args.instance_id, args.database_id) + elif args.command == "add_column": + add_column(args.instance_id, args.database_id) + elif args.command == "update_data": + update_data(args.instance_id, args.database_id) + elif args.command == "query_data_with_new_column": + query_data_with_new_column(args.instance_id, args.database_id) + elif args.command == "read_write_transaction": + read_write_transaction(args.instance_id, args.database_id) + elif args.command == "read_only_transaction": + read_only_transaction(args.instance_id, args.database_id) + elif args.command == "add_index": + add_index(args.instance_id, args.database_id) + elif args.command == "read_data_with_index": + read_data_with_index(args.instance_id, args.database_id) + elif args.command == "add_storing_index": + add_storing_index(args.instance_id, args.database_id) + elif args.command == "read_data_with_storing_index": + read_data_with_storing_index(args.instance_id, args.database_id) + elif args.command == "create_table_with_timestamp": + create_table_with_timestamp(args.instance_id, args.database_id) + elif args.command == "insert_data_with_timestamp": + insert_data_with_timestamp(args.instance_id, args.database_id) + elif args.command == "add_timestamp_column": + add_timestamp_column(args.instance_id, args.database_id) + elif args.command == "update_data_with_timestamp": + update_data_with_timestamp(args.instance_id, args.database_id) + elif args.command == "query_data_with_timestamp": + query_data_with_timestamp(args.instance_id, args.database_id) + elif args.command == "insert_data_with_dml": + insert_data_with_dml(args.instance_id, args.database_id) + elif args.command == "update_data_with_dml": + update_data_with_dml(args.instance_id, args.database_id) + elif args.command == "update_data_with_dml_returning": + update_data_with_dml_returning(args.instance_id, args.database_id) + elif args.command == "delete_data_with_dml": + delete_data_with_dml(args.instance_id, args.database_id) + elif args.command == "delete_data_with_dml_returning": + delete_data_with_dml_returning(args.instance_id, args.database_id) + elif args.command == "dml_write_read_transaction": + dml_write_read_transaction(args.instance_id, args.database_id) + elif args.command == "insert_with_dml": + insert_with_dml(args.instance_id, args.database_id) + elif args.command == "insert_with_dml_returning": + insert_with_dml_returning(args.instance_id, args.database_id) + elif args.command == "query_data_with_parameter": + query_data_with_parameter(args.instance_id, args.database_id) + elif args.command == "write_with_dml_transaction": + write_with_dml_transaction(args.instance_id, args.database_id) + elif args.command == "update_data_with_partitioned_dml": + update_data_with_partitioned_dml(args.instance_id, args.database_id) + elif args.command == "delete_data_with_partitioned_dml": + delete_data_with_partitioned_dml(args.instance_id, args.database_id) + elif args.command == "update_with_batch_dml": + update_with_batch_dml(args.instance_id, args.database_id) + elif args.command == "create_table_with_datatypes": + create_table_with_datatypes(args.instance_id, args.database_id) + elif args.command == "insert_datatypes_data": + insert_datatypes_data(args.instance_id, args.database_id) + elif args.command == "query_data_with_bool": + query_data_with_bool(args.instance_id, args.database_id) + elif args.command == "query_data_with_bytes": + 
query_data_with_bytes(args.instance_id, args.database_id)
+    elif args.command == "query_data_with_float":
+        query_data_with_float(args.instance_id, args.database_id)
+    elif args.command == "query_data_with_int":
+        query_data_with_int(args.instance_id, args.database_id)
+    elif args.command == "query_data_with_string":
+        query_data_with_string(args.instance_id, args.database_id)
+    elif args.command == "query_data_with_timestamp_parameter":
+        query_data_with_timestamp_parameter(args.instance_id, args.database_id)
+    elif args.command == "update_data_with_numeric":
+        update_data_with_numeric(args.instance_id, args.database_id)
+    elif args.command == "query_data_with_numeric_parameter":
+        query_data_with_numeric_parameter(args.instance_id, args.database_id)
+    elif args.command == "query_data_with_query_options":
+        query_data_with_query_options(args.instance_id, args.database_id)
+    elif args.command == "create_client_with_query_options":
+        create_client_with_query_options(args.instance_id, args.database_id)
+    elif args.command == "create_sequence":
+        create_sequence(args.instance_id, args.database_id)
+    elif args.command == "alter_sequence":
+        alter_sequence(args.instance_id, args.database_id)
+    elif args.command == "drop_sequence":
+        drop_sequence(args.instance_id, args.database_id)
diff --git a/samples/samples/pg_snippets_test.py b/samples/samples/pg_snippets_test.py
new file mode 100644
index 0000000000..1b5d2971c1
--- /dev/null
+++ b/samples/samples/pg_snippets_test.py
@@ -0,0 +1,514 @@
+# Copyright 2022 Google, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import time
+import uuid
+
+import pytest
+from google.api_core import exceptions
+from google.cloud.spanner_admin_database_v1.types.common import DatabaseDialect
+from test_utils.retry import RetryErrors
+
+import pg_snippets as snippets
+
+CREATE_TABLE_SINGERS = """\
+CREATE TABLE Singers (
+  SingerId   BIGINT NOT NULL,
+  FirstName  CHARACTER VARYING(1024),
+  LastName   CHARACTER VARYING(1024),
+  SingerInfo BYTEA,
+  FullName   CHARACTER VARYING(2048)
+    GENERATED ALWAYS AS (FirstName || ' ' || LastName) STORED,
+  PRIMARY KEY (SingerId)
+)
+"""
+
+CREATE_TABLE_ALBUMS = """\
+CREATE TABLE Albums (
+  SingerId   BIGINT NOT NULL,
+  AlbumId    BIGINT NOT NULL,
+  AlbumTitle CHARACTER VARYING(1024),
+  PRIMARY KEY (SingerId, AlbumId)
+  ) INTERLEAVE IN PARENT Singers ON DELETE CASCADE
+"""
+
+retry_429 = RetryErrors(exceptions.ResourceExhausted, delay=15)
+
+
+@pytest.fixture(scope="module")
+def sample_name():
+    return "pg_snippets"
+
+
+@pytest.fixture(scope="module")
+def database_dialect():
+    """Spanner dialect to be used for this sample.
+
+    The dialect is used to initialize the dialect for the database.
+    It can either be GoogleStandardSql or PostgreSql.
+    """
+    return DatabaseDialect.POSTGRESQL
+
+
+@pytest.fixture(scope="module")
+def create_instance_id():
+    """Id for the instance used by the create_instance test."""
+    return f"create-instance-{uuid.uuid4().hex[:10]}"
+
+
+@pytest.fixture(scope="module")
+def lci_instance_id():
+    """Id for the low-cost instance."""
+    return f"lci-instance-{uuid.uuid4().hex[:10]}"
+
+
+@pytest.fixture(scope="module")
+def database_id():
+    return f"test-db-{uuid.uuid4().hex[:10]}"
+
+
+@pytest.fixture(scope="module")
+def create_database_id():
+    return f"create-db-{uuid.uuid4().hex[:10]}"
+
+
+@pytest.fixture(scope="module")
+def cmek_database_id():
+    return f"cmek-db-{uuid.uuid4().hex[:10]}"
+
+
+@pytest.fixture(scope="module")
+def default_leader_database_id():
+    return f"leader_db_{uuid.uuid4().hex[:10]}"
+
+
+@pytest.fixture(scope="module")
+def database_ddl():
+    """Sequence of DDL statements used to set up the database.
+
+    Sample testcase modules can override as needed.
+    """
+    return [CREATE_TABLE_SINGERS, CREATE_TABLE_ALBUMS]
+
+
+@pytest.fixture(scope="module")
+def default_leader():
+    """Default leader for multi-region instances."""
+    return "us-east4"
+
+
+def test_create_instance_explicit(spanner_client, create_instance_id):
+    # Rather than re-use 'sample_instance', we create a new instance, to
+    # ensure that the 'create_instance' snippet is tested.
+    retry_429(snippets.create_instance)(create_instance_id)
+    instance = spanner_client.instance(create_instance_id)
+    retry_429(instance.delete)()
+
+
+def test_create_database_explicit(sample_instance, create_database_id):
+    # Rather than re-use 'sample_database', we create a new database, to
+    # ensure that the 'create_database' snippet is tested.
+    snippets.create_database(sample_instance.instance_id, create_database_id)
+    database = sample_instance.database(create_database_id)
+    database.drop()
+
+
+@pytest.mark.dependency(name="insert_data")
+def test_insert_data(capsys, instance_id, sample_database):
+    snippets.insert_data(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "Inserted data" in out
+
+
+@pytest.mark.dependency(depends=["insert_data"])
+def test_delete_data(capsys, instance_id, sample_database):
+    snippets.delete_data(instance_id, sample_database.database_id)
+    # put it back for other tests
+    snippets.insert_data(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "Deleted data" in out
+
+
+@pytest.mark.dependency(depends=["insert_data"])
+def test_query_data(capsys, instance_id, sample_database):
+    snippets.query_data(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "SingerId: 1, AlbumId: 1, AlbumTitle: Total Junk" in out
+
+
+@pytest.mark.dependency(name="add_column", depends=["insert_data"])
+def test_add_column(capsys, instance_id, sample_database):
+    snippets.add_column(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "Added the MarketingBudget column." in out
+
+
+@pytest.mark.dependency(depends=["insert_data"])
+def test_read_data(capsys, instance_id, sample_database):
+    snippets.read_data(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "SingerId: 1, AlbumId: 1, AlbumTitle: Total Junk" in out
+
+
+@pytest.mark.dependency(name="update_data", depends=["add_column"])
+def test_update_data(capsys, instance_id, sample_database):
+    # Sleep for 15 seconds to ensure previous inserts will be
+    # 'stale' by the time test_read_stale_data is run.
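+    # (read_stale_data reads with an exact_staleness of 15 seconds, so the
+    # stale read will still observe the rows as they were before this update.)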
+ time.sleep(15) + + snippets.update_data(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Updated data." in out + + +@pytest.mark.dependency(depends=["update_data"]) +def test_read_stale_data(capsys, instance_id, sample_database): + # This snippet relies on test_update_data inserting data + # at least 15 seconds after the previous insert + snippets.read_stale_data(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "SingerId: 1, AlbumId: 1, MarketingBudget: None" in out + + +@pytest.mark.dependency(depends=["add_column"]) +def test_read_write_transaction(capsys, instance_id, sample_database): + snippets.read_write_transaction(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Transaction complete" in out + + +@pytest.mark.dependency(depends=["add_column"]) +def test_query_data_with_new_column(capsys, instance_id, sample_database): + snippets.query_data_with_new_column(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "SingerId: 1, AlbumId: 1, MarketingBudget: 300000" in out + assert "SingerId: 2, AlbumId: 2, MarketingBudget: 300000" in out + + +@pytest.mark.dependency(name="add_index", depends=["insert_data"]) +def test_add_index(capsys, instance_id, sample_database): + snippets.add_index(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Added the AlbumsByAlbumTitle index" in out + + +@pytest.mark.dependency(depends=["add_index"]) +def test_read_data_with_index(capsys, instance_id, sample_database): + snippets.read_data_with_index(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Go, Go, Go" in out + assert "Forever Hold Your Peace" in out + assert "Green" in out + + +@pytest.mark.dependency(name="add_storing_index", depends=["insert_data"]) +def test_add_storing_index(capsys, instance_id, sample_database): + snippets.add_storing_index(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Added the AlbumsByAlbumTitle2 index." 
in out + + +@pytest.mark.dependency(depends=["add_storing_index"]) +def test_read_data_with_storing_index(capsys, instance_id, sample_database): + snippets.read_data_with_storing_index(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "300000" in out + + +@pytest.mark.dependency(depends=["insert_data"]) +def test_read_only_transaction(capsys, instance_id, sample_database): + snippets.read_only_transaction(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + # Snippet does two reads, so entry should be listed twice + assert out.count("SingerId: 1, AlbumId: 1, AlbumTitle: Total Junk") == 2 + + +@pytest.mark.dependency(name="add_timestamp_column", depends=["insert_data"]) +def test_add_timestamp_column(capsys, instance_id, sample_database): + snippets.add_timestamp_column(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert 'Altered table "Albums" on database ' in out + + +@pytest.mark.dependency(depends=["add_timestamp_column"]) +def test_update_data_with_timestamp(capsys, instance_id, sample_database): + snippets.update_data_with_timestamp(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Updated data" in out + + +@pytest.mark.dependency(depends=["add_timestamp_column"]) +def test_query_data_with_timestamp(capsys, instance_id, sample_database): + snippets.query_data_with_timestamp(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "SingerId: 1, AlbumId: 1, MarketingBudget: 1000000" in out + assert "SingerId: 2, AlbumId: 2, MarketingBudget: 750000" in out + + +@pytest.mark.dependency(name="create_table_with_timestamp") +def test_create_table_with_timestamp(capsys, instance_id, sample_database): + snippets.create_table_with_timestamp(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Created Performances table on database" in out + + +@pytest.mark.dependency(depends=["create_table_with_timestamp"]) +def test_insert_data_with_timestamp(capsys, instance_id, sample_database): + snippets.insert_data_with_timestamp(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Inserted data." in out + + +@pytest.mark.dependency(name="insert_data_with_dml") +def test_insert_data_with_dml(capsys, instance_id, sample_database): + snippets.insert_data_with_dml(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "1 record(s) inserted." in out + + +@pytest.mark.dependency(depends=["insert_data"]) +def test_update_data_with_dml(capsys, instance_id, sample_database): + snippets.update_data_with_dml(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "1 record(s) updated." in out + + +@pytest.mark.dependency(depends=["add_column"]) +def test_update_data_with_dml_returning(capsys, instance_id, sample_database): + snippets.update_data_with_dml_returning(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "1 record(s) updated." in out + + +@pytest.mark.dependency(depends=["insert_data"]) +def test_delete_data_with_dml(capsys, instance_id, sample_database): + snippets.delete_data_with_dml(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "1 record(s) deleted." 
in out
+
+
+@pytest.mark.dependency(depends=["insert_data"])
+def test_delete_data_with_dml_returning(capsys, instance_id, sample_database):
+    snippets.delete_data_with_dml_returning(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "1 record(s) deleted." in out
+
+
+@pytest.mark.dependency(name="dml_write_read_transaction")
+def test_dml_write_read_transaction(capsys, instance_id, sample_database):
+    snippets.dml_write_read_transaction(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "1 record(s) inserted." in out
+    assert "FirstName: Timothy, LastName: Campbell" in out
+
+
+@pytest.mark.dependency(name="insert_with_dml")
+def test_insert_with_dml(capsys, instance_id, sample_database):
+    snippets.insert_with_dml(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "4 record(s) inserted" in out
+
+
+@pytest.mark.dependency(name="insert_with_dml_returning")
+def test_insert_with_dml_returning(capsys, instance_id, sample_database):
+    snippets.insert_with_dml_returning(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "4 record(s) inserted" in out
+
+
+@pytest.mark.dependency(depends=["insert_with_dml"])
+def test_query_data_with_parameter(capsys, instance_id, sample_database):
+    snippets.query_data_with_parameter(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "SingerId: 12, FirstName: Melissa, LastName: Garcia" in out
+
+
+@pytest.mark.dependency(depends=["add_column"])
+def test_write_with_dml_transaction(capsys, instance_id, sample_database):
+    snippets.write_with_dml_transaction(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "Transferred 200000 from Album2's budget to Album1's" in out
+
+
+@pytest.mark.dependency(depends=["add_column"])
+def test_update_data_with_partitioned_dml(capsys, instance_id, sample_database):
+    snippets.update_data_with_partitioned_dml(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "3 record(s) updated" in out
+
+
+@pytest.mark.dependency(depends=["insert_with_dml", "insert_with_dml_returning"])
+def test_delete_data_with_partitioned_dml(capsys, instance_id, sample_database):
+    snippets.delete_data_with_partitioned_dml(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "9 record(s) deleted" in out
+
+
+@pytest.mark.dependency(depends=["add_column"])
+def test_update_with_batch_dml(capsys, instance_id, sample_database):
+    snippets.update_with_batch_dml(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "Executed 2 SQL statements using Batch DML" in out
+
+
+@pytest.mark.dependency(name="create_table_with_datatypes")
+def test_create_table_with_datatypes(capsys, instance_id, sample_database):
+    snippets.create_table_with_datatypes(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "Created Venues table on database" in out
+
+
+@pytest.mark.dependency(
+    name="insert_datatypes_data",
+    depends=["create_table_with_datatypes"],
+)
+def test_insert_datatypes_data(capsys, instance_id, sample_database):
+    snippets.insert_datatypes_data(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "Inserted data."
in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_query_data_with_bool(capsys, instance_id, sample_database): + snippets.query_data_with_bool(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "VenueId: 19, VenueName: Venue 19, OutdoorVenue: True" in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_query_data_with_bytes(capsys, instance_id, sample_database): + snippets.query_data_with_bytes(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "VenueId: 4, VenueName: Venue 4" in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_query_data_with_float(capsys, instance_id, sample_database): + snippets.query_data_with_float(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "VenueId: 4, VenueName: Venue 4, PopularityScore: 0.8" in out + assert "VenueId: 19, VenueName: Venue 19, PopularityScore: 0.9" in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_query_data_with_int(capsys, instance_id, sample_database): + snippets.query_data_with_int(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "VenueId: 19, VenueName: Venue 19, Capacity: 6300" in out + assert "VenueId: 42, VenueName: Venue 42, Capacity: 3000" in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_query_data_with_string(capsys, instance_id, sample_database): + snippets.query_data_with_string(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "VenueId: 42, VenueName: Venue 42" in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_update_data_with_numeric(capsys, instance_id, sample_database): + snippets.update_data_with_numeric(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Updated data" in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_query_data_with_numeric_parameter(capsys, instance_id, sample_database): + snippets.query_data_with_numeric_parameter(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "VenueId: 4, Revenue: 35000" in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_query_data_with_timestamp_parameter(capsys, instance_id, sample_database): + snippets.query_data_with_timestamp_parameter( + instance_id, sample_database.database_id + ) + out, _ = capsys.readouterr() + assert "VenueId: 4, VenueName: Venue 4, LastUpdateTime:" in out + assert "VenueId: 19, VenueName: Venue 19, LastUpdateTime:" in out + assert "VenueId: 42, VenueName: Venue 42, LastUpdateTime:" in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_query_data_with_query_options(capsys, instance_id, sample_database): + snippets.query_data_with_query_options(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "VenueId: 4, VenueName: Venue 4, LastUpdateTime:" in out + assert "VenueId: 19, VenueName: Venue 19, LastUpdateTime:" in out + assert "VenueId: 42, VenueName: Venue 42, LastUpdateTime:" in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_create_client_with_query_options(capsys, instance_id, sample_database): + snippets.create_client_with_query_options(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "VenueId: 4, VenueName: Venue 4, LastUpdateTime:" in out + assert "VenueId: 19, VenueName: Venue 19, 
LastUpdateTime:" in out + assert "VenueId: 42, VenueName: Venue 42, LastUpdateTime:" in out + + +@pytest.mark.dependency(name="add_jsonb_column", depends=["insert_datatypes_data"]) +def test_add_jsonb_column(capsys, instance_id, sample_database): + snippets.add_jsonb_column(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Waiting for operation to complete..." in out + assert 'Altered table "Venues" on database ' in out + + +@pytest.mark.dependency(name="update_data_with_jsonb", depends=["add_jsonb_column"]) +def test_update_data_with_jsonb(capsys, instance_id, sample_database): + snippets.update_data_with_jsonb(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Updated data." in out + + +@pytest.mark.dependency(depends=["update_data_with_jsonb"]) +def test_query_data_with_jsonb_parameter(capsys, instance_id, sample_database): + snippets.query_data_with_jsonb_parameter(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "VenueId: 19, VenueDetails: {'open': True, 'rating': 9}" in out + + +def test_create_sequence(capsys, instance_id, bit_reverse_sequence_database): + snippets.create_sequence(instance_id, bit_reverse_sequence_database.database_id) + out, _ = capsys.readouterr() + assert ( + "Created Seq sequence and Customers table, where the key column CustomerId uses the sequence as a default value on database" + in out + ) + assert "Number of customer records inserted is 3" in out + assert "Inserted customer record with Customer Id:" in out + + +@pytest.mark.dependency(depends=["create_sequence"]) +def test_alter_sequence(capsys, instance_id, bit_reverse_sequence_database): + snippets.alter_sequence(instance_id, bit_reverse_sequence_database.database_id) + out, _ = capsys.readouterr() + assert ( + "Altered Seq sequence to skip an inclusive range between 1000 and 5000000 on database" + in out + ) + assert "Number of customer records inserted is 3" in out + assert "Inserted customer record with Customer Id:" in out + + +@pytest.mark.dependency(depends=["alter_sequence"]) +def test_drop_sequence(capsys, instance_id, bit_reverse_sequence_database): + snippets.drop_sequence(instance_id, bit_reverse_sequence_database.database_id) + out, _ = capsys.readouterr() + assert ( + "Altered Customers table to drop DEFAULT from CustomerId column and dropped the Seq sequence on database" + in out + ) diff --git a/samples/samples/quickstart.py b/samples/samples/quickstart.py new file mode 100644 index 0000000000..f2d355d931 --- /dev/null +++ b/samples/samples/quickstart.py @@ -0,0 +1,47 @@ +#!/usr/bin/env python + +# Copyright 2016 Google Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + +def run_quickstart(instance_id, database_id): + # [START spanner_quickstart] + # Imports the Google Cloud Client Library. + from google.cloud import spanner + + # Your Cloud Spanner instance ID. + # instance_id = "my-instance-id" + # + # Your Cloud Spanner database ID. 
+    # database_id = "my-database-id"
+
+    # Instantiate a client.
+    spanner_client = spanner.Client()
+
+    # Get a Cloud Spanner instance by ID.
+    instance = spanner_client.instance(instance_id)
+
+    # Get a Cloud Spanner database by ID.
+    database = instance.database(database_id)
+
+    # Execute a simple SQL statement. The results are consumed inside the
+    # snapshot context so the session is still active while streaming rows.
+    with database.snapshot() as snapshot:
+        results = snapshot.execute_sql("SELECT 1")
+
+        for row in results:
+            print(row)
+    # [END spanner_quickstart]
+
+
+if __name__ == "__main__":
+    import sys
+
+    # Usage: python quickstart.py <instance-id> <database-id>
+    run_quickstart(sys.argv[1], sys.argv[2])
diff --git a/samples/samples/quickstart_test.py b/samples/samples/quickstart_test.py
new file mode 100644
index 0000000000..3726e7aef6
--- /dev/null
+++ b/samples/samples/quickstart_test.py
@@ -0,0 +1,29 @@
+# Copyright 2016 Google Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import pytest
+
+import quickstart
+
+
+@pytest.fixture(scope="module")
+def sample_name():
+    return "quickstart"
+
+
+def test_quickstart(capsys, instance_id, sample_database):
+    quickstart.run_quickstart(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+
+    assert "[1]" in out
diff --git a/samples/samples/requirements-test.txt b/samples/samples/requirements-test.txt
new file mode 100644
index 0000000000..921628caad
--- /dev/null
+++ b/samples/samples/requirements-test.txt
@@ -0,0 +1,4 @@
+pytest==8.4.1
+pytest-dependency==0.6.0
+mock==5.2.0
+google-cloud-testutils==1.6.4
diff --git a/samples/samples/requirements.txt b/samples/samples/requirements.txt
new file mode 100644
index 0000000000..58cf3064bb
--- /dev/null
+++ b/samples/samples/requirements.txt
@@ -0,0 +1,2 @@
+google-cloud-spanner==3.57.0
+futures==3.4.0; python_version < "3"
diff --git a/samples/samples/snippets.py b/samples/samples/snippets.py
new file mode 100644
index 0000000000..96d8fd3f89
--- /dev/null
+++ b/samples/samples/snippets.py
@@ -0,0 +1,4034 @@
+#!/usr/bin/env python
+
+# Copyright 2016 Google, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""This application demonstrates how to do basic operations using Cloud
+Spanner.
+
+For more information, see the README.rst under /spanner.
+""" + +import argparse +import base64 +import datetime +import decimal +import json +import logging +import time + +from google.cloud import spanner +from google.cloud.spanner_admin_instance_v1.types import spanner_instance_admin +from google.cloud.spanner_v1 import DirectedReadOptions, param_types +from google.cloud.spanner_v1.data_types import JsonObject +from google.protobuf import field_mask_pb2 # type: ignore +from google.protobuf import struct_pb2 # type: ignore + +from testdata import singer_pb2 + +OPERATION_TIMEOUT_SECONDS = 240 + + +# [START spanner_create_instance] +def create_instance(instance_id): + """Creates an instance.""" + from google.cloud.spanner_admin_instance_v1.types import spanner_instance_admin + + spanner_client = spanner.Client() + + config_name = "{}/instanceConfigs/regional-us-central1".format( + spanner_client.project_name + ) + + operation = spanner_client.instance_admin_api.create_instance( + parent=spanner_client.project_name, + instance_id=instance_id, + instance=spanner_instance_admin.Instance( + config=config_name, + display_name="This is a display name.", + node_count=1, + labels={ + "cloud_spanner_samples": "true", + "sample_name": "snippets-create_instance-explicit", + "created": str(int(time.time())), + }, + edition=spanner_instance_admin.Instance.Edition.STANDARD, # Optional + ), + ) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print("Created instance {}".format(instance_id)) + + +# [END spanner_create_instance] + + +# [START spanner_update_instance] +def update_instance(instance_id): + """Updates an instance.""" + from google.cloud.spanner_admin_instance_v1.types import spanner_instance_admin + + spanner_client = spanner.Client() + + name = "{}/instances/{}".format(spanner_client.project_name, instance_id) + + operation = spanner_client.instance_admin_api.update_instance( + instance=spanner_instance_admin.Instance( + name=name, + labels={ + "sample_name": "snippets-update_instance-explicit", + }, + edition=spanner_instance_admin.Instance.Edition.ENTERPRISE, # Optional + ), + field_mask=field_mask_pb2.FieldMask(paths=["labels", "edition"]), + ) + + print("Waiting for operation to complete...") + operation.result(900) + + print("Updated instance {}".format(instance_id)) + + +# [END spanner_update_instance] + + +# [START spanner_create_instance_with_processing_units] +def create_instance_with_processing_units(instance_id, processing_units): + """Creates an instance.""" + from google.cloud.spanner_admin_instance_v1.types import spanner_instance_admin + + spanner_client = spanner.Client() + + config_name = "{}/instanceConfigs/regional-us-central1".format( + spanner_client.project_name + ) + + request = spanner_instance_admin.CreateInstanceRequest( + parent=spanner_client.project_name, + instance_id=instance_id, + instance=spanner_instance_admin.Instance( + config=config_name, + display_name="This is a display name.", + processing_units=processing_units, + labels={ + "cloud_spanner_samples": "true", + "sample_name": "snippets-create_instance_with_processing_units", + "created": str(int(time.time())), + }, + edition=spanner_instance_admin.Instance.Edition.ENTERPRISE_PLUS, + ), + ) + + operation = spanner_client.instance_admin_api.create_instance(request=request) + + print("Waiting for operation to complete...") + instance = operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Created instance {} with {} processing units".format( + instance_id, instance.processing_units + ) + ) + + +# [END 
spanner_create_instance_with_processing_units] + + +# [START spanner_get_instance_config] +def get_instance_config(instance_config): + """Gets the leader options for the instance configuration.""" + spanner_client = spanner.Client() + config_name = "{}/instanceConfigs/{}".format( + spanner_client.project_name, instance_config + ) + config = spanner_client.instance_admin_api.get_instance_config(name=config_name) + print( + "Available leader options for instance config {}: {}".format( + instance_config, config.leader_options + ) + ) + + +# [END spanner_get_instance_config] + + +# [START spanner_list_instance_configs] +def list_instance_config(): + """Lists the available instance configurations.""" + from google.cloud.spanner_admin_instance_v1.types import spanner_instance_admin + + spanner_client = spanner.Client() + + request = spanner_instance_admin.ListInstanceConfigsRequest( + parent=spanner_client.project_name + ) + for config in spanner_client.instance_admin_api.list_instance_configs( + request=request + ): + print( + "Available leader options for instance config {}: {}".format( + config.name, config.leader_options + ) + ) + + +# [END spanner_list_instance_configs] + + +# [START spanner_create_instance_partition] +def create_instance_partition(instance_id, instance_partition_id): + """Creates an instance partition.""" + from google.cloud.spanner_admin_instance_v1.types import spanner_instance_admin + + spanner_client = spanner.Client() + instance_admin_api = spanner_client.instance_admin_api + + config_name = "{}/instanceConfigs/nam3".format(spanner_client.project_name) + + operation = spanner_client.instance_admin_api.create_instance_partition( + parent=instance_admin_api.instance_path(spanner_client.project, instance_id), + instance_partition_id=instance_partition_id, + instance_partition=spanner_instance_admin.InstancePartition( + config=config_name, + display_name="Test instance partition", + node_count=1, + ), + ) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print("Created instance partition {}".format(instance_partition_id)) + + +# [END spanner_create_instance_partition] + + +# [START spanner_list_databases] +def list_databases(instance_id): + """Lists databases and their leader options.""" + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.ListDatabasesRequest( + parent=database_admin_api.instance_path(spanner_client.project, instance_id) + ) + + for database in database_admin_api.list_databases(request=request): + print( + "Database {} has default leader {}".format( + database.name, database.default_leader + ) + ) + + +# [END spanner_list_databases] + + +# [START spanner_create_database] +def create_database(instance_id, database_id): + """Creates a database and tables for sample data.""" + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.CreateDatabaseRequest( + parent=database_admin_api.instance_path(spanner_client.project, instance_id), + create_statement=f"CREATE DATABASE `{database_id}`", + extra_statements=[ + """CREATE TABLE Singers ( + SingerId INT64 NOT NULL, + FirstName STRING(1024), + LastName STRING(1024), + SingerInfo BYTES(MAX), + FullName STRING(2048) AS ( + ARRAY_TO_STRING([FirstName, 
LastName], " ") + ) STORED + ) PRIMARY KEY (SingerId)""", + """CREATE TABLE Albums ( + SingerId INT64 NOT NULL, + AlbumId INT64 NOT NULL, + AlbumTitle STRING(MAX) + ) PRIMARY KEY (SingerId, AlbumId), + INTERLEAVE IN PARENT Singers ON DELETE CASCADE""", + ], + ) + + operation = database_admin_api.create_database(request=request) + + print("Waiting for operation to complete...") + database = operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Created database {} on instance {}".format( + database.name, + database_admin_api.instance_path(spanner_client.project, instance_id), + ) + ) + + +# [END spanner_create_database] + + +# [START spanner_update_database] +def update_database(instance_id, database_id): + """Updates the drop protection setting for a database.""" + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.UpdateDatabaseRequest( + database=spanner_database_admin.Database( + name=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + enable_drop_protection=True, + ), + update_mask={"paths": ["enable_drop_protection"]}, + ) + operation = database_admin_api.update_database(request=request) + print( + "Waiting for update operation for {}/databases/{} to complete...".format( + database_admin_api.instance_path(spanner_client.project, instance_id), + database_id, + ) + ) + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Updated database {}/databases/{}.".format( + database_admin_api.instance_path(spanner_client.project, instance_id), + database_id, + ) + ) + + +# [END spanner_update_database] + + +# [START spanner_create_database_with_encryption_key] +def create_database_with_encryption_key(instance_id, database_id, kms_key_name): + """Creates a database with tables using a Customer Managed Encryption Key (CMEK).""" + from google.cloud.spanner_admin_database_v1 import EncryptionConfig + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.CreateDatabaseRequest( + parent=database_admin_api.instance_path(spanner_client.project, instance_id), + create_statement=f"CREATE DATABASE `{database_id}`", + extra_statements=[ + """CREATE TABLE Singers ( + SingerId INT64 NOT NULL, + FirstName STRING(1024), + LastName STRING(1024), + SingerInfo BYTES(MAX) + ) PRIMARY KEY (SingerId)""", + """CREATE TABLE Albums ( + SingerId INT64 NOT NULL, + AlbumId INT64 NOT NULL, + AlbumTitle STRING(MAX) + ) PRIMARY KEY (SingerId, AlbumId), + INTERLEAVE IN PARENT Singers ON DELETE CASCADE""", + ], + encryption_config=EncryptionConfig(kms_key_name=kms_key_name), + ) + + operation = database_admin_api.create_database(request=request) + + print("Waiting for operation to complete...") + database = operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Database {} created with encryption key {}".format( + database.name, database.encryption_config.kms_key_name + ) + ) + + +# [END spanner_create_database_with_encryption_key] + + +# [START spanner_create_database_with_MR_CMEK] +def create_database_with_multiple_kms_keys(instance_id, database_id, kms_key_names): + """Creates a database with tables using multiple KMS keys(CMEK).""" + from google.cloud.spanner_admin_database_v1 import EncryptionConfig + from google.cloud.spanner_admin_database_v1.types import 
spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.CreateDatabaseRequest( + parent=database_admin_api.instance_path(spanner_client.project, instance_id), + create_statement=f"CREATE DATABASE `{database_id}`", + extra_statements=[ + """CREATE TABLE Singers ( + SingerId INT64 NOT NULL, + FirstName STRING(1024), + LastName STRING(1024), + SingerInfo BYTES(MAX) + ) PRIMARY KEY (SingerId)""", + """CREATE TABLE Albums ( + SingerId INT64 NOT NULL, + AlbumId INT64 NOT NULL, + AlbumTitle STRING(MAX) + ) PRIMARY KEY (SingerId, AlbumId), + INTERLEAVE IN PARENT Singers ON DELETE CASCADE""", + ], + encryption_config=EncryptionConfig(kms_key_names=kms_key_names), + ) + + operation = database_admin_api.create_database(request=request) + + print("Waiting for operation to complete...") + database = operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Database {} created with multiple KMS keys {}".format( + database.name, database.encryption_config.kms_key_names + ) + ) + + +# [END spanner_create_database_with_MR_CMEK] + + +# [START spanner_create_database_with_default_leader] +def create_database_with_default_leader(instance_id, database_id, default_leader): + """Creates a database with tables with a default leader.""" + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.CreateDatabaseRequest( + parent=database_admin_api.instance_path(spanner_client.project, instance_id), + create_statement=f"CREATE DATABASE `{database_id}`", + extra_statements=[ + """CREATE TABLE Singers ( + SingerId INT64 NOT NULL, + FirstName STRING(1024), + LastName STRING(1024), + SingerInfo BYTES(MAX) + ) PRIMARY KEY (SingerId)""", + """CREATE TABLE Albums ( + SingerId INT64 NOT NULL, + AlbumId INT64 NOT NULL, + AlbumTitle STRING(MAX) + ) PRIMARY KEY (SingerId, AlbumId), + INTERLEAVE IN PARENT Singers ON DELETE CASCADE""", + "ALTER DATABASE {}" + " SET OPTIONS (default_leader = '{}')".format(database_id, default_leader), + ], + ) + operation = database_admin_api.create_database(request=request) + + print("Waiting for operation to complete...") + database = operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Database {} created with default leader {}".format( + database.name, database.default_leader + ) + ) + + +# [END spanner_create_database_with_default_leader] + + +# [START spanner_update_database_with_default_leader] +def update_database_with_default_leader(instance_id, database_id, default_leader): + """Updates a database with tables with a default leader.""" + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.UpdateDatabaseDdlRequest( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + statements=[ + "ALTER DATABASE {}" + " SET OPTIONS (default_leader = '{}')".format(database_id, default_leader) + ], + ) + operation = database_admin_api.update_database_ddl(request) + + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Database {} updated with default leader {}".format(database_id, default_leader) + ) + + +# [END spanner_update_database_with_default_leader] + + +# [START spanner_get_database_ddl] +def get_database_ddl(instance_id, 
database_id): + """Gets the database DDL statements.""" + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + ddl = database_admin_api.get_database_ddl( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ) + ) + print("Retrieved database DDL for {}".format(database_id)) + for statement in ddl.statements: + print(statement) + + +# [END spanner_get_database_ddl] + + +# [START spanner_query_information_schema_database_options] +def query_information_schema_database_options(instance_id, database_id): + """Queries the default leader of a database.""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT OPTION_VALUE AS default_leader " + "FROM INFORMATION_SCHEMA.DATABASE_OPTIONS " + "WHERE SCHEMA_NAME = '' AND OPTION_NAME = 'default_leader'" + ) + for result in results: + print("Database {} has default leader {}".format(database_id, result[0])) + + +# [END spanner_query_information_schema_database_options] + + +# [START spanner_insert_data] +def insert_data(instance_id, database_id): + """Inserts sample data into the given database. + + The database and table must already exist and can be created using + `create_database`. + """ + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + with database.batch() as batch: + batch.insert( + table="Singers", + columns=("SingerId", "FirstName", "LastName"), + values=[ + (1, "Marc", "Richards"), + (2, "Catalina", "Smith"), + (3, "Alice", "Trentor"), + (4, "Lea", "Martin"), + (5, "David", "Lomond"), + ], + ) + + batch.insert( + table="Albums", + columns=("SingerId", "AlbumId", "AlbumTitle"), + values=[ + (1, 1, "Total Junk"), + (1, 2, "Go, Go, Go"), + (2, 1, "Green"), + (2, 2, "Forever Hold Your Peace"), + (2, 3, "Terrified"), + ], + ) + + print("Inserted data.") + + +# [END spanner_insert_data] + + +# [START spanner_batch_write_at_least_once] +def batch_write(instance_id, database_id): + """Inserts sample data into the given database via BatchWrite API. + + The database and table must already exist and can be created using + `create_database`. 
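+
+    Note that each mutation group in the batch commits atomically on its
+    own; separate groups are not atomic with respect to one another, which
+    is why the loop below checks the returned status for every group.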
+ """ + from google.rpc.code_pb2 import OK + + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + with database.mutation_groups() as groups: + group1 = groups.group() + group1.insert_or_update( + table="Singers", + columns=("SingerId", "FirstName", "LastName"), + values=[ + (16, "Scarlet", "Terry"), + ], + ) + + group2 = groups.group() + group2.insert_or_update( + table="Singers", + columns=("SingerId", "FirstName", "LastName"), + values=[ + (17, "Marc", ""), + (18, "Catalina", "Smith"), + ], + ) + group2.insert_or_update( + table="Albums", + columns=("SingerId", "AlbumId", "AlbumTitle"), + values=[ + (17, 1, "Total Junk"), + (18, 2, "Go, Go, Go"), + ], + ) + + for response in groups.batch_write(): + if response.status.code == OK: + print( + "Mutation group indexes {} have been applied with commit timestamp {}".format( + response.indexes, response.commit_timestamp + ) + ) + else: + print( + "Mutation group indexes {} could not be applied with error {}".format( + response.indexes, response.status + ) + ) + + +# [END spanner_batch_write_at_least_once] + + +# [START spanner_delete_data] +def delete_data(instance_id, database_id): + """Deletes sample data from the given database. + + The database, table, and data must already exist and can be created using + `create_database` and `insert_data`. + """ + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + # Delete individual rows + albums_to_delete = spanner.KeySet(keys=[[2, 1], [2, 3]]) + + # Delete a range of rows where the column key is >=3 and <5 + singers_range = spanner.KeyRange(start_closed=[3], end_open=[5]) + singers_to_delete = spanner.KeySet(ranges=[singers_range]) + + # Delete remaining Singers rows, which will also delete the remaining + # Albums rows because Albums was defined with ON DELETE CASCADE + remaining_singers = spanner.KeySet(all_=True) + + with database.batch() as batch: + batch.delete("Albums", albums_to_delete) + batch.delete("Singers", singers_to_delete) + batch.delete("Singers", remaining_singers) + + print("Deleted data.") + + +# [END spanner_delete_data] + + +# [START spanner_query_data] +def query_data(instance_id, database_id): + """Queries sample data from the database using SQL.""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT SingerId, AlbumId, AlbumTitle FROM Albums" + ) + + for row in results: + print("SingerId: {}, AlbumId: {}, AlbumTitle: {}".format(*row)) + + +# [END spanner_query_data] + + +# [START spanner_read_data] +def read_data(instance_id, database_id): + """Reads sample data from the database.""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + with database.snapshot() as snapshot: + keyset = spanner.KeySet(all_=True) + results = snapshot.read( + table="Albums", columns=("SingerId", "AlbumId", "AlbumTitle"), keyset=keyset + ) + + for row in results: + print("SingerId: {}, AlbumId: {}, AlbumTitle: {}".format(*row)) + + +# [END spanner_read_data] + + +# [START spanner_read_stale_data] +def read_stale_data(instance_id, database_id): + """Reads sample data from the database. 
The data is exactly 15 seconds + stale.""" + import datetime + + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + staleness = datetime.timedelta(seconds=15) + + with database.snapshot(exact_staleness=staleness) as snapshot: + keyset = spanner.KeySet(all_=True) + results = snapshot.read( + table="Albums", + columns=("SingerId", "AlbumId", "MarketingBudget"), + keyset=keyset, + ) + + for row in results: + print("SingerId: {}, AlbumId: {}, MarketingBudget: {}".format(*row)) + + +# [END spanner_read_stale_data] + + +# [START spanner_query_data_with_new_column] +def query_data_with_new_column(instance_id, database_id): + """Queries sample data from the database using SQL. + + This sample uses the `MarketingBudget` column. You can add the column + by running the `add_column` sample or by running this DDL statement against + your database: + + ALTER TABLE Albums ADD COLUMN MarketingBudget INT64 + """ + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT SingerId, AlbumId, MarketingBudget FROM Albums" + ) + + for row in results: + print("SingerId: {}, AlbumId: {}, MarketingBudget: {}".format(*row)) + + +# [END spanner_query_data_with_new_column] + + +# [START spanner_create_index] +def add_index(instance_id, database_id): + """Adds a simple index to the example database.""" + + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.UpdateDatabaseDdlRequest( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + statements=["CREATE INDEX AlbumsByAlbumTitle ON Albums(AlbumTitle)"], + ) + + operation = database_admin_api.update_database_ddl(request) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print("Added the AlbumsByAlbumTitle index.") + + +# [END spanner_create_index] + + +# [START spanner_query_data_with_index] +def query_data_with_index( + instance_id, database_id, start_title="Aardvark", end_title="Goo" +): + """Queries sample data from the database using SQL and an index. + + The index must exist before running this sample. You can add the index + by running the `add_index` sample or by running this DDL statement against + your database: + + CREATE INDEX AlbumsByAlbumTitle ON Albums(AlbumTitle) + + This sample also uses the `MarketingBudget` column. 
You can add the column + by running the `add_column` sample or by running this DDL statement against + your database: + + ALTER TABLE Albums ADD COLUMN MarketingBudget INT64 + + """ + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + params = {"start_title": start_title, "end_title": end_title} + param_types = { + "start_title": spanner.param_types.STRING, + "end_title": spanner.param_types.STRING, + } + + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT AlbumId, AlbumTitle, MarketingBudget " + "FROM Albums@{FORCE_INDEX=AlbumsByAlbumTitle} " + "WHERE AlbumTitle >= @start_title AND AlbumTitle < @end_title", + params=params, + param_types=param_types, + ) + + for row in results: + print("AlbumId: {}, AlbumTitle: {}, " "MarketingBudget: {}".format(*row)) + + +# [END spanner_query_data_with_index] + + +# [START spanner_read_data_with_index] +def read_data_with_index(instance_id, database_id): + """Reads sample data from the database using an index. + + The index must exist before running this sample. You can add the index + by running the `add_index` sample or by running this DDL statement against + your database: + + CREATE INDEX AlbumsByAlbumTitle ON Albums(AlbumTitle) + + """ + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + with database.snapshot() as snapshot: + keyset = spanner.KeySet(all_=True) + results = snapshot.read( + table="Albums", + columns=("AlbumId", "AlbumTitle"), + keyset=keyset, + index="AlbumsByAlbumTitle", + ) + + for row in results: + print("AlbumId: {}, AlbumTitle: {}".format(*row)) + + +# [END spanner_read_data_with_index] + + +# [START spanner_create_storing_index] +def add_storing_index(instance_id, database_id): + """Adds an storing index to the example database.""" + + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.UpdateDatabaseDdlRequest( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + statements=[ + "CREATE INDEX AlbumsByAlbumTitle2 ON Albums(AlbumTitle)" + "STORING (MarketingBudget)" + ], + ) + + operation = database_admin_api.update_database_ddl(request) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print("Added the AlbumsByAlbumTitle2 index.") + + +# [END spanner_create_storing_index] + + +# [START spanner_read_data_with_storing_index] +def read_data_with_storing_index(instance_id, database_id): + """Reads sample data from the database using an index with a storing + clause. + + The index must exist before running this sample. 
You can add the index + by running the `add_scoring_index` sample or by running this DDL statement + against your database: + + CREATE INDEX AlbumsByAlbumTitle2 ON Albums(AlbumTitle) + STORING (MarketingBudget) + + """ + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + with database.snapshot() as snapshot: + keyset = spanner.KeySet(all_=True) + results = snapshot.read( + table="Albums", + columns=("AlbumId", "AlbumTitle", "MarketingBudget"), + keyset=keyset, + index="AlbumsByAlbumTitle2", + ) + + for row in results: + print("AlbumId: {}, AlbumTitle: {}, " "MarketingBudget: {}".format(*row)) + + +# [END spanner_read_data_with_storing_index] + + +# [START spanner_add_column] +def add_column(instance_id, database_id): + """Adds a new column to the Albums table in the example database.""" + + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.UpdateDatabaseDdlRequest( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + statements=[ + "ALTER TABLE Albums ADD COLUMN MarketingBudget INT64", + ], + ) + + operation = database_admin_api.update_database_ddl(request) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + print("Added the MarketingBudget column.") + + +# [END spanner_add_column] + + +# [START spanner_update_data] +def update_data(instance_id, database_id): + """Updates sample data in the database. + + This updates the `MarketingBudget` column which must be created before + running this sample. You can add the column by running the `add_column` + sample or by running this DDL statement against your database: + + ALTER TABLE Albums ADD COLUMN MarketingBudget INT64 + + """ + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + with database.batch() as batch: + batch.update( + table="Albums", + columns=("SingerId", "AlbumId", "MarketingBudget"), + values=[(1, 1, 100000), (2, 2, 500000)], + ) + + print("Updated data.") + + +# [END spanner_update_data] + + +# [START spanner_read_write_transaction] +def read_write_transaction(instance_id, database_id): + """Performs a read-write transaction to update two sample records in the + database. + + This will transfer 200,000 from the `MarketingBudget` field for the second + Album to the first Album. If the `MarketingBudget` is too low, it will + raise an exception. + + Before running this sample, you will need to run the `update_data` sample + to populate the fields. + """ + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + def update_albums(transaction): + # Read the second album budget. + second_album_keyset = spanner.KeySet(keys=[(2, 2)]) + second_album_result = transaction.read( + table="Albums", + columns=("MarketingBudget",), + keyset=second_album_keyset, + limit=1, + ) + second_album_row = list(second_album_result)[0] + second_album_budget = second_album_row[0] + + transfer_amount = 200000 + + if second_album_budget < transfer_amount: + # Raising an exception will automatically roll back the + # transaction. + raise ValueError("The second album doesn't have enough funds to transfer") + + # Read the first album's budget. 
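+        # (Reads in a read-write transaction acquire locks, so the budgets
+        # read here cannot be changed by other transactions before this one
+        # commits.)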
+        first_album_keyset = spanner.KeySet(keys=[(1, 1)])
+        first_album_result = transaction.read(
+            table="Albums",
+            columns=("MarketingBudget",),
+            keyset=first_album_keyset,
+            limit=1,
+        )
+        first_album_row = list(first_album_result)[0]
+        first_album_budget = first_album_row[0]
+
+        # Update the budgets.
+        second_album_budget -= transfer_amount
+        first_album_budget += transfer_amount
+        print(
+            "Setting first album's budget to {} and the second album's "
+            "budget to {}.".format(first_album_budget, second_album_budget)
+        )
+
+        # Update the rows.
+        transaction.update(
+            table="Albums",
+            columns=("SingerId", "AlbumId", "MarketingBudget"),
+            values=[(1, 1, first_album_budget), (2, 2, second_album_budget)],
+        )
+
+    database.run_in_transaction(update_albums)
+
+    print("Transaction complete.")
+
+
+# [END spanner_read_write_transaction]
+
+
+# [START spanner_read_only_transaction]
+def read_only_transaction(instance_id, database_id):
+    """Reads data inside of a read-only transaction.
+
+    Within the read-only transaction, or "snapshot", the application sees a
+    consistent view of the database at a particular timestamp.
+    """
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    with database.snapshot(multi_use=True) as snapshot:
+        # Read using SQL.
+        results = snapshot.execute_sql(
+            "SELECT SingerId, AlbumId, AlbumTitle FROM Albums"
+        )
+
+        print("Results from first read:")
+        for row in results:
+            print("SingerId: {}, AlbumId: {}, AlbumTitle: {}".format(*row))
+
+        # Perform another read using the `read` method. Even if the data
+        # is updated in-between the reads, the snapshot ensures that both
+        # return the same data.
+        keyset = spanner.KeySet(all_=True)
+        results = snapshot.read(
+            table="Albums", columns=("SingerId", "AlbumId", "AlbumTitle"), keyset=keyset
+        )
+
+        print("Results from second read:")
+        for row in results:
+            print("SingerId: {}, AlbumId: {}, AlbumTitle: {}".format(*row))
+
+
+# [END spanner_read_only_transaction]
+
+
+# [START spanner_create_table_with_timestamp_column]
+def create_table_with_timestamp(instance_id, database_id):
+    """Creates a table with a COMMIT_TIMESTAMP column."""
+
+    from google.cloud.spanner_admin_database_v1.types import spanner_database_admin
+
+    spanner_client = spanner.Client()
+    database_admin_api = spanner_client.database_admin_api
+
+    request = spanner_database_admin.UpdateDatabaseDdlRequest(
+        database=database_admin_api.database_path(
+            spanner_client.project, instance_id, database_id
+        ),
+        statements=[
+            """CREATE TABLE Performances (
+            SingerId INT64 NOT NULL,
+            VenueId INT64 NOT NULL,
+            EventDate Date,
+            Revenue INT64,
+            LastUpdateTime TIMESTAMP NOT NULL
+            OPTIONS(allow_commit_timestamp=true)
+        ) PRIMARY KEY (SingerId, VenueId, EventDate),
+        INTERLEAVE IN PARENT Singers ON DELETE CASCADE"""
+        ],
+    )
+
+    operation = database_admin_api.update_database_ddl(request)
+
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+    print(
+        "Created Performances table on database {} on instance {}".format(
+            database_id, instance_id
+        )
+    )
+
+
+# [END spanner_create_table_with_timestamp_column]
+
+
+# [START spanner_insert_data_with_timestamp_column]
+def insert_data_with_timestamp(instance_id, database_id):
+    """Inserts data with a COMMIT_TIMESTAMP field into a table."""
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+
+    database = instance.database(database_id)
+
+    with database.batch() as batch:
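+        # spanner.COMMIT_TIMESTAMP is a placeholder value; Cloud Spanner
+        # replaces it with the actual commit timestamp when the mutations
+        # are committed.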
+        batch.insert(
+            table="Performances",
+            columns=("SingerId", "VenueId", "EventDate", "Revenue", "LastUpdateTime"),
+            values=[
+                (1, 4, "2017-10-05", 11000, spanner.COMMIT_TIMESTAMP),
+                (1, 19, "2017-11-02", 15000, spanner.COMMIT_TIMESTAMP),
+                (2, 42, "2017-12-23", 7000, spanner.COMMIT_TIMESTAMP),
+            ],
+        )
+
+    print("Inserted data.")
+
+
+# [END spanner_insert_data_with_timestamp_column]
+
+
+# [START spanner_add_timestamp_column]
+def add_timestamp_column(instance_id, database_id):
+    """Adds a new TIMESTAMP column to the Albums table in the example database."""
+
+    from google.cloud.spanner_admin_database_v1.types import spanner_database_admin
+
+    spanner_client = spanner.Client()
+    database_admin_api = spanner_client.database_admin_api
+
+    request = spanner_database_admin.UpdateDatabaseDdlRequest(
+        database=database_admin_api.database_path(
+            spanner_client.project, instance_id, database_id
+        ),
+        statements=[
+            "ALTER TABLE Albums ADD COLUMN LastUpdateTime TIMESTAMP "
+            "OPTIONS(allow_commit_timestamp=true)"
+        ],
+    )
+
+    operation = database_admin_api.update_database_ddl(request)
+
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+    print(
+        'Altered table "Albums" on database {} on instance {}.'.format(
+            database_id, instance_id
+        )
+    )
+
+
+# [END spanner_add_timestamp_column]
+
+
+# [START spanner_update_data_with_timestamp_column]
+def update_data_with_timestamp(instance_id, database_id):
+    """Updates the Albums table in the database with the COMMIT_TIMESTAMP
+    column.
+
+    This updates the `MarketingBudget` column which must be created before
+    running this sample. You can add the column by running the `add_column`
+    sample or by running this DDL statement against your database:
+
+        ALTER TABLE Albums ADD COLUMN MarketingBudget INT64
+
+    In addition this update expects the LastUpdateTime column added by
+    applying this DDL statement against your database:
+
+        ALTER TABLE Albums ADD COLUMN LastUpdateTime TIMESTAMP
+        OPTIONS(allow_commit_timestamp=true)
+    """
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+
+    database = instance.database(database_id)
+
+    with database.batch() as batch:
+        batch.update(
+            table="Albums",
+            columns=("SingerId", "AlbumId", "MarketingBudget", "LastUpdateTime"),
+            values=[
+                (1, 1, 1000000, spanner.COMMIT_TIMESTAMP),
+                (2, 2, 750000, spanner.COMMIT_TIMESTAMP),
+            ],
+        )
+
+    print("Updated data.")
+
+
+# [END spanner_update_data_with_timestamp_column]
+
+
+# [START spanner_query_data_with_timestamp_column]
+def query_data_with_timestamp(instance_id, database_id):
+    """Queries sample data from the database using SQL.
+
+    This queries rows ordered by the `LastUpdateTime` column, which must be
+    created before running this sample. You can add the column by running the
+    `add_timestamp_column` sample or by running this DDL statement
+    against your database:
+
+        ALTER TABLE Albums ADD COLUMN LastUpdateTime TIMESTAMP
+        OPTIONS (allow_commit_timestamp=true)
+
+    """
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+
+    database = instance.database(database_id)
+
+    with database.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT SingerId, AlbumId, MarketingBudget FROM Albums "
+            "ORDER BY LastUpdateTime DESC"
+        )
+
+        for row in results:
+            print("SingerId: {}, AlbumId: {}, MarketingBudget: {}".format(*row))
+
+
+# [END spanner_query_data_with_timestamp_column]
+
+
+# [START spanner_add_numeric_column]
+def add_numeric_column(instance_id, database_id):
+    """Adds a new NUMERIC column to the Venues table in the example database."""
+
+    from google.cloud.spanner_admin_database_v1.types import spanner_database_admin
+
+    spanner_client = spanner.Client()
+    database_admin_api = spanner_client.database_admin_api
+
+    request = spanner_database_admin.UpdateDatabaseDdlRequest(
+        database=database_admin_api.database_path(
+            spanner_client.project, instance_id, database_id
+        ),
+        statements=["ALTER TABLE Venues ADD COLUMN Revenue NUMERIC"],
+    )
+
+    operation = database_admin_api.update_database_ddl(request)
+
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+    print(
+        'Altered table "Venues" on database {} on instance {}.'.format(
+            database_id, instance_id
+        )
+    )
+
+
+# [END spanner_add_numeric_column]
+
+
+# [START spanner_update_data_with_numeric_column]
+def update_data_with_numeric(instance_id, database_id):
+    """Updates the Venues table in the database with the NUMERIC
+    column.
+
+    This updates the `Revenue` column which must be created before
+    running this sample. You can add the column by running the
+    `add_numeric_column` sample or by running this DDL statement
+    against your database:
+
+        ALTER TABLE Venues ADD COLUMN Revenue NUMERIC
+    """
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+
+    database = instance.database(database_id)
+
+    with database.batch() as batch:
+        batch.update(
+            table="Venues",
+            columns=("VenueId", "Revenue"),
+            values=[
+                (4, decimal.Decimal("35000")),
+                (19, decimal.Decimal("104500")),
+                (42, decimal.Decimal("99999999999999999999999999999.99")),
+            ],
+        )
+
+    print("Updated data.")
+
+
+# [END spanner_update_data_with_numeric_column]
+
+
+# [START spanner_add_json_column]
+def add_json_column(instance_id, database_id):
+    """Adds a new JSON column to the Venues table in the example database."""
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    from google.cloud.spanner_admin_database_v1.types import spanner_database_admin
+
+    spanner_client = spanner.Client()
+    database_admin_api = spanner_client.database_admin_api
+
+    request = spanner_database_admin.UpdateDatabaseDdlRequest(
+        database=database_admin_api.database_path(
+            spanner_client.project, instance_id, database_id
+        ),
+        statements=["ALTER TABLE Venues ADD COLUMN VenueDetails JSON"],
+    )
+
+    operation = database_admin_api.update_database_ddl(request)
+
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+    print(
+        'Altered table "Venues" on database {} on instance {}.'.format(
+            database_id, instance_id
+        )
+    )
+
+
+# [END spanner_add_json_column]
+
+
+# [START spanner_update_data_with_json_column]
+def update_data_with_json(instance_id, database_id):
+    """Updates the Venues table in the database with the JSON
+    column.
+
+    This updates the `VenueDetails` column which must be created before
+    running this sample. You can add the column by running the
+    `add_json_column` sample or by running this DDL statement
+    against your database:
+
+        ALTER TABLE Venues ADD COLUMN VenueDetails JSON
+    """
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+
+    database = instance.database(database_id)
+
+    with database.batch() as batch:
+        batch.update(
+            table="Venues",
+            columns=("VenueId", "VenueDetails"),
+            values=[
+                (
+                    4,
+                    JsonObject(
+                        [
+                            JsonObject({"name": "room 1", "open": True}),
+                            JsonObject({"name": "room 2", "open": False}),
+                        ]
+                    ),
+                ),
+                (19, JsonObject(rating=9, open=True)),
+                (
+                    42,
+                    JsonObject(
+                        {
+                            "name": None,
+                            "open": {"Monday": True, "Tuesday": False},
+                            "tags": ["large", "airy"],
+                        }
+                    ),
+                ),
+            ],
+        )
+
+    print("Updated data.")
+
+
+# [END spanner_update_data_with_json_column]
+
+
+# [START spanner_write_data_for_struct_queries]
+def write_struct_data(instance_id, database_id):
+    """Inserts sample data that can be used to test STRUCT parameters
+    in queries.
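+
+    The rows inserted here (SingerIds 6-9) are the ones matched by the
+    STRUCT query samples that follow.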
+ """ + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + with database.batch() as batch: + batch.insert( + table="Singers", + columns=("SingerId", "FirstName", "LastName"), + values=[ + (6, "Elena", "Campbell"), + (7, "Gabriel", "Wright"), + (8, "Benjamin", "Martinez"), + (9, "Hannah", "Harris"), + ], + ) + + print("Inserted sample data for STRUCT queries") + + +# [END spanner_write_data_for_struct_queries] + + +def query_with_struct(instance_id, database_id): + """Query a table using STRUCT parameters.""" + # [START spanner_create_struct_with_data] + record_type = param_types.Struct( + [ + param_types.StructField("FirstName", param_types.STRING), + param_types.StructField("LastName", param_types.STRING), + ] + ) + record_value = ("Elena", "Campbell") + # [END spanner_create_struct_with_data] + + # [START spanner_query_data_with_struct] + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + + database = instance.database(database_id) + + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT SingerId FROM Singers WHERE " "(FirstName, LastName) = @name", + params={"name": record_value}, + param_types={"name": record_type}, + ) + + for row in results: + print("SingerId: {}".format(*row)) + # [END spanner_query_data_with_struct] + + +def query_with_array_of_struct(instance_id, database_id): + """Query a table using an array of STRUCT parameters.""" + # [START spanner_create_user_defined_struct] + name_type = param_types.Struct( + [ + param_types.StructField("FirstName", param_types.STRING), + param_types.StructField("LastName", param_types.STRING), + ] + ) + # [END spanner_create_user_defined_struct] + + # [START spanner_create_array_of_struct_with_data] + band_members = [ + ("Elena", "Campbell"), + ("Gabriel", "Wright"), + ("Benjamin", "Martinez"), + ] + # [END spanner_create_array_of_struct_with_data] + + # [START spanner_query_data_with_array_of_struct] + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT SingerId FROM Singers WHERE " + "STRUCT" + "(FirstName, LastName) IN UNNEST(@names)", + params={"names": band_members}, + param_types={"names": param_types.Array(name_type)}, + ) + + for row in results: + print("SingerId: {}".format(*row)) + # [END spanner_query_data_with_array_of_struct] + + +# [START spanner_field_access_on_struct_parameters] +def query_struct_field(instance_id, database_id): + """Query a table using field access on a STRUCT parameter.""" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + name_type = param_types.Struct( + [ + param_types.StructField("FirstName", param_types.STRING), + param_types.StructField("LastName", param_types.STRING), + ] + ) + + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT SingerId FROM Singers " "WHERE FirstName = @name.FirstName", + params={"name": ("Elena", "Campbell")}, + param_types={"name": name_type}, + ) + + for row in results: + print("SingerId: {}".format(*row)) + + +# [END spanner_field_access_on_struct_parameters] + + +# [START spanner_field_access_on_nested_struct_parameters] +def query_nested_struct_field(instance_id, database_id): + """Query a table using nested field access on a STRUCT parameter.""" + 
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    song_info_type = param_types.Struct(
+        [
+            param_types.StructField("SongName", param_types.STRING),
+            param_types.StructField(
+                "ArtistNames",
+                param_types.Array(
+                    param_types.Struct(
+                        [
+                            param_types.StructField("FirstName", param_types.STRING),
+                            param_types.StructField("LastName", param_types.STRING),
+                        ]
+                    )
+                ),
+            ),
+        ]
+    )
+
+    song_info = ("Imagination", [("Elena", "Campbell"), ("Hannah", "Harris")])
+
+    with database.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT SingerId, @song_info.SongName "
+            "FROM Singers WHERE "
+            "STRUCT(FirstName, LastName) "
+            "IN UNNEST(@song_info.ArtistNames)",
+            params={"song_info": song_info},
+            param_types={"song_info": song_info_type},
+        )
+
+        for row in results:
+            print("SingerId: {} SongName: {}".format(*row))
+
+
+# [END spanner_field_access_on_nested_struct_parameters]
+
+
+def insert_data_with_dml(instance_id, database_id):
+    """Inserts sample data into the given database using a DML statement."""
+    # [START spanner_dml_standard_insert]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    def insert_singers(transaction):
+        row_ct = transaction.execute_update(
+            "INSERT INTO Singers (SingerId, FirstName, LastName) "
+            " VALUES (10, 'Virginia', 'Watson')"
+        )
+
+        print("{} record(s) inserted.".format(row_ct))
+
+    database.run_in_transaction(insert_singers)
+    # [END spanner_dml_standard_insert]
+
+
+# [START spanner_get_commit_stats]
+def log_commit_stats(instance_id, database_id):
+    """Inserts sample data using DML and displays the commit statistics."""
+
+    # By default, commit statistics are logged via stdout at level Info.
+    # This sample uses a custom logger to access the commit statistics.
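+    # The client library reports commit statistics by calling logger.info()
+    # with extra={"commit_stats": ...}, so overriding info() is enough to
+    # capture the last reported value.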
+    class CommitStatsSampleLogger(logging.Logger):
+        def __init__(self):
+            self.last_commit_stats = None
+            super().__init__("commit_stats_sample")
+
+        def info(self, msg, *args, **kwargs):
+            if (
+                "extra" in kwargs
+                and kwargs["extra"]
+                and "commit_stats" in kwargs["extra"]
+            ):
+                self.last_commit_stats = kwargs["extra"]["commit_stats"]
+            super().info(msg, *args, **kwargs)
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id, logger=CommitStatsSampleLogger())
+    database.log_commit_stats = True
+
+    def insert_singers(transaction):
+        row_ct = transaction.execute_update(
+            "INSERT Singers (SingerId, FirstName, LastName) "
+            " VALUES (110, 'Virginia', 'Watson')"
+        )
+
+        print("{} record(s) inserted.".format(row_ct))
+
+    database.run_in_transaction(insert_singers)
+    commit_stats = database.logger.last_commit_stats
+    print("{} mutation(s) in transaction.".format(commit_stats.mutation_count))
+
+
+# [END spanner_get_commit_stats]
+
+
+def set_max_commit_delay(instance_id, database_id):
+    """Inserts sample data and sets a max commit delay."""
+    # [START spanner_set_max_commit_delay]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    def insert_singers(transaction):
+        row_ct = transaction.execute_update(
+            "INSERT Singers (SingerId, FirstName, LastName) "
+            " VALUES (111, 'Grace', 'Bennis')"
+        )
+
+        print("{} record(s) inserted.".format(row_ct))
+
+    database.run_in_transaction(
+        insert_singers, max_commit_delay=datetime.timedelta(milliseconds=100)
+    )
+    # [END spanner_set_max_commit_delay]
+
+
+def update_data_with_dml(instance_id, database_id):
+    """Updates sample data from the database using a DML statement."""
+    # [START spanner_dml_standard_update]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    def update_albums(transaction):
+        row_ct = transaction.execute_update(
+            "UPDATE Albums "
+            "SET MarketingBudget = MarketingBudget * 2 "
+            "WHERE SingerId = 1 and AlbumId = 1"
+        )
+
+        print("{} record(s) updated.".format(row_ct))
+
+    database.run_in_transaction(update_albums)
+    # [END spanner_dml_standard_update]
+
+
+def update_data_with_dml_returning(instance_id, database_id):
+    """Updates sample data from the database using a DML statement having a THEN RETURN clause."""
+    # [START spanner_dml_update_returning]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    # Update the MarketingBudget column for records satisfying a particular
+    # condition and return the modified MarketingBudget column of the updated
+    # records using 'THEN RETURN MarketingBudget'.
+    # It is also possible to return all columns of all the
+    # updated records by using 'THEN RETURN *'.
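+    # Note that execute_sql (rather than execute_update) is used because a
+    # THEN RETURN clause produces a result set, and results.stats is only
+    # populated once the results have been consumed.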
+    def update_albums(transaction):
+        results = transaction.execute_sql(
+            "UPDATE Albums "
+            "SET MarketingBudget = MarketingBudget * 2 "
+            "WHERE SingerId = 1 and AlbumId = 1 "
+            "THEN RETURN MarketingBudget"
+        )
+        for result in results:
+            print("MarketingBudget: {}".format(*result))
+        print("{} record(s) updated.".format(results.stats.row_count_exact))
+
+    database.run_in_transaction(update_albums)
+    # [END spanner_dml_update_returning]
+
+
+def delete_data_with_dml(instance_id, database_id):
+    """Deletes sample data from the database using a DML statement."""
+    # [START spanner_dml_standard_delete]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    def delete_singers(transaction):
+        row_ct = transaction.execute_update(
+            "DELETE FROM Singers WHERE FirstName = 'Alice'"
+        )
+
+        print("{} record(s) deleted.".format(row_ct))
+
+    database.run_in_transaction(delete_singers)
+    # [END spanner_dml_standard_delete]
+
+
+def delete_data_with_dml_returning(instance_id, database_id):
+    """Deletes sample data from the database using a DML statement having a THEN RETURN clause."""
+    # [START spanner_dml_delete_returning]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    # Delete records from the SINGERS table satisfying a particular condition
+    # and return the SingerId and FullName columns of the deleted records
+    # using 'THEN RETURN SingerId, FullName'.
+    # It is also possible to return all columns of all the
+    # deleted records by using 'THEN RETURN *'.
+    def delete_singers(transaction):
+        results = transaction.execute_sql(
+            "DELETE FROM Singers WHERE FirstName = 'David' "
+            "THEN RETURN SingerId, FullName"
+        )
+        for result in results:
+            print("SingerId: {}, FullName: {}".format(*result))
+        print("{} record(s) deleted.".format(results.stats.row_count_exact))
+
+    database.run_in_transaction(delete_singers)
+    # [END spanner_dml_delete_returning]
+
+
+def update_data_with_dml_timestamp(instance_id, database_id):
+    """Updates data in the database with a pending commit timestamp using a DML statement."""
+    # [START spanner_dml_standard_update_with_timestamp]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    def update_albums(transaction):
+        row_ct = transaction.execute_update(
+            "UPDATE Albums "
+            "SET LastUpdateTime = PENDING_COMMIT_TIMESTAMP() "
+            "WHERE SingerId = 1"
+        )
+
+        print("{} record(s) updated.".format(row_ct))
+
+    database.run_in_transaction(update_albums)
+    # [END spanner_dml_standard_update_with_timestamp]
+
+
+def dml_write_read_transaction(instance_id, database_id):
+    """First inserts data then reads it from within a transaction using DML."""
+    # [START spanner_dml_write_then_read]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    def write_then_read(transaction):
+        # Insert record.
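+        # (DML executed earlier in the same transaction is visible to later
+        # statements, so the SELECT below already sees the new row.)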
+        row_ct = transaction.execute_update(
+            "INSERT INTO Singers (SingerId, FirstName, LastName) "
+            " VALUES (11, 'Timothy', 'Campbell')"
+        )
+        print("{} record(s) inserted.".format(row_ct))
+
+        # Read newly inserted record.
+        results = transaction.execute_sql(
+            "SELECT FirstName, LastName FROM Singers WHERE SingerId = 11"
+        )
+        for result in results:
+            print("FirstName: {}, LastName: {}".format(*result))
+
+    database.run_in_transaction(write_then_read)
+    # [END spanner_dml_write_then_read]
+
+
+def update_data_with_dml_struct(instance_id, database_id):
+    """Updates data with a DML statement and STRUCT parameters."""
+    # [START spanner_dml_structs]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    record_type = param_types.Struct(
+        [
+            param_types.StructField("FirstName", param_types.STRING),
+            param_types.StructField("LastName", param_types.STRING),
+        ]
+    )
+    record_value = ("Timothy", "Campbell")
+
+    def write_with_struct(transaction):
+        row_ct = transaction.execute_update(
+            "UPDATE Singers SET LastName = 'Grant' "
+            "WHERE STRUCT(FirstName, LastName) = @name",
+            params={"name": record_value},
+            param_types={"name": record_type},
+        )
+        print("{} record(s) updated.".format(row_ct))
+
+    database.run_in_transaction(write_with_struct)
+    # [END spanner_dml_structs]
+
+
+def insert_with_dml(instance_id, database_id):
+    """Inserts data with a DML statement into the database."""
+    # [START spanner_dml_getting_started_insert]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    def insert_singers(transaction):
+        row_ct = transaction.execute_update(
+            "INSERT INTO Singers (SingerId, FirstName, LastName) VALUES "
+            "(12, 'Melissa', 'Garcia'), "
+            "(13, 'Russell', 'Morales'), "
+            "(14, 'Jacqueline', 'Long'), "
+            "(15, 'Dylan', 'Shaw')"
+        )
+        print("{} record(s) inserted.".format(row_ct))
+
+    database.run_in_transaction(insert_singers)
+    # [END spanner_dml_getting_started_insert]
+
+
+def insert_with_dml_returning(instance_id, database_id):
+    """Inserts sample data into the given database using a DML statement having a THEN RETURN clause."""
+    # [START spanner_dml_insert_returning]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    # Insert records into the SINGERS table and return the generated column
+    # FullName of the inserted records using 'THEN RETURN FullName'.
+    # It is also possible to return all columns of all the
+    # inserted records by using 'THEN RETURN *'.
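+    # Note: this assumes the sample schema in which FullName is a stored
+    # generated column computed from FirstName and LastName.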
+    def insert_singers(transaction):
+        results = transaction.execute_sql(
+            "INSERT INTO Singers (SingerId, FirstName, LastName) VALUES "
+            "(21, 'Luann', 'Chizoba'), "
+            "(22, 'Denis', 'Patricio'), "
+            "(23, 'Felix', 'Ronan'), "
+            "(24, 'Dominik', 'Martyna') "
+            "THEN RETURN FullName"
+        )
+        for result in results:
+            print("FullName: {}".format(*result))
+        print("{} record(s) inserted.".format(results.stats.row_count_exact))
+
+    database.run_in_transaction(insert_singers)
+    # [END spanner_dml_insert_returning]
+
+
+def query_data_with_parameter(instance_id, database_id):
+    """Queries sample data from the database using SQL with a parameter."""
+    # [START spanner_query_with_parameter]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    with database.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT SingerId, FirstName, LastName FROM Singers "
+            "WHERE LastName = @lastName",
+            params={"lastName": "Garcia"},
+            param_types={"lastName": spanner.param_types.STRING},
+        )
+
+        for row in results:
+            print("SingerId: {}, FirstName: {}, LastName: {}".format(*row))
+    # [END spanner_query_with_parameter]
+
+
+def write_with_dml_transaction(instance_id, database_id):
+    """Transfers part of a marketing budget from one album to another."""
+    # [START spanner_dml_getting_started_update]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    def transfer_budget(transaction):
+        # Transfer marketing budget from one album to another. Performed in a
+        # single transaction to ensure that the transfer is atomic.
+        second_album_result = transaction.execute_sql(
+            "SELECT MarketingBudget from Albums WHERE SingerId = 2 and AlbumId = 2"
+        )
+        second_album_row = list(second_album_result)[0]
+        second_album_budget = second_album_row[0]
+
+        transfer_amount = 200000
+
+        # The transaction will only be committed if this condition still holds
+        # at the time of commit. Otherwise it will be aborted and the callable
+        # will be rerun by the client library.
+        if second_album_budget >= transfer_amount:
+            first_album_result = transaction.execute_sql(
+                "SELECT MarketingBudget from Albums "
+                "WHERE SingerId = 1 and AlbumId = 1"
+            )
+            first_album_row = list(first_album_result)[0]
+            first_album_budget = first_album_row[0]
+
+            second_album_budget -= transfer_amount
+            first_album_budget += transfer_amount
+
+            # Update first album
+            transaction.execute_update(
+                "UPDATE Albums "
+                "SET MarketingBudget = @AlbumBudget "
+                "WHERE SingerId = 1 and AlbumId = 1",
+                params={"AlbumBudget": first_album_budget},
+                param_types={"AlbumBudget": spanner.param_types.INT64},
+            )
+
+            # Update second album
+            transaction.execute_update(
+                "UPDATE Albums "
+                "SET MarketingBudget = @AlbumBudget "
+                "WHERE SingerId = 2 and AlbumId = 2",
+                params={"AlbumBudget": second_album_budget},
+                param_types={"AlbumBudget": spanner.param_types.INT64},
+            )
+
+            print(
+                "Transferred {} from Album2's budget to Album1's".format(
+                    transfer_amount
+                )
+            )
+
+    database.run_in_transaction(transfer_budget)
+    # [END spanner_dml_getting_started_update]
+
+
+def update_data_with_partitioned_dml(instance_id, database_id):
+    """Update sample data with a partitioned DML statement."""
+    # [START spanner_dml_partitioned_update]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    row_ct = database.execute_partitioned_dml(
+        "UPDATE Albums SET MarketingBudget = 100000 WHERE SingerId > 1"
+    )
+
+    print("{} records updated.".format(row_ct))
+    # [END spanner_dml_partitioned_update]
+
+
+def delete_data_with_partitioned_dml(instance_id, database_id):
+    """Delete sample data with a partitioned DML statement."""
+    # [START spanner_dml_partitioned_delete]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    row_ct = database.execute_partitioned_dml("DELETE FROM Singers WHERE SingerId > 10")
+
+    print("{} record(s) deleted.".format(row_ct))
+    # [END spanner_dml_partitioned_delete]
+
+
+def update_with_batch_dml(instance_id, database_id):
+    """Updates sample data in the database using Batch DML."""
+    # [START spanner_dml_batch_update]
+    from google.rpc.code_pb2 import OK
+
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    insert_statement = (
+        "INSERT INTO Albums "
+        "(SingerId, AlbumId, AlbumTitle, MarketingBudget) "
+        "VALUES (1, 3, 'Test Album Title', 10000)"
+    )
+
+    update_statement = (
+        "UPDATE Albums "
+        "SET MarketingBudget = MarketingBudget * 2 "
+        "WHERE SingerId = 1 and AlbumId = 3"
+    )
+
+    def update_albums(transaction):
+        status, row_cts = transaction.batch_update([insert_statement, update_statement])
+
+        if status.code != OK:
+            # Do handling here.
+            # Note: the exception will still be raised when
+            # `commit` is called by `run_in_transaction`.
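+            # row_cts only contains counts for the statements that executed
+            # successfully before the failing statement.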
+            return
+
+        print("Executed {} SQL statements using Batch DML.".format(len(row_cts)))
+
+    database.run_in_transaction(update_albums)
+    # [END spanner_dml_batch_update]
+
+
+def create_table_with_datatypes(instance_id, database_id):
+    """Creates a table with supported datatypes."""
+    # [START spanner_create_table_with_datatypes]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    from google.cloud.spanner_admin_database_v1.types import spanner_database_admin
+
+    spanner_client = spanner.Client()
+    database_admin_api = spanner_client.database_admin_api
+
+    request = spanner_database_admin.UpdateDatabaseDdlRequest(
+        database=database_admin_api.database_path(
+            spanner_client.project, instance_id, database_id
+        ),
+        statements=[
+            """CREATE TABLE Venues (
+            VenueId INT64 NOT NULL,
+            VenueName STRING(100),
+            VenueInfo BYTES(MAX),
+            Capacity INT64,
+            AvailableDates ARRAY<DATE>,
+            LastContactDate DATE,
+            OutdoorVenue BOOL,
+            PopularityScore FLOAT64,
+            LastUpdateTime TIMESTAMP NOT NULL
+            OPTIONS(allow_commit_timestamp=true)
+        ) PRIMARY KEY (VenueId)"""
+        ],
+    )
+    operation = database_admin_api.update_database_ddl(request)
+
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+    print(
+        "Created Venues table on database {} on instance {}".format(
+            database_id, instance_id
+        )
+    )
+    # [END spanner_create_table_with_datatypes]
+
+
+def insert_datatypes_data(instance_id, database_id):
+    """Inserts data with supported datatypes into a table."""
+    # [START spanner_insert_datatypes_data]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    exampleBytes1 = base64.b64encode("Hello World 1".encode())
+    exampleBytes2 = base64.b64encode("Hello World 2".encode())
+    exampleBytes3 = base64.b64encode("Hello World 3".encode())
+    available_dates1 = ["2020-12-01", "2020-12-02", "2020-12-03"]
+    available_dates2 = ["2020-11-01", "2020-11-05", "2020-11-15"]
+    available_dates3 = ["2020-10-01", "2020-10-07"]
+    with database.batch() as batch:
+        batch.insert(
+            table="Venues",
+            columns=(
+                "VenueId",
+                "VenueName",
+                "VenueInfo",
+                "Capacity",
+                "AvailableDates",
+                "LastContactDate",
+                "OutdoorVenue",
+                "PopularityScore",
+                "LastUpdateTime",
+            ),
+            values=[
+                (
+                    4,
+                    "Venue 4",
+                    exampleBytes1,
+                    1800,
+                    available_dates1,
+                    "2018-09-02",
+                    False,
+                    0.85543,
+                    spanner.COMMIT_TIMESTAMP,
+                ),
+                (
+                    19,
+                    "Venue 19",
+                    exampleBytes2,
+                    6300,
+                    available_dates2,
+                    "2019-01-15",
+                    True,
+                    0.98716,
+                    spanner.COMMIT_TIMESTAMP,
+                ),
+                (
+                    42,
+                    "Venue 42",
+                    exampleBytes3,
+                    3000,
+                    available_dates3,
+                    "2018-10-01",
+                    False,
+                    0.72598,
+                    spanner.COMMIT_TIMESTAMP,
+                ),
+            ],
+        )
+
+    print("Inserted data.")
+    # [END spanner_insert_datatypes_data]
+
+
+def query_data_with_array(instance_id, database_id):
+    """Queries sample data using SQL with an ARRAY parameter."""
+    # [START spanner_query_with_array_parameter]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    exampleArray = ["2020-10-01", "2020-11-01"]
+    param = {"available_dates": exampleArray}
+    param_type = {"available_dates": param_types.Array(param_types.DATE)}
+
+    with database.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT VenueId, VenueName, AvailableDate FROM Venues v, "
+            "UNNEST(v.AvailableDates) as AvailableDate "
+            "WHERE AvailableDate in UNNEST(@available_dates)",
+            params=param,
+            param_types=param_type,
+        )
+
+        for row in results:
+            print("VenueId: {}, VenueName: {}, AvailableDate: {}".format(*row))
+    # [END spanner_query_with_array_parameter]
+
+
+def query_data_with_bool(instance_id, database_id):
+    """Queries sample data using SQL with a BOOL parameter."""
+    # [START spanner_query_with_bool_parameter]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    exampleBool = True
+    param = {"outdoor_venue": exampleBool}
+    param_type = {"outdoor_venue": param_types.BOOL}
+
+    with database.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT VenueId, VenueName, OutdoorVenue FROM Venues "
+            "WHERE OutdoorVenue = @outdoor_venue",
+            params=param,
+            param_types=param_type,
+        )
+
+        for row in results:
+            print("VenueId: {}, VenueName: {}, OutdoorVenue: {}".format(*row))
+    # [END spanner_query_with_bool_parameter]
+
+
+def query_data_with_bytes(instance_id, database_id):
+    """Queries sample data using SQL with a BYTES parameter."""
+    # [START spanner_query_with_bytes_parameter]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    exampleBytes = base64.b64encode("Hello World 1".encode())
+    param = {"venue_info": exampleBytes}
+    param_type = {"venue_info": param_types.BYTES}
+
+    with database.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT VenueId, VenueName FROM Venues WHERE VenueInfo = @venue_info",
+            params=param,
+            param_types=param_type,
+        )
+
+        for row in results:
+            print("VenueId: {}, VenueName: {}".format(*row))
+    # [END spanner_query_with_bytes_parameter]
+
+
+def query_data_with_date(instance_id, database_id):
+    """Queries sample data using SQL with a DATE parameter."""
+    # [START spanner_query_with_date_parameter]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    exampleDate = "2019-01-01"
+    param = {"last_contact_date": exampleDate}
+    param_type = {"last_contact_date": param_types.DATE}
+
+    with database.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT VenueId, VenueName, LastContactDate FROM Venues "
+            "WHERE LastContactDate < @last_contact_date",
+            params=param,
+            param_types=param_type,
+        )
+
+        for row in results:
+            print("VenueId: {}, VenueName: {}, LastContactDate: {}".format(*row))
+    # [END spanner_query_with_date_parameter]
+
+
+def query_data_with_float(instance_id, database_id):
+    """Queries sample data using SQL with a FLOAT64 parameter."""
+    # [START spanner_query_with_float_parameter]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    exampleFloat = 0.8
+    param = {"popularity_score": exampleFloat}
+    param_type = {"popularity_score": param_types.FLOAT64}
+
+    with database.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT VenueId, VenueName, PopularityScore FROM Venues "
+            "WHERE PopularityScore > @popularity_score",
+            params=param,
+            param_types=param_type,
+        )
+
+        for row in results:
+            print("VenueId: {}, VenueName: {}, PopularityScore: {}".format(*row))
+    # [END spanner_query_with_float_parameter]
+
+
+def query_data_with_int(instance_id, database_id):
+    """Queries sample data using SQL with an INT64 parameter."""
+    # [START spanner_query_with_int_parameter]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    exampleInt = 3000
+    param = {"capacity": exampleInt}
+    param_type = {"capacity": param_types.INT64}
+
+    with database.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT VenueId, VenueName, Capacity FROM Venues "
+            "WHERE Capacity >= @capacity",
+            params=param,
+            param_types=param_type,
+        )
+
+        for row in results:
+            print("VenueId: {}, VenueName: {}, Capacity: {}".format(*row))
+    # [END spanner_query_with_int_parameter]
+
+
+def query_data_with_string(instance_id, database_id):
+    """Queries sample data using SQL with a STRING parameter."""
+    # [START spanner_query_with_string_parameter]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    exampleString = "Venue 42"
+    param = {"venue_name": exampleString}
+    param_type = {"venue_name": param_types.STRING}
+
+    with database.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT VenueId, VenueName FROM Venues WHERE VenueName = @venue_name",
+            params=param,
+            param_types=param_type,
+        )
+
+        for row in results:
+            print("VenueId: {}, VenueName: {}".format(*row))
+    # [END spanner_query_with_string_parameter]
+
+
+def query_data_with_numeric_parameter(instance_id, database_id):
+    """Queries sample data using SQL with a NUMERIC parameter."""
+    # [START spanner_query_with_numeric_parameter]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    example_numeric = decimal.Decimal("100000")
+    param = {"revenue": example_numeric}
+    param_type = {"revenue": param_types.NUMERIC}
+
+    with database.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT VenueId, Revenue FROM Venues WHERE Revenue < @revenue",
+            params=param,
+            param_types=param_type,
+        )
+
+        for row in results:
+            print("VenueId: {}, Revenue: {}".format(*row))
+    # [END spanner_query_with_numeric_parameter]
+
+
+def query_data_with_json_parameter(instance_id, database_id):
+    """Queries sample data using SQL with a JSON parameter."""
+    # [START spanner_query_with_json_parameter]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    example_json = json.dumps({"rating": 9})
+    param = {"details": example_json}
+    param_type = {"details": param_types.JSON}
+
+    with database.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT VenueId, VenueDetails "
+            "FROM Venues "
+            "WHERE JSON_VALUE(VenueDetails, '$.rating') = "
+            "JSON_VALUE(@details, '$.rating')",
+            params=param,
+            param_types=param_type,
+        )
+
+        for row in results:
+            print("VenueId: {}, VenueDetails: {}".format(*row))
+    # [END spanner_query_with_json_parameter]
+
+
+def query_data_with_timestamp_parameter(instance_id, database_id):
+    """Queries sample data using SQL with a TIMESTAMP parameter."""
+    # [START spanner_query_with_timestamp_parameter]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    example_timestamp = datetime.datetime.utcnow().isoformat() + "Z"
+    # [END spanner_query_with_timestamp_parameter]
+    # Avoid time drift on the local machine.
+    # https://github.com/GoogleCloudPlatform/python-docs-samples/issues/4197.
+    example_timestamp = (
+        datetime.datetime.utcnow() + datetime.timedelta(days=1)
+    ).isoformat() + "Z"
+    # [START spanner_query_with_timestamp_parameter]
+    param = {"last_update_time": example_timestamp}
+    param_type = {"last_update_time": param_types.TIMESTAMP}
+
+    with database.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT VenueId, VenueName, LastUpdateTime FROM Venues "
+            "WHERE LastUpdateTime < @last_update_time",
+            params=param,
+            param_types=param_type,
+        )
+
+        for row in results:
+            print("VenueId: {}, VenueName: {}, LastUpdateTime: {}".format(*row))
+    # [END spanner_query_with_timestamp_parameter]
+
+
+def query_data_with_query_options(instance_id, database_id):
+    """Queries sample data using SQL with query options."""
+    # [START spanner_query_with_query_options]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    with database.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT VenueId, VenueName, LastUpdateTime FROM Venues",
+            query_options={
+                "optimizer_version": "1",
+                "optimizer_statistics_package": "latest",
+            },
+        )
+
+        for row in results:
+            print("VenueId: {}, VenueName: {}, LastUpdateTime: {}".format(*row))
+    # [END spanner_query_with_query_options]
+
+
+def create_client_with_query_options(instance_id, database_id):
+    """Create a client with query options."""
+    # [START spanner_create_client_with_query_options]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client(
+        query_options={
+            "optimizer_version": "1",
+            "optimizer_statistics_package": "latest",
+        }
+    )
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    with database.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT VenueId, VenueName, LastUpdateTime FROM Venues"
+        )
+
+        for row in results:
+            print("VenueId: {}, VenueName: {}, LastUpdateTime: {}".format(*row))
+    # [END spanner_create_client_with_query_options]
+
+
+def set_transaction_tag(instance_id, database_id):
+    """Executes a transaction with a transaction tag."""
+    # [START spanner_set_transaction_tag]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    def update_venues(transaction):
+        # Sets the request tag to "app=concert,env=dev,action=update".
+        # This request tag will only be set on this request.
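+        # (A request tag applies to a single operation, whereas the
+        # transaction tag passed to run_in_transaction below applies to every
+        # operation in the transaction.)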
+        transaction.execute_update(
+            "UPDATE Venues SET Capacity = CAST(Capacity/4 AS INT64) WHERE OutdoorVenue = false",
+            request_options={"request_tag": "app=concert,env=dev,action=update"},
+        )
+        print("Venue capacities updated.")
+
+        # Sets the request tag to "app=concert,env=dev,action=insert".
+        # This request tag will only be set on this request.
+        transaction.execute_update(
+            "INSERT INTO Venues (VenueId, VenueName, Capacity, OutdoorVenue, LastUpdateTime) "
+            "VALUES (@venueId, @venueName, @capacity, @outdoorVenue, PENDING_COMMIT_TIMESTAMP())",
+            params={
+                "venueId": 81,
+                "venueName": "Venue 81",
+                "capacity": 1440,
+                "outdoorVenue": True,
+            },
+            param_types={
+                "venueId": param_types.INT64,
+                "venueName": param_types.STRING,
+                "capacity": param_types.INT64,
+                "outdoorVenue": param_types.BOOL,
+            },
+            request_options={"request_tag": "app=concert,env=dev,action=insert"},
+        )
+        print("New venue inserted.")
+
+    database.run_in_transaction(update_venues, transaction_tag="app=concert,env=dev")
+
+    # [END spanner_set_transaction_tag]
+
+
+def set_transaction_timeout(instance_id, database_id):
+    """Executes a transaction with a transaction timeout."""
+    # [START spanner_transaction_timeout]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    def read_then_write(transaction):
+        # Read records.
+        results = transaction.execute_sql(
+            "SELECT SingerId, FirstName, LastName FROM Singers ORDER BY LastName, FirstName"
+        )
+        for result in results:
+            print("SingerId: {}, FirstName: {}, LastName: {}".format(*result))
+
+        # Insert a record.
+        row_ct = transaction.execute_update(
+            "INSERT INTO Singers (SingerId, FirstName, LastName) "
+            " VALUES (100, 'George', 'Washington')"
+        )
+        print("{} record(s) inserted.".format(row_ct))
+
+    # Configure the transaction timeout to 60 seconds.
+    database.run_in_transaction(read_then_write, timeout_secs=60)
+
+    # [END spanner_transaction_timeout]
+
+
+def set_statement_timeout(instance_id, database_id):
+    """Executes a transaction with a statement timeout."""
+    # [START spanner_set_statement_timeout]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    def write(transaction):
+        # Insert a record and configure the statement timeout to 60 seconds.
+        # Note that this timeout can ONLY BE SHORTER than the default timeout
+        # for the RPC. If you set a timeout that is longer than the default
+        # timeout, then the default timeout will be used.
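+        # (This per-statement timeout bounds only this execute_update RPC;
+        # the timeout_secs argument used in set_transaction_timeout above
+        # bounds the whole transaction instead.)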
+        row_ct = transaction.execute_update(
+            "INSERT INTO Singers (SingerId, FirstName, LastName) "
+            " VALUES (110, 'George', 'Washington')",
+            timeout=60,
+        )
+        print("{} record(s) inserted.".format(row_ct))
+
+    database.run_in_transaction(write)
+
+    # [END spanner_set_statement_timeout]
+
+
+def set_request_tag(instance_id, database_id):
+    """Executes a snapshot read with a request tag."""
+    # [START spanner_set_request_tag]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    with database.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT SingerId, AlbumId, AlbumTitle FROM Albums",
+            request_options={"request_tag": "app=concert,env=dev,action=select"},
+        )
+
+        for row in results:
+            print("SingerId: {}, AlbumId: {}, AlbumTitle: {}".format(*row))
+
+    # [END spanner_set_request_tag]
+
+
+# [START spanner_create_instance_config]
+def create_instance_config(user_config_name, base_config_id):
+    """Creates a new user-managed instance configuration based on a base instance configuration."""
+
+    # user_config_name = `custom-nam11`
+    # base_config_id = `projects//instanceConfigs/nam11`
+    spanner_client = spanner.Client()
+    base_config = spanner_client.instance_admin_api.get_instance_config(
+        name=base_config_id
+    )
+
+    # The replicas for the custom instance configuration must include all the
+    # replicas of the base configuration, in addition to at least one from the
+    # list of optional replicas of the base configuration.
+    replicas = []
+    for replica in base_config.replicas:
+        replicas.append(replica)
+    replicas.append(base_config.optional_replicas[0])
+    operation = spanner_client.instance_admin_api.create_instance_config(
+        parent=spanner_client.project_name,
+        instance_config_id=user_config_name,
+        instance_config=spanner_instance_admin.InstanceConfig(
+            name="{}/instanceConfigs/{}".format(
+                spanner_client.project_name, user_config_name
+            ),
+            display_name="custom-python-samples",
+            config_type=spanner_instance_admin.InstanceConfig.Type.USER_MANAGED,
+            replicas=replicas,
+            base_config=base_config.name,
+            labels={"python_cloud_spanner_samples": "true"},
+        ),
+    )
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+    print("Created instance configuration {}".format(user_config_name))
+
+
+# [END spanner_create_instance_config]
+
+
+# [START spanner_update_instance_config]
+def update_instance_config(user_config_name):
+    """Updates the user-managed instance configuration."""
+
+    # user_config_name = `custom-nam11`
+    spanner_client = spanner.Client()
+    config = spanner_client.instance_admin_api.get_instance_config(
+        name="{}/instanceConfigs/{}".format(
+            spanner_client.project_name, user_config_name
+        )
+    )
+    config.display_name = "updated custom instance config"
+    config.labels["updated"] = "true"
+    operation = spanner_client.instance_admin_api.update_instance_config(
+        instance_config=config,
+        update_mask=field_mask_pb2.FieldMask(paths=["display_name", "labels"]),
+    )
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+    print("Updated instance configuration {}".format(user_config_name))
+
+
+# [END spanner_update_instance_config]
+
+
+# [START spanner_delete_instance_config]
+def delete_instance_config(user_config_id):
+    """Deletes the user-managed instance configuration."""
+    spanner_client = spanner.Client()
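+    # user_config_id is expected to be the full resource name, e.g.
+    # "projects/<project>/instanceConfigs/custom-nam11" (placeholder shown
+    # for illustration).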
spanner_client.instance_admin_api.delete_instance_config(name=user_config_id) + print("Instance config {} successfully deleted".format(user_config_id)) + + +# [END spanner_delete_instance_config] + + +# [START spanner_list_instance_config_operations] +def list_instance_config_operations(): + """List the user-managed instance configuration operations.""" + spanner_client = spanner.Client() + operations = spanner_client.instance_admin_api.list_instance_config_operations( + request=spanner_instance_admin.ListInstanceConfigOperationsRequest( + parent=spanner_client.project_name, + filter="(metadata.@type=type.googleapis.com/google.spanner.admin.instance.v1.CreateInstanceConfigMetadata)", + ) + ) + for op in operations: + metadata = spanner_instance_admin.CreateInstanceConfigMetadata.pb( + spanner_instance_admin.CreateInstanceConfigMetadata() + ) + op.metadata.Unpack(metadata) + print( + "List instance config operations {} is {}% completed.".format( + metadata.instance_config.name, metadata.progress.progress_percent + ) + ) + + +# [END spanner_list_instance_config_operations] + + +def add_and_drop_database_roles(instance_id, database_id): + """Showcases how to manage a user defined database role.""" + # [START spanner_add_and_drop_database_role] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + role_parent = "new_parent" + role_child = "new_child" + + request = spanner_database_admin.UpdateDatabaseDdlRequest( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + statements=[ + "CREATE ROLE {}".format(role_parent), + "GRANT SELECT ON TABLE Singers TO ROLE {}".format(role_parent), + "CREATE ROLE {}".format(role_child), + "GRANT ROLE {} TO ROLE {}".format(role_parent, role_child), + ], + ) + operation = database_admin_api.update_database_ddl(request) + + operation.result(OPERATION_TIMEOUT_SECONDS) + print( + "Created roles {} and {} and granted privileges".format(role_parent, role_child) + ) + + request = spanner_database_admin.UpdateDatabaseDdlRequest( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + statements=[ + "REVOKE ROLE {} FROM ROLE {}".format(role_parent, role_child), + "DROP ROLE {}".format(role_child), + ], + ) + operation = database_admin_api.update_database_ddl(request) + + operation.result(OPERATION_TIMEOUT_SECONDS) + print("Revoked privileges and dropped role {}".format(role_child)) + + # [END spanner_add_and_drop_database_role] + + +def read_data_with_database_role(instance_id, database_id): + """Showcases how a user defined database role is used by member.""" + # [START spanner_read_data_with_database_role] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + role = "new_parent" + database = instance.database(database_id, database_role=role) + + with database.snapshot() as snapshot: + results = snapshot.execute_sql("SELECT * FROM Singers") + for row in results: + print("SingerId: {}, FirstName: {}, LastName: {}".format(*row)) + + # [END spanner_read_data_with_database_role] + + +def list_database_roles(instance_id, database_id): + """Showcases how to list Database Roles.""" + # [START spanner_list_database_roles] + # instance_id = "your-spanner-instance" 
+ # database_id = "your-spanner-db-id" + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.ListDatabaseRolesRequest( + parent=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ) + ) + # List database roles. + print("Database Roles are:") + for role in database_admin_api.list_database_roles(request): + print(role.name.split("/")[-1]) + # [END spanner_list_database_roles] + + +def enable_fine_grained_access( + instance_id, + database_id, + iam_member="user:alice@example.com", + database_role="new_parent", + title="condition title", +): + """Showcases how to enable fine grained access control.""" + # [START spanner_enable_fine_grained_access] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + # iam_member = "user:alice@example.com" + # database_role = "new_parent" + # title = "condition title" + + from google.iam.v1 import iam_policy_pb2, options_pb2, policy_pb2 + from google.type import expr_pb2 + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + # The policy in the response from getDatabaseIAMPolicy might use the policy version + # that you specified, or it might use a lower policy version. For example, if you + # specify version 3, but the policy has no conditional role bindings, the response + # uses version 1. Valid values are 0, 1, and 3. + request = iam_policy_pb2.GetIamPolicyRequest( + resource=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + options=options_pb2.GetPolicyOptions(requested_policy_version=3), + ) + policy = database_admin_api.get_iam_policy(request=request) + if policy.version < 3: + policy.version = 3 + + new_binding = policy_pb2.Binding( + role="roles/spanner.fineGrainedAccessUser", + members=[iam_member], + condition=expr_pb2.Expr( + title=title, + expression=f'resource.name.endsWith("/databaseRoles/{database_role}")', + ), + ) + + policy.version = 3 + policy.bindings.append(new_binding) + set_request = iam_policy_pb2.SetIamPolicyRequest( + resource=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + policy=policy, + ) + database_admin_api.set_iam_policy(set_request) + + new_policy = database_admin_api.get_iam_policy(request=request) + print( + f"Enabled fine-grained access in IAM. 
New policy has version {new_policy.version}" + ) + # [END spanner_enable_fine_grained_access] + + +# [START spanner_create_table_with_foreign_key_delete_cascade] +def create_table_with_foreign_key_delete_cascade(instance_id, database_id): + """Creates a table with foreign key delete cascade action""" + + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.UpdateDatabaseDdlRequest( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + statements=[ + """CREATE TABLE Customers ( + CustomerId INT64 NOT NULL, + CustomerName STRING(62) NOT NULL, + ) PRIMARY KEY (CustomerId) + """, + """ + CREATE TABLE ShoppingCarts ( + CartId INT64 NOT NULL, + CustomerId INT64 NOT NULL, + CustomerName STRING(62) NOT NULL, + CONSTRAINT FKShoppingCartsCustomerId FOREIGN KEY (CustomerId) + REFERENCES Customers (CustomerId) ON DELETE CASCADE + ) PRIMARY KEY (CartId) + """, + ], + ) + + operation = database_admin_api.update_database_ddl(request) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + """Created Customers and ShoppingCarts table with FKShoppingCartsCustomerId + foreign key constraint on database {} on instance {}""".format( + database_id, instance_id + ) + ) + + +# [END spanner_create_table_with_foreign_key_delete_cascade] + + +# [START spanner_alter_table_with_foreign_key_delete_cascade] +def alter_table_with_foreign_key_delete_cascade(instance_id, database_id): + """Alters a table with foreign key delete cascade action""" + + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.UpdateDatabaseDdlRequest( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + statements=[ + """ALTER TABLE ShoppingCarts + ADD CONSTRAINT FKShoppingCartsCustomerName + FOREIGN KEY (CustomerName) + REFERENCES Customers(CustomerName) + ON DELETE CASCADE""" + ], + ) + + operation = database_admin_api.update_database_ddl(request) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + """Altered ShoppingCarts table with FKShoppingCartsCustomerName + foreign key constraint on database {} on instance {}""".format( + database_id, instance_id + ) + ) + + +# [END spanner_alter_table_with_foreign_key_delete_cascade] + + +# [START spanner_drop_foreign_key_constraint_delete_cascade] +def drop_foreign_key_constraint_delete_cascade(instance_id, database_id): + """Alter table to drop foreign key delete cascade action""" + + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.UpdateDatabaseDdlRequest( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + statements=[ + """ALTER TABLE ShoppingCarts + DROP CONSTRAINT FKShoppingCartsCustomerName""" + ], + ) + + operation = database_admin_api.update_database_ddl(request) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + """Altered ShoppingCarts table to drop FKShoppingCartsCustomerName + foreign key constraint on 
database {} on instance {}""".format( + database_id, instance_id + ) + ) + + +# [END spanner_drop_foreign_key_constraint_delete_cascade] + + +# [START spanner_create_sequence] +def create_sequence(instance_id, database_id): + """Creates the Sequence and insert data""" + + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.UpdateDatabaseDdlRequest( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + statements=[ + "CREATE SEQUENCE Seq OPTIONS (sequence_kind = 'bit_reversed_positive')", + """CREATE TABLE Customers ( + CustomerId INT64 DEFAULT (GET_NEXT_SEQUENCE_VALUE(Sequence Seq)), + CustomerName STRING(1024) + ) PRIMARY KEY (CustomerId)""", + ], + ) + + operation = database_admin_api.update_database_ddl(request) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Created Seq sequence and Customers table, where the key column CustomerId uses the sequence as a default value on database {} on instance {}".format( + database_id, instance_id + ) + ) + + def insert_customers(transaction): + results = transaction.execute_sql( + "INSERT INTO Customers (CustomerName) VALUES " + "('Alice'), " + "('David'), " + "('Marc') " + "THEN RETURN CustomerId" + ) + for result in results: + print("Inserted customer record with Customer Id: {}".format(*result)) + print( + "Number of customer records inserted is {}".format( + results.stats.row_count_exact + ) + ) + + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + database.run_in_transaction(insert_customers) + + +# [END spanner_create_sequence] + + +# [START spanner_alter_sequence] +def alter_sequence(instance_id, database_id): + """Alters the Sequence and insert data""" + + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.UpdateDatabaseDdlRequest( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + statements=[ + "ALTER SEQUENCE Seq SET OPTIONS (skip_range_min = 1000, skip_range_max = 5000000)", + ], + ) + + operation = database_admin_api.update_database_ddl(request) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Altered Seq sequence to skip an inclusive range between 1000 and 5000000 on database {} on instance {}".format( + database_id, instance_id + ) + ) + + def insert_customers(transaction): + results = transaction.execute_sql( + "INSERT INTO Customers (CustomerName) VALUES " + "('Lea'), " + "('Cataline'), " + "('Smith') " + "THEN RETURN CustomerId" + ) + for result in results: + print("Inserted customer record with Customer Id: {}".format(*result)) + print( + "Number of customer records inserted is {}".format( + results.stats.row_count_exact + ) + ) + + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + database.run_in_transaction(insert_customers) + + +# [END spanner_alter_sequence] + + +# [START spanner_drop_sequence] +def drop_sequence(instance_id, database_id): + """Drops the Sequence""" + + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + 
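+    # Note on statement order (an assumption inferred from the DDL below): the
+    # column DEFAULT that references the sequence is dropped first, since a
+    # sequence that is still referenced cannot be dropped.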
database_admin_api = spanner_client.database_admin_api
+
+    request = spanner_database_admin.UpdateDatabaseDdlRequest(
+        database=database_admin_api.database_path(
+            spanner_client.project, instance_id, database_id
+        ),
+        statements=[
+            "ALTER TABLE Customers ALTER COLUMN CustomerId DROP DEFAULT",
+            "DROP SEQUENCE Seq",
+        ],
+    )
+
+    operation = database_admin_api.update_database_ddl(request)
+
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+
+    print(
+        "Altered Customers table to drop DEFAULT from CustomerId column and dropped the Seq sequence on database {} on instance {}".format(
+            database_id, instance_id
+        )
+    )
+
+
+# [END spanner_drop_sequence]
+
+
+def directed_read_options(
+    instance_id,
+    database_id,
+):
+    """
+    Shows how to run an ExecuteSql request with directed read options.
+    Only one of exclude_replicas or include_replicas can be set.
+    Each accepts a list of replica selections, each of which contains a
+    location and a type:
+    * `location` - The location must be one of the regions within the
+      multi-region configuration of your database.
+    * `type_` - The type of the replica.
+    Some examples of using replica_selectors are:
+    * `location:us-east1` --> The "us-east1" replica(s) of any available type
+      will be used to process the request.
+    * `type:READ_ONLY` --> The "READ_ONLY" type replica(s) in the nearest
+      available location will be used to process the request.
+    * `location:us-east1 type:READ_ONLY` --> The "READ_ONLY" type replica(s)
+      in location "us-east1" will be used to process the request.
+    include_replicas also accepts an auto_failover_disabled option; when it is
+    set, Spanner will not route requests to a replica outside the
+    include_replicas list even when all the specified replicas are unavailable
+    or unhealthy. The default value is `false`.
+    """
+    # [START spanner_directed_read]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    directed_read_options_for_client = {
+        "exclude_replicas": {
+            "replica_selections": [
+                {
+                    "location": "us-east4",
+                },
+            ],
+        },
+    }
+
+    # directed_read_options can be set at client level and will be used in all
+    # read-only transaction requests
+    spanner_client = spanner.Client(
+        directed_read_options=directed_read_options_for_client
+    )
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    directed_read_options_for_request = {
+        "include_replicas": {
+            "replica_selections": [
+                {
+                    "type_": DirectedReadOptions.ReplicaSelection.Type.READ_ONLY,
+                },
+            ],
+            "auto_failover_disabled": True,
+        },
+    }
+
+    with database.snapshot() as snapshot:
+        # Read rows while passing directed_read_options directly to the query.
+        # These will override the options passed at Client level.
+        results = snapshot.execute_sql(
+            "SELECT SingerId, AlbumId, AlbumTitle FROM Albums",
+            directed_read_options=directed_read_options_for_request,
+        )
+
+        for row in results:
+            print("SingerId: {}, AlbumId: {}, AlbumTitle: {}".format(*row))
+    # [END spanner_directed_read]
+
+
+def isolation_level_options(
+    instance_id,
+    database_id,
+):
+    """
+    Shows how to run a read-write transaction with isolation level options.
+    """
+    from google.cloud.spanner_v1 import TransactionOptions, DefaultTransactionOptions
+
+    # [START spanner_isolation_level]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    # The isolation level specified at the client level is applied to all
+    # read-write transactions.
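+    # (Both levels used in this sample come from
+    # TransactionOptions.IsolationLevel: SERIALIZABLE below as the client-wide
+    # default, REPEATABLE_READ later for a single transaction.)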
+ isolation_options_for_client = TransactionOptions.IsolationLevel.SERIALIZABLE + + spanner_client = spanner.Client( + default_transaction_options=DefaultTransactionOptions( + isolation_level=isolation_options_for_client + ) + ) + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + # The isolation level specified at the request level takes precedence over the isolation level configured at the client level. + isolation_options_for_transaction = ( + TransactionOptions.IsolationLevel.REPEATABLE_READ + ) + + def update_albums_with_isolation(transaction): + # Read an AlbumTitle. + results = transaction.execute_sql( + "SELECT AlbumTitle from Albums WHERE SingerId = 1 and AlbumId = 1" + ) + for result in results: + print("Current Album Title: {}".format(*result)) + + # Update the AlbumTitle. + row_ct = transaction.execute_update( + "UPDATE Albums SET AlbumTitle = 'A New Title' WHERE SingerId = 1 and AlbumId = 1" + ) + + print("{} record(s) updated.".format(row_ct)) + + database.run_in_transaction( + update_albums_with_isolation, isolation_level=isolation_options_for_transaction + ) + # [END spanner_isolation_level] + + +def set_custom_timeout_and_retry(instance_id, database_id): + """Executes a snapshot read with custom timeout and retry.""" + # [START spanner_set_custom_timeout_and_retry] + from google.api_core import exceptions as core_exceptions + from google.api_core import retry + + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_id) + database = instance.database(database_id) + + retry = retry.Retry( + # Customize retry with an initial wait time of 500 milliseconds. + initial=0.5, + # Customize retry with a maximum wait time of 16 seconds. + maximum=16, + # Customize retry with a wait time multiplier per iteration of 1.5. + multiplier=1.5, + # Customize retry with a timeout on + # how long a certain RPC may be retried in + # case the server returns an error. + timeout=60, + # Configure which errors should be retried. + predicate=retry.if_exception_type( + core_exceptions.ServiceUnavailable, + ), + ) + + # Set a custom retry and timeout setting. + with database.snapshot() as snapshot: + results = snapshot.execute_sql( + "SELECT SingerId, AlbumId, AlbumTitle FROM Albums", + # Set custom retry setting for this request + retry=retry, + # Set custom timeout of 60 seconds for this request + timeout=60, + ) + + for row in results: + print("SingerId: {}, AlbumId: {}, AlbumTitle: {}".format(*row)) + + # [END spanner_set_custom_timeout_and_retry] + + +# [START spanner_create_instance_with_autoscaling_config] +def create_instance_with_autoscaling_config(instance_id): + """Creates a Cloud Spanner instance with an autoscaling configuration.""" + from google.cloud.spanner_admin_instance_v1.types import spanner_instance_admin + + spanner_client = spanner.Client() + + config_name = "{}/instanceConfigs/regional-us-central1".format( + spanner_client.project_name + ) + + autoscaling_config = spanner_instance_admin.AutoscalingConfig( + # Only one of minNodes/maxNodes or minProcessingUnits/maxProcessingUnits can be set. + autoscaling_limits=spanner_instance_admin.AutoscalingConfig.AutoscalingLimits( + min_nodes=1, + max_nodes=2, + ), + # highPriorityCpuUtilizationPercent and storageUtilizationPercent are both + # percentages and must lie between 0 and 100. 
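+        # (Assumed semantics, not stated in this sample: the autoscaler adds
+        # capacity when either utilization target below is exceeded, staying
+        # within the node limits configured above.)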
+ autoscaling_targets=spanner_instance_admin.AutoscalingConfig.AutoscalingTargets( + high_priority_cpu_utilization_percent=65, + storage_utilization_percent=95, + ), + ) + + # Creates a new instance with autoscaling configuration + # When autoscalingConfig is enabled, nodeCount and processingUnits fields + # need not be specified. + request = spanner_instance_admin.CreateInstanceRequest( + parent=spanner_client.project_name, + instance_id=instance_id, + instance=spanner_instance_admin.Instance( + config=config_name, + display_name="This is a display name.", + autoscaling_config=autoscaling_config, + labels={ + "cloud_spanner_samples": "true", + "sample_name": "snippets-create_instance_with_autoscaling_config", + "created": str(int(time.time())), + }, + edition=spanner_instance_admin.Instance.Edition.ENTERPRISE, # Optional + ), + ) + + operation = spanner_client.instance_admin_api.create_instance(request=request) + + print("Waiting for operation to complete...") + instance = operation.result(OPERATION_TIMEOUT_SECONDS) + + print( + "Created instance {} with {} autoscaling config".format( + instance_id, instance.autoscaling_config + ) + ) + + +# [END spanner_create_instance_with_autoscaling_config] + + +# [START spanner_create_instance_without_default_backup_schedule] +def create_instance_without_default_backup_schedules(instance_id): + spanner_client = spanner.Client() + config_name = "{}/instanceConfigs/regional-me-central2".format( + spanner_client.project_name + ) + + operation = spanner_client.instance_admin_api.create_instance( + parent=spanner_client.project_name, + instance_id=instance_id, + instance=spanner_instance_admin.Instance( + config=config_name, + display_name="This is a display name.", + node_count=1, + default_backup_schedule_type=spanner_instance_admin.Instance.DefaultBackupScheduleType.NONE, # Optional + ), + ) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print("Created instance {} without default backup schedules".format(instance_id)) + + +# [END spanner_create_instance_without_default_backup_schedule] + + +# [START spanner_update_instance_default_backup_schedule_type] +def update_instance_default_backup_schedule_type(instance_id): + spanner_client = spanner.Client() + + name = "{}/instances/{}".format(spanner_client.project_name, instance_id) + + operation = spanner_client.instance_admin_api.update_instance( + instance=spanner_instance_admin.Instance( + name=name, + default_backup_schedule_type=spanner_instance_admin.Instance.DefaultBackupScheduleType.AUTOMATIC, # Optional + ), + field_mask=field_mask_pb2.FieldMask(paths=["default_backup_schedule_type"]), + ) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print("Updated instance {} to have default backup schedules".format(instance_id)) + + +# [END spanner_update_instance_default_backup_schedule_type] + + +def add_proto_type_columns(instance_id, database_id): + # [START spanner_add_proto_type_columns] + # instance_id = "your-spanner-instance" + # database_id = "your-spanner-db-id" + + """Adds a new Proto Message column and Proto Enum column to the Singers table.""" + + import os + + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + dirname = os.path.dirname(__file__) + filename = os.path.join(dirname, "testdata/descriptors.pb") + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + proto_descriptor_file = open(filename, "rb") + 
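+    # descriptors.pb is a serialized FileDescriptorSet. Assuming the standard
+    # protoc toolchain and a hypothetical testdata/singer.proto source file,
+    # it could be regenerated with something like:
+    #   protoc --include_imports --descriptor_set_out=testdata/descriptors.pb \
+    #       testdata/singer.proto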
proto_descriptor = proto_descriptor_file.read()
+
+    request = spanner_database_admin.UpdateDatabaseDdlRequest(
+        database=database_admin_api.database_path(
+            spanner_client.project, instance_id, database_id
+        ),
+        statements=[
+            """CREATE PROTO BUNDLE (
+            examples.spanner.music.SingerInfo,
+            examples.spanner.music.Genre,
+            )""",
+            "ALTER TABLE Singers ADD COLUMN SingerInfo examples.spanner.music.SingerInfo",
+            "ALTER TABLE Singers ADD COLUMN SingerInfoArray ARRAY<examples.spanner.music.SingerInfo>",
+            "ALTER TABLE Singers ADD COLUMN SingerGenre examples.spanner.music.Genre",
+            "ALTER TABLE Singers ADD COLUMN SingerGenreArray ARRAY<examples.spanner.music.Genre>",
+        ],
+        proto_descriptors=proto_descriptor,
+    )
+
+    operation = database_admin_api.update_database_ddl(request)
+
+    print("Waiting for operation to complete...")
+    operation.result(OPERATION_TIMEOUT_SECONDS)
+    proto_descriptor_file.close()
+
+    print(
+        'Altered table "Singers" on database {} on instance {} with proto descriptors.'.format(
+            database_id, instance_id
+        )
+    )
+    # [END spanner_add_proto_type_columns]
+
+
+def update_data_with_proto_types(instance_id, database_id):
+    # [START spanner_update_data_with_proto_types]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    """Updates the Singers table in the database with the ProtoMessage
+    and ProtoEnum columns.
+
+    This updates the `SingerInfo`, `SingerInfoArray`, `SingerGenre` and
+    `SingerGenreArray` columns, which must be created before
+    running this sample. You can add the columns by running the
+    `add_proto_type_columns` sample or by running these DDL statements
+    against your database:
+
+    ALTER TABLE Singers ADD COLUMN SingerInfo examples.spanner.music.SingerInfo
+    ALTER TABLE Singers ADD COLUMN SingerInfoArray ARRAY<examples.spanner.music.SingerInfo>
+    ALTER TABLE Singers ADD COLUMN SingerGenre examples.spanner.music.Genre
+    ALTER TABLE Singers ADD COLUMN SingerGenreArray ARRAY<examples.spanner.music.Genre>
+    """
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    singer_info = singer_pb2.SingerInfo()
+    singer_info.singer_id = 2
+    singer_info.birth_date = "February"
+    singer_info.nationality = "Country2"
+    singer_info.genre = singer_pb2.Genre.FOLK
+
+    singer_info_array = [singer_info]
+
+    singer_genre_array = [singer_pb2.Genre.FOLK]
+
+    with database.batch() as batch:
+        batch.update(
+            table="Singers",
+            columns=(
+                "SingerId",
+                "SingerInfo",
+                "SingerInfoArray",
+                "SingerGenre",
+                "SingerGenreArray",
+            ),
+            values=[
+                (
+                    2,
+                    singer_info,
+                    singer_info_array,
+                    singer_pb2.Genre.FOLK,
+                    singer_genre_array,
+                ),
+                (3, None, None, None, None),
+            ],
+        )
+
+    print("Data updated.")
+    # [END spanner_update_data_with_proto_types]
+
+
+def update_data_with_proto_types_with_dml(instance_id, database_id):
+    # [START spanner_update_data_with_proto_types_with_dml]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    """Updates the Singers table in the database with the ProtoMessage
+    and ProtoEnum columns.
+
+    This updates the `SingerInfo`, `SingerInfoArray`, `SingerGenre` and
+    `SingerGenreArray` columns, which must be created before
+    running this sample.
You can add the columns by running the
+    `add_proto_type_columns` sample or by running these DDL statements
+    against your database:
+
+    ALTER TABLE Singers ADD COLUMN SingerInfo examples.spanner.music.SingerInfo
+    ALTER TABLE Singers ADD COLUMN SingerInfoArray ARRAY<examples.spanner.music.SingerInfo>
+    ALTER TABLE Singers ADD COLUMN SingerGenre examples.spanner.music.Genre
+    ALTER TABLE Singers ADD COLUMN SingerGenreArray ARRAY<examples.spanner.music.Genre>
+    """
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    singer_info = singer_pb2.SingerInfo()
+    singer_info.singer_id = 1
+    singer_info.birth_date = "January"
+    singer_info.nationality = "Country1"
+    singer_info.genre = singer_pb2.Genre.ROCK
+
+    singer_info_array = [singer_info, None]
+
+    singer_genre_array = [singer_pb2.Genre.ROCK, None]
+
+    def update_singers_with_proto_types(transaction):
+        row_ct = transaction.execute_update(
+            "UPDATE Singers "
+            "SET SingerInfo = @singerInfo, SingerInfoArray=@singerInfoArray, "
+            "SingerGenre=@singerGenre, SingerGenreArray=@singerGenreArray "
+            "WHERE SingerId = 1",
+            params={
+                "singerInfo": singer_info,
+                "singerInfoArray": singer_info_array,
+                "singerGenre": singer_pb2.Genre.ROCK,
+                "singerGenreArray": singer_genre_array,
+            },
+            param_types={
+                "singerInfo": param_types.ProtoMessage(singer_info),
+                "singerInfoArray": param_types.Array(
+                    param_types.ProtoMessage(singer_info)
+                ),
+                "singerGenre": param_types.ProtoEnum(singer_pb2.Genre),
+                "singerGenreArray": param_types.Array(
+                    param_types.ProtoEnum(singer_pb2.Genre)
+                ),
+            },
+        )
+
+        print("{} record(s) updated.".format(row_ct))
+
+    database.run_in_transaction(update_singers_with_proto_types)
+
+    def update_singers_with_proto_field(transaction):
+        row_ct = transaction.execute_update(
+            "UPDATE Singers "
+            "SET SingerInfo.nationality = @singerNationality "
+            "WHERE SingerId = 1",
+            params={
+                "singerNationality": "Country2",
+            },
+            param_types={
+                "singerNationality": param_types.STRING,
+            },
+        )
+
+        print("{} record(s) updated.".format(row_ct))
+
+    database.run_in_transaction(update_singers_with_proto_field)
+    # [END spanner_update_data_with_proto_types_with_dml]
+
+
+def query_data_with_proto_types_parameter(instance_id, database_id):
+    # [START spanner_query_with_proto_types_parameter]
+    # instance_id = "your-spanner-instance"
+    # database_id = "your-spanner-db-id"
+
+    spanner_client = spanner.Client()
+    instance = spanner_client.instance(instance_id)
+    database = instance.database(database_id)
+
+    with database.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT SingerId, SingerInfo, SingerInfo.nationality, SingerInfoArray, "
+            "SingerGenre, SingerGenreArray FROM Singers "
+            "WHERE SingerInfo.Nationality=@country "
+            "and SingerGenre=@singerGenre",
+            params={
+                "country": "Country2",
+                "singerGenre": singer_pb2.Genre.FOLK,
+            },
+            param_types={
+                "country": param_types.STRING,
+                "singerGenre": param_types.ProtoEnum(singer_pb2.Genre),
+            },
+            # column_info is an optional parameter and is used to deserialize
+            # the proto message and enum objects back from bytes and
+            # int respectively.
+            # If column_info is not passed for proto messages and enums, then
+            # the data types for these columns will be bytes and int
+            # respectively.
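+            # (Clarifying note on the mapping below: keys are column names;
+            # values are a message *instance* for proto message columns and
+            # the enum *class* for enum columns, reused for the corresponding
+            # ARRAY columns as well.)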
+ column_info={ + "SingerInfo": singer_pb2.SingerInfo(), + "SingerInfoArray": singer_pb2.SingerInfo(), + "SingerGenre": singer_pb2.Genre, + "SingerGenreArray": singer_pb2.Genre, + }, + ) + + for row in results: + print( + "SingerId: {}, SingerInfo: {}, SingerInfoNationality: {}, " + "SingerInfoArray: {}, SingerGenre: {}, SingerGenreArray: {}".format( + *row + ) + ) + # [END spanner_query_with_proto_types_parameter] + + +# [START spanner_database_add_split_points] +def add_split_points(instance_id, database_id): + """Adds split points to table and index.""" + + from google.cloud.spanner_admin_database_v1.types import spanner_database_admin + + spanner_client = spanner.Client() + database_admin_api = spanner_client.database_admin_api + + request = spanner_database_admin.UpdateDatabaseDdlRequest( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + statements=[ + "CREATE INDEX IF NOT EXISTS SingersByFirstLastName ON Singers(FirstName, LastName)" + ], + ) + + operation = database_admin_api.update_database_ddl(request) + + print("Waiting for operation to complete...") + operation.result(OPERATION_TIMEOUT_SECONDS) + + print("Added the SingersByFirstLastName index.") + + addSplitPointRequest = spanner_database_admin.AddSplitPointsRequest( + database=database_admin_api.database_path( + spanner_client.project, instance_id, database_id + ), + # Table split + # Index split without table key part + # Index split with table key part: first key is the index key and second the table key + split_points=[ + spanner_database_admin.SplitPoints( + table="Singers", + keys=[ + spanner_database_admin.SplitPoints.Key( + key_parts=struct_pb2.ListValue( + values=[struct_pb2.Value(string_value="42")] + ) + ) + ], + ), + spanner_database_admin.SplitPoints( + index="SingersByFirstLastName", + keys=[ + spanner_database_admin.SplitPoints.Key( + key_parts=struct_pb2.ListValue( + values=[ + struct_pb2.Value(string_value="John"), + struct_pb2.Value(string_value="Doe"), + ] + ) + ) + ], + ), + spanner_database_admin.SplitPoints( + index="SingersByFirstLastName", + keys=[ + spanner_database_admin.SplitPoints.Key( + key_parts=struct_pb2.ListValue( + values=[ + struct_pb2.Value(string_value="Jane"), + struct_pb2.Value(string_value="Doe"), + ] + ) + ), + spanner_database_admin.SplitPoints.Key( + key_parts=struct_pb2.ListValue( + values=[struct_pb2.Value(string_value="38")] + ) + ), + ], + ), + ], + ) + + operation = database_admin_api.add_split_points(addSplitPointRequest) + + print("Added split points.") + + +# [END spanner_database_add_split_points] + + +if __name__ == "__main__": # noqa: C901 + parser = argparse.ArgumentParser( + description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter + ) + parser.add_argument("instance_id", help="Your Cloud Spanner instance ID.") + parser.add_argument( + "--database-id", help="Your Cloud Spanner database ID.", default="example_db" + ) + + subparsers = parser.add_subparsers(dest="command") + subparsers.add_parser("create_instance", help=create_instance.__doc__) + subparsers.add_parser("update_instance", help=update_instance.__doc__) + subparsers.add_parser("create_database", help=create_database.__doc__) + subparsers.add_parser("insert_data", help=insert_data.__doc__) + subparsers.add_parser("batch_write", help=batch_write.__doc__) + subparsers.add_parser("delete_data", help=delete_data.__doc__) + subparsers.add_parser("query_data", help=query_data.__doc__) + subparsers.add_parser("read_data", help=read_data.__doc__) + 
subparsers.add_parser("read_stale_data", help=read_stale_data.__doc__) + subparsers.add_parser("add_column", help=add_column.__doc__) + subparsers.add_parser("update_data", help=update_data.__doc__) + subparsers.add_parser("set_max_commit_delay", help=set_max_commit_delay.__doc__) + subparsers.add_parser( + "set_transaction_timeout", help=set_transaction_timeout.__doc__ + ) + subparsers.add_parser("set_statement_timeout", help=set_statement_timeout.__doc__) + subparsers.add_parser( + "query_data_with_new_column", help=query_data_with_new_column.__doc__ + ) + subparsers.add_parser("read_write_transaction", help=read_write_transaction.__doc__) + subparsers.add_parser("read_only_transaction", help=read_only_transaction.__doc__) + subparsers.add_parser("add_index", help=add_index.__doc__) + query_data_with_index_parser = subparsers.add_parser( + "query_data_with_index", help=query_data_with_index.__doc__ + ) + query_data_with_index_parser.add_argument("--start_title", default="Aardvark") + query_data_with_index_parser.add_argument("--end_title", default="Goo") + subparsers.add_parser("read_data_with_index", help=read_data_with_index.__doc__) + subparsers.add_parser("add_storing_index", help=add_storing_index.__doc__) + subparsers.add_parser( + "read_data_with_storing_index", help=read_data_with_storing_index.__doc__ + ) + subparsers.add_parser( + "create_table_with_timestamp", help=create_table_with_timestamp.__doc__ + ) + subparsers.add_parser( + "insert_data_with_timestamp", help=insert_data_with_timestamp.__doc__ + ) + subparsers.add_parser("add_timestamp_column", help=add_timestamp_column.__doc__) + subparsers.add_parser( + "update_data_with_timestamp", help=update_data_with_timestamp.__doc__ + ) + subparsers.add_parser( + "query_data_with_timestamp", help=query_data_with_timestamp.__doc__ + ) + subparsers.add_parser("write_struct_data", help=write_struct_data.__doc__) + subparsers.add_parser("query_with_struct", help=query_with_struct.__doc__) + subparsers.add_parser( + "query_with_array_of_struct", help=query_with_array_of_struct.__doc__ + ) + subparsers.add_parser("query_struct_field", help=query_struct_field.__doc__) + subparsers.add_parser( + "query_nested_struct_field", help=query_nested_struct_field.__doc__ + ) + subparsers.add_parser("insert_data_with_dml", help=insert_data_with_dml.__doc__) + subparsers.add_parser("log_commit_stats", help=log_commit_stats.__doc__) + subparsers.add_parser("update_data_with_dml", help=update_data_with_dml.__doc__) + subparsers.add_parser( + "update_data_with_dml_returning", help=update_data_with_dml_returning.__doc__ + ) + subparsers.add_parser("delete_data_with_dml", help=delete_data_with_dml.__doc__) + subparsers.add_parser( + "delete_data_with_dml_returning", help=delete_data_with_dml_returning.__doc__ + ) + subparsers.add_parser( + "update_data_with_dml_timestamp", help=update_data_with_dml_timestamp.__doc__ + ) + subparsers.add_parser( + "dml_write_read_transaction", help=dml_write_read_transaction.__doc__ + ) + subparsers.add_parser( + "update_data_with_dml_struct", help=update_data_with_dml_struct.__doc__ + ) + subparsers.add_parser("insert_with_dml", help=insert_with_dml.__doc__) + subparsers.add_parser( + "insert_with_dml_returning", help=insert_with_dml_returning.__doc__ + ) + subparsers.add_parser( + "query_data_with_parameter", help=query_data_with_parameter.__doc__ + ) + subparsers.add_parser( + "write_with_dml_transaction", help=write_with_dml_transaction.__doc__ + ) + subparsers.add_parser( + "update_data_with_partitioned_dml", + 
help=update_data_with_partitioned_dml.__doc__, + ) + subparsers.add_parser( + "delete_data_with_partitioned_dml", + help=delete_data_with_partitioned_dml.__doc__, + ) + subparsers.add_parser("update_with_batch_dml", help=update_with_batch_dml.__doc__) + subparsers.add_parser( + "create_table_with_datatypes", help=create_table_with_datatypes.__doc__ + ) + subparsers.add_parser("insert_datatypes_data", help=insert_datatypes_data.__doc__) + subparsers.add_parser("query_data_with_array", help=query_data_with_array.__doc__) + subparsers.add_parser("query_data_with_bool", help=query_data_with_bool.__doc__) + subparsers.add_parser("query_data_with_bytes", help=query_data_with_bytes.__doc__) + subparsers.add_parser("query_data_with_date", help=query_data_with_date.__doc__) + subparsers.add_parser("query_data_with_float", help=query_data_with_float.__doc__) + subparsers.add_parser("query_data_with_int", help=query_data_with_int.__doc__) + subparsers.add_parser("query_data_with_string", help=query_data_with_string.__doc__) + subparsers.add_parser( + "query_data_with_timestamp_parameter", + help=query_data_with_timestamp_parameter.__doc__, + ) + subparsers.add_parser( + "query_data_with_query_options", help=query_data_with_query_options.__doc__ + ) + subparsers.add_parser( + "create_client_with_query_options", + help=create_client_with_query_options.__doc__, + ) + subparsers.add_parser( + "add_and_drop_database_roles", help=add_and_drop_database_roles.__doc__ + ) + subparsers.add_parser( + "read_data_with_database_role", help=read_data_with_database_role.__doc__ + ) + subparsers.add_parser("list_database_roles", help=list_database_roles.__doc__) + subparsers.add_parser("create_sequence", help=create_sequence.__doc__) + subparsers.add_parser("alter_sequence", help=alter_sequence.__doc__) + subparsers.add_parser("drop_sequence", help=drop_sequence.__doc__) + + enable_fine_grained_access_parser = subparsers.add_parser( + "enable_fine_grained_access", help=enable_fine_grained_access.__doc__ + ) + enable_fine_grained_access_parser.add_argument( + "--iam_member", default="user:alice@example.com" + ) + enable_fine_grained_access_parser.add_argument( + "--database_role", default="new_parent" + ) + enable_fine_grained_access_parser.add_argument("--title", default="condition title") + subparsers.add_parser("directed_read_options", help=directed_read_options.__doc__) + subparsers.add_parser( + "isolation_level_options", help=isolation_level_options.__doc__ + ) + subparsers.add_parser( + "set_custom_timeout_and_retry", help=set_custom_timeout_and_retry.__doc__ + ) + subparsers.add_parser("add_proto_type_columns", help=add_proto_type_columns.__doc__) + subparsers.add_parser( + "update_data_with_proto_types", help=update_data_with_proto_types.__doc__ + ) + subparsers.add_parser( + "update_data_with_proto_types_with_dml", + help=update_data_with_proto_types_with_dml.__doc__, + ) + subparsers.add_parser( + "query_data_with_proto_types_parameter", + help=query_data_with_proto_types_parameter.__doc__, + ) + subparsers.add_parser( + "add_split_points", + help=add_split_points.__doc__, + ) + + args = parser.parse_args() + + if args.command == "create_instance": + create_instance(args.instance_id) + if args.command == "update_instance": + update_instance(args.instance_id) + elif args.command == "create_database": + create_database(args.instance_id, args.database_id) + elif args.command == "insert_data": + insert_data(args.instance_id, args.database_id) + elif args.command == "batch_write": + batch_write(args.instance_id, 
args.database_id) + elif args.command == "delete_data": + delete_data(args.instance_id, args.database_id) + elif args.command == "query_data": + query_data(args.instance_id, args.database_id) + elif args.command == "read_data": + read_data(args.instance_id, args.database_id) + elif args.command == "read_stale_data": + read_stale_data(args.instance_id, args.database_id) + elif args.command == "add_column": + add_column(args.instance_id, args.database_id) + elif args.command == "update_data": + update_data(args.instance_id, args.database_id) + elif args.command == "set_max_commit_delay": + set_max_commit_delay(args.instance_id, args.database_id) + elif args.command == "set_transaction_timeout": + set_transaction_timeout(args.instance_id, args.database_id) + elif args.command == "set_statement_timeout": + set_statement_timeout(args.instance_id, args.database_id) + elif args.command == "query_data_with_new_column": + query_data_with_new_column(args.instance_id, args.database_id) + elif args.command == "read_write_transaction": + read_write_transaction(args.instance_id, args.database_id) + elif args.command == "read_only_transaction": + read_only_transaction(args.instance_id, args.database_id) + elif args.command == "add_index": + add_index(args.instance_id, args.database_id) + elif args.command == "query_data_with_index": + query_data_with_index( + args.instance_id, args.database_id, args.start_title, args.end_title + ) + elif args.command == "read_data_with_index": + read_data_with_index(args.instance_id, args.database_id) + elif args.command == "add_storing_index": + add_storing_index(args.instance_id, args.database_id) + elif args.command == "read_data_with_storing_index": + read_data_with_storing_index(args.instance_id, args.database_id) + elif args.command == "create_table_with_timestamp": + create_table_with_timestamp(args.instance_id, args.database_id) + elif args.command == "insert_data_with_timestamp": + insert_data_with_timestamp(args.instance_id, args.database_id) + elif args.command == "add_timestamp_column": + add_timestamp_column(args.instance_id, args.database_id) + elif args.command == "update_data_with_timestamp": + update_data_with_timestamp(args.instance_id, args.database_id) + elif args.command == "query_data_with_timestamp": + query_data_with_timestamp(args.instance_id, args.database_id) + elif args.command == "write_struct_data": + write_struct_data(args.instance_id, args.database_id) + elif args.command == "query_with_struct": + query_with_struct(args.instance_id, args.database_id) + elif args.command == "query_with_array_of_struct": + query_with_array_of_struct(args.instance_id, args.database_id) + elif args.command == "query_struct_field": + query_struct_field(args.instance_id, args.database_id) + elif args.command == "query_nested_struct_field": + query_nested_struct_field(args.instance_id, args.database_id) + elif args.command == "insert_data_with_dml": + insert_data_with_dml(args.instance_id, args.database_id) + elif args.command == "log_commit_stats": + log_commit_stats(args.instance_id, args.database_id) + elif args.command == "update_data_with_dml": + update_data_with_dml(args.instance_id, args.database_id) + elif args.command == "update_data_with_dml_returning": + update_data_with_dml_returning(args.instance_id, args.database_id) + elif args.command == "delete_data_with_dml": + delete_data_with_dml(args.instance_id, args.database_id) + elif args.command == "delete_data_with_dml_returning": + delete_data_with_dml_returning(args.instance_id, args.database_id) + 
elif args.command == "update_data_with_dml_timestamp": + update_data_with_dml_timestamp(args.instance_id, args.database_id) + elif args.command == "dml_write_read_transaction": + dml_write_read_transaction(args.instance_id, args.database_id) + elif args.command == "update_data_with_dml_struct": + update_data_with_dml_struct(args.instance_id, args.database_id) + elif args.command == "insert_with_dml": + insert_with_dml(args.instance_id, args.database_id) + elif args.command == "insert_with_dml_returning": + insert_with_dml_returning(args.instance_id, args.database_id) + elif args.command == "query_data_with_parameter": + query_data_with_parameter(args.instance_id, args.database_id) + elif args.command == "write_with_dml_transaction": + write_with_dml_transaction(args.instance_id, args.database_id) + elif args.command == "update_data_with_partitioned_dml": + update_data_with_partitioned_dml(args.instance_id, args.database_id) + elif args.command == "delete_data_with_partitioned_dml": + delete_data_with_partitioned_dml(args.instance_id, args.database_id) + elif args.command == "update_with_batch_dml": + update_with_batch_dml(args.instance_id, args.database_id) + elif args.command == "create_table_with_datatypes": + create_table_with_datatypes(args.instance_id, args.database_id) + elif args.command == "insert_datatypes_data": + insert_datatypes_data(args.instance_id, args.database_id) + elif args.command == "query_data_with_array": + query_data_with_array(args.instance_id, args.database_id) + elif args.command == "query_data_with_bool": + query_data_with_bool(args.instance_id, args.database_id) + elif args.command == "query_data_with_bytes": + query_data_with_bytes(args.instance_id, args.database_id) + elif args.command == "query_data_with_date": + query_data_with_date(args.instance_id, args.database_id) + elif args.command == "query_data_with_float": + query_data_with_float(args.instance_id, args.database_id) + elif args.command == "query_data_with_int": + query_data_with_int(args.instance_id, args.database_id) + elif args.command == "query_data_with_string": + query_data_with_string(args.instance_id, args.database_id) + elif args.command == "query_data_with_timestamp_parameter": + query_data_with_timestamp_parameter(args.instance_id, args.database_id) + elif args.command == "query_data_with_query_options": + query_data_with_query_options(args.instance_id, args.database_id) + elif args.command == "create_client_with_query_options": + create_client_with_query_options(args.instance_id, args.database_id) + elif args.command == "add_and_drop_database_roles": + add_and_drop_database_roles(args.instance_id, args.database_id) + elif args.command == "read_data_with_database_role": + read_data_with_database_role(args.instance_id, args.database_id) + elif args.command == "list_database_roles": + list_database_roles(args.instance_id, args.database_id) + elif args.command == "enable_fine_grained_access": + enable_fine_grained_access( + args.instance_id, + args.database_id, + args.iam_member, + args.database_role, + args.title, + ) + elif args.command == "directed_read_options": + directed_read_options(args.instance_id, args.database_id) + elif args.command == "isolation_level_options": + isolation_level_options(args.instance_id, args.database_id) + elif args.command == "set_custom_timeout_and_retry": + set_custom_timeout_and_retry(args.instance_id, args.database_id) + elif args.command == "create_instance_with_autoscaling_config": + create_instance_with_autoscaling_config(args.instance_id) + elif 
args.command == "add_proto_type_columns": + add_proto_type_columns(args.instance_id, args.database_id) + elif args.command == "update_data_with_proto_types": + update_data_with_proto_types(args.instance_id, args.database_id) + elif args.command == "update_data_with_proto_types_with_dml": + update_data_with_proto_types_with_dml(args.instance_id, args.database_id) + elif args.command == "query_data_with_proto_types_parameter": + query_data_with_proto_types_parameter(args.instance_id, args.database_id) + elif args.command == "add_split_points": + add_split_points(args.instance_id, args.database_id) diff --git a/samples/samples/snippets_test.py b/samples/samples/snippets_test.py new file mode 100644 index 0000000000..03c9f2682c --- /dev/null +++ b/samples/samples/snippets_test.py @@ -0,0 +1,1047 @@ +# Copyright 2016 Google, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import time +import uuid + +from google.api_core import exceptions +from google.cloud import spanner +from google.cloud.spanner_admin_database_v1.types.common import DatabaseDialect +import pytest +from test_utils.retry import RetryErrors + +import snippets + +CREATE_TABLE_SINGERS = """\ +CREATE TABLE Singers ( + SingerId INT64 NOT NULL, + FirstName STRING(1024), + LastName STRING(1024), + SingerInfo BYTES(MAX), + FullName STRING(2048) AS ( + ARRAY_TO_STRING([FirstName, LastName], " ") + ) STORED +) PRIMARY KEY (SingerId) +""" + +CREATE_TABLE_ALBUMS = """\ +CREATE TABLE Albums ( + SingerId INT64 NOT NULL, + AlbumId INT64 NOT NULL, + AlbumTitle STRING(MAX) +) PRIMARY KEY (SingerId, AlbumId), +INTERLEAVE IN PARENT Singers ON DELETE CASCADE +""" + +CREATE_TABLE_SINGERS_ = """\ +CREATE TABLE Singers ( + SingerId INT64 NOT NULL, + FirstName STRING(1024), + LastName STRING(1024), + ) PRIMARY KEY (SingerId) +""" + +retry_429 = RetryErrors(exceptions.ResourceExhausted, delay=15) + + +@pytest.fixture(scope="module") +def sample_name(): + return "snippets" + + +@pytest.fixture(scope="module") +def database_dialect(): + """Spanner dialect to be used for this sample. + + The dialect is used to initialize the dialect for the database. + It can either be GoogleStandardSql or PostgreSql. 
+ """ + return DatabaseDialect.GOOGLE_STANDARD_SQL + + +@pytest.fixture(scope="module") +def create_instance_id(): + """Id for the low-cost instance.""" + return f"create-instance-{uuid.uuid4().hex[:10]}" + + +@pytest.fixture(scope="module") +def lci_instance_id(): + """Id for the low-cost instance.""" + return f"lci-instance-{uuid.uuid4().hex[:10]}" + + +@pytest.fixture(scope="module") +def instance_partition_instance_id(): + """Id for the instance that tests instance partitions.""" + return f"instance-partition-test-{uuid.uuid4().hex[:10]}" + + +@pytest.fixture(scope="module") +def database_id(): + return f"test-db-{uuid.uuid4().hex[:10]}" + + +@pytest.fixture(scope="module") +def create_database_id(): + return f"create-db-{uuid.uuid4().hex[:10]}" + + +@pytest.fixture(scope="module") +def cmek_database_id(): + return f"cmek-db-{uuid.uuid4().hex[:10]}" + + +@pytest.fixture(scope="module") +def default_leader_database_id(): + return f"leader_db_{uuid.uuid4().hex[:10]}" + + +@pytest.fixture(scope="module") +def proto_columns_database_id(): + return f"test-db-proto-{uuid.uuid4().hex[:10]}" + + +@pytest.fixture(scope="module") +def database_ddl(): + """Sequence of DDL statements used to set up the database. + + Sample testcase modules can override as needed. + """ + return [CREATE_TABLE_SINGERS, CREATE_TABLE_ALBUMS] + + +@pytest.fixture(scope="module") +def proto_columns_database_ddl(): + """Sequence of DDL statements used to set up the database for proto columns. + + Sample testcase modules can override as needed. + """ + return [CREATE_TABLE_SINGERS_, CREATE_TABLE_ALBUMS] + + +@pytest.fixture(scope="module") +def default_leader(): + """Default leader for multi-region instances.""" + return "us-east4" + + +@pytest.fixture(scope="module") +def user_managed_instance_config_name(spanner_client): + name = f"custom-python-samples-config-{uuid.uuid4().hex[:10]}" + yield name + snippets.delete_instance_config( + "{}/instanceConfigs/{}".format(spanner_client.project_name, name) + ) + return + + +@pytest.fixture(scope="module") +def base_instance_config_id(spanner_client): + return "{}/instanceConfigs/{}".format(spanner_client.project_name, "nam7") + + +def test_create_and_update_instance_explicit(spanner_client, create_instance_id): + # Rather than re-use 'sample_isntance', we create a new instance, to + # ensure that the 'create_instance' snippet is tested. + retry_429(snippets.create_instance)(create_instance_id) + # Rather than re-use 'sample_isntance', we are using created instance, to + # ensure that the 'update_instance' snippet is tested. + retry_429(snippets.update_instance)(create_instance_id) + instance = spanner_client.instance(create_instance_id) + retry_429(instance.delete)() + + +def test_create_database_explicit(sample_instance, create_database_id): + # Rather than re-use 'sample_database', we create a new database, to + # ensure that the 'create_database' snippet is tested. 
+ snippets.create_database(sample_instance.instance_id, create_database_id) + database = sample_instance.database(create_database_id) + database.drop() + + +def test_create_instance_with_processing_units(capsys, lci_instance_id): + processing_units = 500 + retry_429(snippets.create_instance_with_processing_units)( + lci_instance_id, + processing_units, + ) + out, _ = capsys.readouterr() + assert lci_instance_id in out + assert "{} processing units".format(processing_units) in out + spanner_client = spanner.Client() + instance = spanner_client.instance(lci_instance_id) + retry_429(instance.delete)() + + +def test_create_instance_with_autoscaling_config(capsys, lci_instance_id): + retry_429(snippets.create_instance_with_autoscaling_config)( + lci_instance_id, + ) + out, _ = capsys.readouterr() + assert lci_instance_id in out + assert "autoscaling config" in out + spanner_client = spanner.Client() + instance = spanner_client.instance(lci_instance_id) + retry_429(instance.delete)() + + +def test_create_and_update_instance_default_backup_schedule_type( + capsys, lci_instance_id +): + retry_429(snippets.create_instance_without_default_backup_schedules)( + lci_instance_id, + ) + create_out, _ = capsys.readouterr() + assert lci_instance_id in create_out + assert "without default backup schedules" in create_out + + retry_429(snippets.update_instance_default_backup_schedule_type)( + lci_instance_id, + ) + update_out, _ = capsys.readouterr() + assert lci_instance_id in update_out + assert "to have default backup schedules" in update_out + spanner_client = spanner.Client() + instance = spanner_client.instance(lci_instance_id) + retry_429(instance.delete)() + + +def test_create_instance_partition(capsys, instance_partition_instance_id): + # Unable to use create_instance since it has editions set where partitions are unsupported. + # The minimal requirement for editions is ENTERPRISE_PLUS for the paritions to get supported. 
+ snippets.create_instance_with_processing_units(instance_partition_instance_id, 1000) + retry_429(snippets.create_instance_partition)( + instance_partition_instance_id, "my-instance-partition" + ) + out, _ = capsys.readouterr() + assert "Created instance partition my-instance-partition" in out + spanner_client = spanner.Client() + instance = spanner_client.instance(instance_partition_instance_id) + retry_429(instance.delete)() + + +def test_update_database(capsys, instance_id, sample_database): + snippets.update_database(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Updated database {}.".format(sample_database.name) in out + + # Cleanup + sample_database.enable_drop_protection = False + op = sample_database.update(["enable_drop_protection"]) + op.result() + + +def test_create_database_with_encryption_config( + capsys, instance_id, cmek_database_id, kms_key_name +): + snippets.create_database_with_encryption_key( + instance_id, cmek_database_id, kms_key_name + ) + out, _ = capsys.readouterr() + assert cmek_database_id in out + assert kms_key_name in out + + +@pytest.mark.skip(reason="skipped since the KMS keys are not added on test " "project") +def test_create_database_with_multiple_kms_keys( + capsys, + multi_region_instance, + multi_region_instance_id, + cmek_database_id, + kms_key_names, +): + snippets.create_database_with_multiple_kms_keys( + multi_region_instance_id, cmek_database_id, kms_key_names + ) + out, _ = capsys.readouterr() + assert cmek_database_id in out + assert kms_key_names[0] in out + assert kms_key_names[1] in out + assert kms_key_names[2] in out + + +def test_get_instance_config(capsys): + instance_config = "nam6" + snippets.get_instance_config(instance_config) + out, _ = capsys.readouterr() + assert instance_config in out + + +def test_list_instance_config(capsys): + snippets.list_instance_config() + out, _ = capsys.readouterr() + assert "regional-us-central1" in out + + +@pytest.mark.dependency(name="create_instance_config") +def test_create_instance_config( + capsys, user_managed_instance_config_name, base_instance_config_id +): + snippets.create_instance_config( + user_managed_instance_config_name, base_instance_config_id + ) + out, _ = capsys.readouterr() + assert "Created instance configuration" in out + + +@pytest.mark.dependency(depends=["create_instance_config"]) +def test_update_instance_config(capsys, user_managed_instance_config_name): + snippets.update_instance_config(user_managed_instance_config_name) + out, _ = capsys.readouterr() + assert "Updated instance configuration" in out + + +@pytest.mark.dependency(depends=["create_instance_config"]) +def test_delete_instance_config(capsys, user_managed_instance_config_name): + spanner_client = spanner.Client() + snippets.delete_instance_config( + "{}/instanceConfigs/{}".format( + spanner_client.project_name, user_managed_instance_config_name + ) + ) + out, _ = capsys.readouterr() + assert "successfully deleted" in out + + +def test_list_instance_config_operations(capsys): + snippets.list_instance_config_operations() + out, _ = capsys.readouterr() + assert "List instance config operations" in out + + +def test_list_databases(capsys, instance_id): + snippets.list_databases(instance_id) + out, _ = capsys.readouterr() + assert "has default leader" in out + + +def test_create_database_with_default_leader( + capsys, + multi_region_instance, + multi_region_instance_id, + default_leader_database_id, + default_leader, +): + retry_429 = RetryErrors(exceptions.ResourceExhausted, 
delay=15) + retry_429(snippets.create_database_with_default_leader)( + multi_region_instance_id, default_leader_database_id, default_leader + ) + out, _ = capsys.readouterr() + assert default_leader_database_id in out + assert default_leader in out + + +def test_update_database_with_default_leader( + capsys, + multi_region_instance, + multi_region_instance_id, + default_leader_database_id, + default_leader, +): + retry_429 = RetryErrors(exceptions.ResourceExhausted, delay=15) + retry_429(snippets.update_database_with_default_leader)( + multi_region_instance_id, default_leader_database_id, default_leader + ) + out, _ = capsys.readouterr() + assert default_leader_database_id in out + assert default_leader in out + + +def test_get_database_ddl(capsys, instance_id, sample_database): + snippets.get_database_ddl(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert sample_database.database_id in out + + +def test_query_information_schema_database_options( + capsys, + multi_region_instance, + multi_region_instance_id, + default_leader_database_id, + default_leader, +): + snippets.query_information_schema_database_options( + multi_region_instance_id, default_leader_database_id + ) + out, _ = capsys.readouterr() + assert default_leader in out + + +@pytest.mark.dependency(name="insert_data") +def test_insert_data(capsys, instance_id, sample_database): + snippets.insert_data(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Inserted data" in out + + +@pytest.mark.dependency(name="batch_write") +def test_batch_write(capsys, instance_id, sample_database): + snippets.batch_write(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "could not be applied with error" not in out + + +@pytest.mark.dependency(depends=["insert_data"]) +def test_delete_data(capsys, instance_id, sample_database): + snippets.delete_data(instance_id, sample_database.database_id) + # put it back for other tests + snippets.insert_data(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Deleted data" in out + + +@pytest.mark.dependency(depends=["insert_data"]) +def test_query_data(capsys, instance_id, sample_database): + snippets.query_data(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "SingerId: 1, AlbumId: 1, AlbumTitle: Total Junk" in out + + +@pytest.mark.dependency(name="add_column", depends=["insert_data"]) +def test_add_column(capsys, instance_id, sample_database): + snippets.add_column(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Added the MarketingBudget column." in out + + +@pytest.mark.dependency(depends=["insert_data"]) +def test_read_data(capsys, instance_id, sample_database): + snippets.read_data(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "SingerId: 1, AlbumId: 1, AlbumTitle: Total Junk" in out + + +@pytest.mark.dependency(name="update_data", depends=["add_column"]) +def test_update_data(capsys, instance_id, sample_database): + # Sleep for 15 seconds to ensure previous inserts will be + # 'stale' by the time test_read_stale_data is run. + time.sleep(15) + + snippets.update_data(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Updated data." 
in out + + +@pytest.mark.dependency(depends=["update_data"]) +def test_read_stale_data(capsys, instance_id, sample_database): + # This snippet relies on test_update_data inserting data + # at least 15 seconds after the previous insert + snippets.read_stale_data(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "SingerId: 1, AlbumId: 1, MarketingBudget: None" in out + + +@pytest.mark.dependency(depends=["add_column"]) +def test_read_write_transaction(capsys, instance_id, sample_database): + snippets.read_write_transaction(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Transaction complete" in out + + +@pytest.mark.dependency(depends=["add_column"]) +def test_query_data_with_new_column(capsys, instance_id, sample_database): + snippets.query_data_with_new_column(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "SingerId: 1, AlbumId: 1, MarketingBudget: 300000" in out + assert "SingerId: 2, AlbumId: 2, MarketingBudget: 300000" in out + + +@pytest.mark.dependency(name="add_index", depends=["insert_data"]) +def test_add_index(capsys, instance_id, sample_database): + snippets.add_index(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Added the AlbumsByAlbumTitle index" in out + + +@pytest.mark.dependency(depends=["add_index"]) +def test_query_data_with_index(capsys, instance_id, sample_database): + snippets.query_data_with_index(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Go, Go, Go" in out + assert "Forever Hold Your Peace" in out + assert "Green" not in out + + +@pytest.mark.dependency(depends=["add_index"]) +def test_read_data_with_index(capsys, instance_id, sample_database): + snippets.read_data_with_index(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Go, Go, Go" in out + assert "Forever Hold Your Peace" in out + assert "Green" in out + + +@pytest.mark.dependency(name="add_storing_index", depends=["insert_data"]) +def test_add_storing_index(capsys, instance_id, sample_database): + snippets.add_storing_index(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Added the AlbumsByAlbumTitle2 index." 
in out + + +@pytest.mark.dependency(depends=["add_storing_index"]) +def test_read_data_with_storing_index(capsys, instance_id, sample_database): + snippets.read_data_with_storing_index(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "300000" in out + + +@pytest.mark.dependency(depends=["insert_data"]) +def test_read_only_transaction(capsys, instance_id, sample_database): + snippets.read_only_transaction(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + # Snippet does two reads, so entry should be listed twice + assert out.count("SingerId: 1, AlbumId: 1, AlbumTitle: Total Junk") == 2 + + +@pytest.mark.dependency(name="add_timestamp_column", depends=["insert_data"]) +def test_add_timestamp_column(capsys, instance_id, sample_database): + snippets.add_timestamp_column(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert 'Altered table "Albums" on database ' in out + + +@pytest.mark.dependency(depends=["add_timestamp_column"]) +def test_update_data_with_timestamp(capsys, instance_id, sample_database): + snippets.update_data_with_timestamp(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Updated data" in out + + +@pytest.mark.dependency(depends=["add_timestamp_column"]) +def test_query_data_with_timestamp(capsys, instance_id, sample_database): + snippets.query_data_with_timestamp(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "SingerId: 1, AlbumId: 1, MarketingBudget: 1000000" in out + assert "SingerId: 2, AlbumId: 2, MarketingBudget: 750000" in out + + +@pytest.mark.dependency(name="create_table_with_timestamp") +def test_create_table_with_timestamp(capsys, instance_id, sample_database): + snippets.create_table_with_timestamp(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Created Performances table on database" in out + + +@pytest.mark.dependency(depends=["create_table_with_timestamp"]) +def test_insert_data_with_timestamp(capsys, instance_id, sample_database): + snippets.insert_data_with_timestamp(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Inserted data." 
in out + + +@pytest.mark.dependency(name="write_struct_data") +def test_write_struct_data(capsys, instance_id, sample_database): + snippets.write_struct_data(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Inserted sample data for STRUCT queries" in out + + +@pytest.mark.dependency(depends=["write_struct_data"]) +def test_query_with_struct(capsys, instance_id, sample_database): + snippets.query_with_struct(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "SingerId: 6" in out + + +@pytest.mark.dependency(depends=["write_struct_data"]) +def test_query_with_array_of_struct(capsys, instance_id, sample_database): + snippets.query_with_array_of_struct(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "SingerId: 8" in out + assert "SingerId: 7" in out + assert "SingerId: 6" in out + + +@pytest.mark.dependency(depends=["write_struct_data"]) +def test_query_struct_field(capsys, instance_id, sample_database): + snippets.query_struct_field(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "SingerId: 6" in out + + +@pytest.mark.dependency(depends=["write_struct_data"]) +def test_query_nested_struct_field(capsys, instance_id, sample_database): + snippets.query_nested_struct_field(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "SingerId: 6 SongName: Imagination" in out + assert "SingerId: 9 SongName: Imagination" in out + + +@pytest.mark.dependency(name="insert_data_with_dml") +def test_insert_data_with_dml(capsys, instance_id, sample_database): + snippets.insert_data_with_dml(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "1 record(s) inserted." in out + + +@pytest.mark.dependency(name="log_commit_stats") +def test_log_commit_stats(capsys, instance_id, sample_database): + snippets.log_commit_stats(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "1 record(s) inserted." in out + # SingerId, FirstName, and LastName plus FullName which is generated. + assert "4 mutation(s) in transaction." in out + + +@pytest.mark.dependency(name="set_max_commit_delay") +def test_set_max_commit_delay(capsys, instance_id, sample_database): + snippets.set_max_commit_delay(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "1 record(s) inserted." in out + + +@pytest.mark.dependency(depends=["insert_data"]) +def test_update_data_with_dml(capsys, instance_id, sample_database): + snippets.update_data_with_dml(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "1 record(s) updated." in out + + +@pytest.mark.dependency(depends=["add_column"]) +def test_update_data_with_dml_returning(capsys, instance_id, sample_database): + snippets.update_data_with_dml_returning(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "1 record(s) updated." in out + + +@pytest.mark.dependency(depends=["insert_data"]) +def test_delete_data_with_dml(capsys, instance_id, sample_database): + snippets.delete_data_with_dml(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "1 record(s) deleted." in out + + +@pytest.mark.dependency(depends=["insert_data"]) +def test_delete_data_with_dml_returning(capsys, instance_id, sample_database): + snippets.delete_data_with_dml_returning(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "1 record(s) deleted." 
in out
+
+
+@pytest.mark.dependency(depends=["add_timestamp_column"])
+def test_update_data_with_dml_timestamp(capsys, instance_id, sample_database):
+    snippets.update_data_with_dml_timestamp(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "2 record(s) updated." in out
+
+
+@pytest.mark.dependency(name="dml_write_read_transaction")
+def test_dml_write_read_transaction(capsys, instance_id, sample_database):
+    snippets.dml_write_read_transaction(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "1 record(s) inserted." in out
+    assert "FirstName: Timothy, LastName: Campbell" in out
+
+
+@pytest.mark.dependency(depends=["dml_write_read_transaction"])
+def test_update_data_with_dml_struct(capsys, instance_id, sample_database):
+    snippets.update_data_with_dml_struct(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "1 record(s) updated" in out
+
+
+@pytest.mark.dependency(name="insert_with_dml")
+def test_insert_with_dml(capsys, instance_id, sample_database):
+    snippets.insert_with_dml(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "4 record(s) inserted" in out
+
+
+@pytest.mark.dependency(depends=["insert_with_dml"])
+def test_insert_with_dml_returning(capsys, instance_id, sample_database):
+    snippets.insert_with_dml_returning(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "4 record(s) inserted" in out
+
+
+@pytest.mark.dependency(depends=["insert_with_dml"])
+def test_query_data_with_parameter(capsys, instance_id, sample_database):
+    snippets.query_data_with_parameter(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "SingerId: 12, FirstName: Melissa, LastName: Garcia" in out
+
+
+@pytest.mark.dependency(depends=["add_column"])
+def test_write_with_dml_transaction(capsys, instance_id, sample_database):
+    snippets.write_with_dml_transaction(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "Transferred 200000 from Album2's budget to Album1's" in out
+
+
+@pytest.mark.dependency(depends=["add_column"])
+def test_update_data_with_partitioned_dml(capsys, instance_id, sample_database):
+    snippets.update_data_with_partitioned_dml(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "3 records updated" in out
+
+
+@pytest.mark.dependency(
+    depends=[
+        "insert_with_dml",
+        "dml_write_read_transaction",
+        "log_commit_stats",
+        "set_max_commit_delay",
+    ]
+)
+def test_delete_data_with_partitioned_dml(capsys, instance_id, sample_database):
+    snippets.delete_data_with_partitioned_dml(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "7 record(s) deleted" in out
+
+
+@pytest.mark.dependency(depends=["add_column"])
+def test_update_with_batch_dml(capsys, instance_id, sample_database):
+    snippets.update_with_batch_dml(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "Executed 2 SQL statements using Batch DML" in out
+
+
+@pytest.mark.dependency(name="create_table_with_datatypes")
+def test_create_table_with_datatypes(capsys, instance_id, sample_database):
+    snippets.create_table_with_datatypes(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "Created Venues table on database" in out
+
+
+@pytest.mark.dependency(
+    name="insert_datatypes_data",
+    depends=["create_table_with_datatypes"],
+)
+def test_insert_datatypes_data(capsys, instance_id, sample_database):
+
snippets.insert_datatypes_data(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Inserted data." in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_query_data_with_array(capsys, instance_id, sample_database): + snippets.query_data_with_array(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "VenueId: 19, VenueName: Venue 19, AvailableDate: 2020-11-01" in out + assert "VenueId: 42, VenueName: Venue 42, AvailableDate: 2020-10-01" in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_query_data_with_bool(capsys, instance_id, sample_database): + snippets.query_data_with_bool(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "VenueId: 19, VenueName: Venue 19, OutdoorVenue: True" in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_query_data_with_bytes(capsys, instance_id, sample_database): + snippets.query_data_with_bytes(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "VenueId: 4, VenueName: Venue 4" in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_query_data_with_date(capsys, instance_id, sample_database): + snippets.query_data_with_date(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "VenueId: 4, VenueName: Venue 4, LastContactDate: 2018-09-02" in out + assert "VenueId: 42, VenueName: Venue 42, LastContactDate: 2018-10-01" in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_query_data_with_float(capsys, instance_id, sample_database): + snippets.query_data_with_float(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "VenueId: 4, VenueName: Venue 4, PopularityScore: 0.8" in out + assert "VenueId: 19, VenueName: Venue 19, PopularityScore: 0.9" in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_query_data_with_int(capsys, instance_id, sample_database): + snippets.query_data_with_int(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "VenueId: 19, VenueName: Venue 19, Capacity: 6300" in out + assert "VenueId: 42, VenueName: Venue 42, Capacity: 3000" in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_query_data_with_string(capsys, instance_id, sample_database): + snippets.query_data_with_string(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "VenueId: 42, VenueName: Venue 42" in out + + +@pytest.mark.dependency( + name="add_numeric_column", + depends=["create_table_with_datatypes"], +) +def test_add_numeric_column(capsys, instance_id, sample_database): + snippets.add_numeric_column(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert 'Altered table "Venues" on database ' in out + + +@pytest.mark.dependency(depends=["add_numeric_column", "insert_datatypes_data"]) +def test_update_data_with_numeric(capsys, instance_id, sample_database): + snippets.update_data_with_numeric(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Updated data" in out + + +@pytest.mark.dependency(depends=["add_numeric_column"]) +def test_query_data_with_numeric_parameter(capsys, instance_id, sample_database): + snippets.query_data_with_numeric_parameter(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "VenueId: 4, Revenue: 35000" in out + + +@pytest.mark.dependency( + 
name="add_json_column", + depends=["create_table_with_datatypes"], +) +def test_add_json_column(capsys, instance_id, sample_database): + snippets.add_json_column(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert 'Altered table "Venues" on database ' in out + + +@pytest.mark.dependency(depends=["add_json_column", "insert_datatypes_data"]) +def test_update_data_with_json(capsys, instance_id, sample_database): + snippets.update_data_with_json(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Updated data" in out + + +@pytest.mark.dependency(depends=["add_json_column"]) +def test_query_data_with_json_parameter(capsys, instance_id, sample_database): + snippets.query_data_with_json_parameter(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "VenueId: 19, VenueDetails: {'open': True, 'rating': 9}" in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_query_data_with_timestamp_parameter(capsys, instance_id, sample_database): + snippets.query_data_with_timestamp_parameter( + instance_id, sample_database.database_id + ) + out, _ = capsys.readouterr() + assert "VenueId: 4, VenueName: Venue 4, LastUpdateTime:" in out + assert "VenueId: 19, VenueName: Venue 19, LastUpdateTime:" in out + assert "VenueId: 42, VenueName: Venue 42, LastUpdateTime:" in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_query_data_with_query_options(capsys, instance_id, sample_database): + snippets.query_data_with_query_options(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "VenueId: 4, VenueName: Venue 4, LastUpdateTime:" in out + assert "VenueId: 19, VenueName: Venue 19, LastUpdateTime:" in out + assert "VenueId: 42, VenueName: Venue 42, LastUpdateTime:" in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_create_client_with_query_options(capsys, instance_id, sample_database): + snippets.create_client_with_query_options(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "VenueId: 4, VenueName: Venue 4, LastUpdateTime:" in out + assert "VenueId: 19, VenueName: Venue 19, LastUpdateTime:" in out + assert "VenueId: 42, VenueName: Venue 42, LastUpdateTime:" in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_set_transaction_tag(capsys, instance_id, sample_database): + snippets.set_transaction_tag(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "Venue capacities updated." in out + assert "New venue inserted." in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_set_transaction_timeout(capsys, instance_id, sample_database): + snippets.set_transaction_timeout(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "1 record(s) inserted." in out + + +@pytest.mark.dependency(depends=["insert_datatypes_data"]) +def test_set_statement_timeout(capsys, instance_id, sample_database): + snippets.set_statement_timeout(instance_id, sample_database.database_id) + out, _ = capsys.readouterr() + assert "1 record(s) inserted." 
in out
+
+
+@pytest.mark.dependency(depends=["insert_data"])
+def test_set_request_tag(capsys, instance_id, sample_database):
+    snippets.set_request_tag(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "SingerId: 1, AlbumId: 1, AlbumTitle: Total Junk" in out
+
+
+@pytest.mark.dependency(name="add_and_drop_database_roles", depends=["insert_data"])
+def test_add_and_drop_database_roles(capsys, instance_id, sample_database):
+    snippets.add_and_drop_database_roles(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "Created roles new_parent and new_child and granted privileges" in out
+    assert "Revoked privileges and dropped role new_child" in out
+
+
+@pytest.mark.dependency(depends=["add_and_drop_database_roles"])
+def test_read_data_with_database_role(capsys, instance_id, sample_database):
+    snippets.read_data_with_database_role(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "SingerId: 1, FirstName: Marc, LastName: Richards" in out
+
+
+@pytest.mark.dependency(depends=["add_and_drop_database_roles"])
+def test_list_database_roles(capsys, instance_id, sample_database):
+    snippets.list_database_roles(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "new_parent" in out
+
+
+@pytest.mark.dependency(name="create_table_with_foreign_key_delete_cascade")
+def test_create_table_with_foreign_key_delete_cascade(
+    capsys, instance_id, sample_database
+):
+    snippets.create_table_with_foreign_key_delete_cascade(
+        instance_id, sample_database.database_id
+    )
+    out, _ = capsys.readouterr()
+    assert (
+        "Created Customers and ShoppingCarts table with FKShoppingCartsCustomerId"
+        in out
+    )
+
+
+@pytest.mark.dependency(
+    name="alter_table_with_foreign_key_delete_cascade",
+    depends=["create_table_with_foreign_key_delete_cascade"],
+)
+def test_alter_table_with_foreign_key_delete_cascade(
+    capsys, instance_id, sample_database
+):
+    snippets.alter_table_with_foreign_key_delete_cascade(
+        instance_id, sample_database.database_id
+    )
+    out, _ = capsys.readouterr()
+    assert "Altered ShoppingCarts table with FKShoppingCartsCustomerName" in out
+
+
+@pytest.mark.dependency(depends=["alter_table_with_foreign_key_delete_cascade"])
+def test_drop_foreign_key_constraint_delete_cascade(
+    capsys, instance_id, sample_database
+):
+    snippets.drop_foreign_key_constraint_delete_cascade(
+        instance_id, sample_database.database_id
+    )
+    out, _ = capsys.readouterr()
+    assert "Altered ShoppingCarts table to drop FKShoppingCartsCustomerName" in out
+
+
+@pytest.mark.dependency(name="create_sequence")
+def test_create_sequence(capsys, instance_id, bit_reverse_sequence_database):
+    snippets.create_sequence(instance_id, bit_reverse_sequence_database.database_id)
+    out, _ = capsys.readouterr()
+    assert (
+        "Created Seq sequence and Customers table, where the key column CustomerId uses the sequence as a default value on database"
+        in out
+    )
+    assert "Number of customer records inserted is 3" in out
+    assert "Inserted customer record with Customer Id:" in out
+
+
+@pytest.mark.dependency(name="alter_sequence", depends=["create_sequence"])
+def test_alter_sequence(capsys, instance_id, bit_reverse_sequence_database):
+    snippets.alter_sequence(instance_id, bit_reverse_sequence_database.database_id)
+    out, _ = capsys.readouterr()
+    assert (
+        "Altered Seq sequence to skip an inclusive range between 1000 and 5000000 on database"
+        in out
+    )
+    assert "Number of customer records inserted is 3" in out
+    assert "Inserted customer record with Customer Id:" in out
+
+
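+# A minimal illustrative sketch (not one of the sample tests) of how the
+# pytest-dependency markers used throughout this file chain tests together:
+# a test registers a name, and later tests are skipped unless the named test
+# passed first.
+#
+#     @pytest.mark.dependency(name="seed")
+#     def test_seed():
+#         ...
+#
+#     @pytest.mark.dependency(depends=["seed"])  # skipped unless test_seed passed
+#     def test_needs_seed():
+#         ...
+#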
+@pytest.mark.dependency(depends=["alter_sequence"])
+def test_drop_sequence(capsys, instance_id, bit_reverse_sequence_database):
+    snippets.drop_sequence(instance_id, bit_reverse_sequence_database.database_id)
+    out, _ = capsys.readouterr()
+    assert (
+        "Altered Customers table to drop DEFAULT from CustomerId column and dropped the Seq sequence on database"
+        in out
+    )
+
+
+@pytest.mark.dependency(depends=["insert_data"])
+def test_directed_read_options(capsys, instance_id, sample_database):
+    snippets.directed_read_options(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "SingerId: 1, AlbumId: 1, AlbumTitle: Total Junk" in out
+
+
+@pytest.mark.dependency(depends=["insert_data"])
+def test_set_custom_timeout_and_retry(capsys, instance_id, sample_database):
+    snippets.set_custom_timeout_and_retry(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "SingerId: 1, AlbumId: 1, AlbumTitle: Total Junk" in out
+
+
+@pytest.mark.dependency(depends=["insert_data"])
+def test_isolation_level_options(capsys, instance_id, sample_database):
+    snippets.isolation_level_options(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "1 record(s) updated." in out
+
+
+@pytest.mark.dependency(
+    name="add_proto_types_column",
+)
+def test_add_proto_types_column(capsys, instance_id, proto_columns_database):
+    snippets.add_proto_type_columns(instance_id, proto_columns_database.database_id)
+    out, _ = capsys.readouterr()
+    assert 'Altered table "Singers" on database ' in out
+
+    snippets.insert_data(instance_id, proto_columns_database.database_id)
+
+
+@pytest.mark.dependency(
+    name="update_data_with_proto_message", depends=["add_proto_types_column"]
+)
+def test_update_data_with_proto_types(capsys, instance_id, proto_columns_database):
+    snippets.update_data_with_proto_types(
+        instance_id, proto_columns_database.database_id
+    )
+    out, _ = capsys.readouterr()
+    assert "Data updated" in out
+
+    snippets.update_data_with_proto_types_with_dml(
+        instance_id, proto_columns_database.database_id
+    )
+    out, _ = capsys.readouterr()
+    assert "1 record(s) updated." in out
+
+
+@pytest.mark.dependency(
+    depends=["add_proto_types_column", "update_data_with_proto_message"]
+)
+def test_query_data_with_proto_types_parameter(
+    capsys, instance_id, proto_columns_database
+):
+    snippets.query_data_with_proto_types_parameter(
+        instance_id, proto_columns_database.database_id
+    )
+    out, _ = capsys.readouterr()
+    assert "SingerId: 2, SingerInfo: singer_id: 2" in out
+
+
+@pytest.mark.dependency(name="add_split_points", depends=["insert_data"])
+def test_add_split_points(capsys, instance_id, sample_database):
+    snippets.add_split_points(instance_id, sample_database.database_id)
+    out, _ = capsys.readouterr()
+    assert "Added split points." in out
diff --git a/samples/samples/testdata/README.md b/samples/samples/testdata/README.md
new file mode 100644
index 0000000000..b4ff1b649b
--- /dev/null
+++ b/samples/samples/testdata/README.md
@@ -0,0 +1,5 @@
+#### To generate singer_pb2.py and descriptors.pb files from singer.proto using `protoc`
+```shell
+cd samples/samples
+protoc --proto_path=testdata/ --include_imports --descriptor_set_out=testdata/descriptors.pb --python_out=testdata/ testdata/singer.proto
+```
diff --git a/samples/samples/testdata/descriptors.pb b/samples/samples/testdata/descriptors.pb
new file mode 100644
index 0000000000..0536d5004d
Binary files /dev/null and b/samples/samples/testdata/descriptors.pb differ
diff --git a/samples/samples/testdata/singer.proto b/samples/samples/testdata/singer.proto
new file mode 100644
index 0000000000..1a995614a7
--- /dev/null
+++ b/samples/samples/testdata/singer.proto
@@ -0,0 +1,17 @@
+syntax = "proto3";
+
+package examples.spanner.music;
+
+message SingerInfo {
+  optional int64 singer_id = 1;
+  optional string birth_date = 2;
+  optional string nationality = 3;
+  optional Genre genre = 4;
+}
+
+enum Genre {
+  POP = 0;
+  JAZZ = 1;
+  FOLK = 2;
+  ROCK = 3;
+}
diff --git a/samples/samples/testdata/singer_pb2.py b/samples/samples/testdata/singer_pb2.py
new file mode 100644
index 0000000000..286f884163
--- /dev/null
+++ b/samples/samples/testdata/singer_pb2.py
@@ -0,0 +1,28 @@
+# -*- coding: utf-8 -*-
+# Generated by the protocol buffer compiler.  DO NOT EDIT!
+# source: singer.proto
+# Protobuf Python Version: 4.25.1
+"""Generated protocol buffer code."""
+from google.protobuf import descriptor as _descriptor
+from google.protobuf import descriptor_pool as _descriptor_pool
+from google.protobuf import symbol_database as _symbol_database
+from google.protobuf.internal import builder as _builder
+# @@protoc_insertion_point(imports)
+
+_sym_db = _symbol_database.Default()
+
+
+
+
+DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x0csinger.proto\x12\x16\x65xamples.spanner.music\"\xc1\x01\n\nSingerInfo\x12\x16\n\tsinger_id\x18\x01 \x01(\x03H\x00\x88\x01\x01\x12\x17\n\nbirth_date\x18\x02 \x01(\tH\x01\x88\x01\x01\x12\x18\n\x0bnationality\x18\x03 \x01(\tH\x02\x88\x01\x01\x12\x31\n\x05genre\x18\x04 \x01(\x0e\x32\x1d.examples.spanner.music.GenreH\x03\x88\x01\x01\x42\x0c\n\n_singer_idB\r\n\x0b_birth_dateB\x0e\n\x0c_nationalityB\x08\n\x06_genre*.\n\x05Genre\x12\x07\n\x03POP\x10\x00\x12\x08\n\x04JAZZ\x10\x01\x12\x08\n\x04\x46OLK\x10\x02\x12\x08\n\x04ROCK\x10\x03\x62\x06proto3')
+
+_globals = globals()
+_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
+_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'singer_pb2', _globals)
+if _descriptor._USE_C_DESCRIPTORS == False:
+  DESCRIPTOR._options = None
+  _globals['_GENRE']._serialized_start=236
+  _globals['_GENRE']._serialized_end=282
+  _globals['_SINGERINFO']._serialized_start=41
+  _globals['_SINGERINFO']._serialized_end=234
+# @@protoc_insertion_point(module_scope)
diff --git a/scripts/decrypt-secrets.sh b/scripts/decrypt-secrets.sh
new file mode 100755
index 0000000000..120b0ddc43
--- /dev/null
+++ b/scripts/decrypt-secrets.sh
@@ -0,0 +1,46 @@
+#!/bin/bash
+
+# Copyright 2024 Google LLC All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
+ROOT=$( dirname "$DIR" )
+
+# Work from the project root.
+cd "$ROOT"
+
+# Prevent this script from overwriting existing files.
+# We recommend that sample authors use their own service account files and cloud project.
+# In that case, they should prepare these files themselves.
+if [[ -f "testing/test-env.sh" ]] || \
+   [[ -f "testing/service-account.json" ]] || \
+   [[ -f "testing/client-secrets.json" ]]; then
+    echo "One or more target files exist, aborting."
+    exit 1
+fi
+
+# Use SECRET_MANAGER_PROJECT if set, falling back to cloud-devrel-kokoro-resources.
+PROJECT_ID="${SECRET_MANAGER_PROJECT:-cloud-devrel-kokoro-resources}"
+
+gcloud secrets versions access latest --secret="python-docs-samples-test-env" \
+    --project="${PROJECT_ID}" \
+    > testing/test-env.sh
+gcloud secrets versions access latest \
+    --secret="python-docs-samples-service-account" \
+    --project="${PROJECT_ID}" \
+    > testing/service-account.json
+gcloud secrets versions access latest \
+    --secret="python-docs-samples-client-secrets" \
+    --project="${PROJECT_ID}" \
+    > testing/client-secrets.json
diff --git a/scripts/fixup_spanner_admin_database_v1_keywords.py b/scripts/fixup_spanner_admin_database_v1_keywords.py
new file mode 100644
index 0000000000..d642e9a0e3
--- /dev/null
+++ b/scripts/fixup_spanner_admin_database_v1_keywords.py
@@ -0,0 +1,202 @@
+#! /usr/bin/env python3
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
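+#
+# A minimal sketch of the rewrite this script performs (variable names below
+# are illustrative, not part of the generated tool): a "flattened" positional
+# call such as
+#
+#     client.create_database(parent, create_statement, retry=retry)
+#
+# is rewritten into the request-object form, with the control parameters
+# (retry, timeout, metadata) kept as keyword arguments:
+#
+#     client.create_database(
+#         request={'parent': parent, 'create_statement': create_statement},
+#         retry=retry,
+#     )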
+# +import argparse +import os +import libcst as cst +import pathlib +import sys +from typing import (Any, Callable, Dict, List, Sequence, Tuple) + + +def partition( + predicate: Callable[[Any], bool], + iterator: Sequence[Any] +) -> Tuple[List[Any], List[Any]]: + """A stable, out-of-place partition.""" + results = ([], []) + + for i in iterator: + results[int(predicate(i))].append(i) + + # Returns trueList, falseList + return results[1], results[0] + + +class spanner_admin_databaseCallTransformer(cst.CSTTransformer): + CTRL_PARAMS: Tuple[str] = ('retry', 'timeout', 'metadata') + METHOD_TO_PARAMS: Dict[str, Tuple[str]] = { + 'add_split_points': ('database', 'split_points', 'initiator', ), + 'copy_backup': ('parent', 'backup_id', 'source_backup', 'expire_time', 'encryption_config', ), + 'create_backup': ('parent', 'backup_id', 'backup', 'encryption_config', ), + 'create_backup_schedule': ('parent', 'backup_schedule_id', 'backup_schedule', ), + 'create_database': ('parent', 'create_statement', 'extra_statements', 'encryption_config', 'database_dialect', 'proto_descriptors', ), + 'delete_backup': ('name', ), + 'delete_backup_schedule': ('name', ), + 'drop_database': ('database', ), + 'get_backup': ('name', ), + 'get_backup_schedule': ('name', ), + 'get_database': ('name', ), + 'get_database_ddl': ('database', ), + 'get_iam_policy': ('resource', 'options', ), + 'internal_update_graph_operation': ('database', 'operation_id', 'vm_identity_token', 'progress', 'status', ), + 'list_backup_operations': ('parent', 'filter', 'page_size', 'page_token', ), + 'list_backups': ('parent', 'filter', 'page_size', 'page_token', ), + 'list_backup_schedules': ('parent', 'page_size', 'page_token', ), + 'list_database_operations': ('parent', 'filter', 'page_size', 'page_token', ), + 'list_database_roles': ('parent', 'page_size', 'page_token', ), + 'list_databases': ('parent', 'page_size', 'page_token', ), + 'restore_database': ('parent', 'database_id', 'backup', 'encryption_config', ), + 'set_iam_policy': ('resource', 'policy', 'update_mask', ), + 'test_iam_permissions': ('resource', 'permissions', ), + 'update_backup': ('backup', 'update_mask', ), + 'update_backup_schedule': ('backup_schedule', 'update_mask', ), + 'update_database': ('database', 'update_mask', ), + 'update_database_ddl': ('database', 'statements', 'operation_id', 'proto_descriptors', 'throughput_mode', ), + } + + def leave_Call(self, original: cst.Call, updated: cst.Call) -> cst.CSTNode: + try: + key = original.func.attr.value + kword_params = self.METHOD_TO_PARAMS[key] + except (AttributeError, KeyError): + # Either not a method from the API or too convoluted to be sure. + return updated + + # If the existing code is valid, keyword args come after positional args. + # Therefore, all positional args must map to the first parameters. + args, kwargs = partition(lambda a: not bool(a.keyword), updated.args) + if any(k.keyword.value == "request" for k in kwargs): + # We've already fixed this file, don't fix it again. 
+        return updated
+
+        kwargs, ctrl_kwargs = partition(
+            lambda a: a.keyword.value not in self.CTRL_PARAMS,
+            kwargs
+        )
+
+        args, ctrl_args = args[:len(kword_params)], args[len(kword_params):]
+        ctrl_kwargs.extend(cst.Arg(value=a.value, keyword=cst.Name(value=ctrl))
+                           for a, ctrl in zip(ctrl_args, self.CTRL_PARAMS))
+
+        request_arg = cst.Arg(
+            value=cst.Dict([
+                cst.DictElement(
+                    cst.SimpleString("'{}'".format(name)),
+                    cst.Element(value=arg.value)
+                )
+                # Note: the args + kwargs looks silly, but keep in mind that
+                # the control parameters had to be stripped out, and that
+                # those could have been passed positionally or by keyword.
+                for name, arg in zip(kword_params, args + kwargs)]),
+            keyword=cst.Name("request")
+        )
+
+        return updated.with_changes(
+            args=[request_arg] + ctrl_kwargs
+        )
+
+
+def fix_files(
+    in_dir: pathlib.Path,
+    out_dir: pathlib.Path,
+    *,
+    transformer=spanner_admin_databaseCallTransformer(),
+):
+    """Duplicate the input dir to the output dir, fixing file method calls.
+
+    Preconditions:
+    * in_dir is a real directory
+    * out_dir is a real, empty directory
+    """
+    pyfile_gen = (
+        pathlib.Path(os.path.join(root, f))
+        for root, _, files in os.walk(in_dir)
+        for f in files if os.path.splitext(f)[1] == ".py"
+    )
+
+    for fpath in pyfile_gen:
+        with open(fpath, 'r') as f:
+            src = f.read()
+
+        # Parse the code and insert method call fixes.
+        tree = cst.parse_module(src)
+        updated = tree.visit(transformer)
+
+        # Create the path and directory structure for the new file.
+        updated_path = out_dir.joinpath(fpath.relative_to(in_dir))
+        updated_path.parent.mkdir(parents=True, exist_ok=True)
+
+        # Generate the updated source file at the corresponding path.
+        with open(updated_path, 'w') as f:
+            f.write(updated.code)
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(
+        description="""Fix up source that uses the spanner_admin_database client library.
+
+The existing sources are NOT overwritten but are copied to output_dir with changes made.
+
+Note: This tool makes a best-effort attempt to convert positional
+      parameters in client method calls to keyword-based parameters.
+      Cases where it WILL FAIL include:
+      A) * or ** expansion in a method call.
+      B) Calls via a function or method alias (including free function calls).
+      C) Indirect or dispatched calls (e.g. the method is looked up dynamically).
+
+      These all constitute false negatives. The tool can also produce false
+      positives when an API method shares a name with another method.
+""") + parser.add_argument( + '-d', + '--input-directory', + required=True, + dest='input_dir', + help='the input directory to walk for python files to fix up', + ) + parser.add_argument( + '-o', + '--output-directory', + required=True, + dest='output_dir', + help='the directory to output files fixed via un-flattening', + ) + args = parser.parse_args() + input_dir = pathlib.Path(args.input_dir) + output_dir = pathlib.Path(args.output_dir) + if not input_dir.is_dir(): + print( + f"input directory '{input_dir}' does not exist or is not a directory", + file=sys.stderr, + ) + sys.exit(-1) + + if not output_dir.is_dir(): + print( + f"output directory '{output_dir}' does not exist or is not a directory", + file=sys.stderr, + ) + sys.exit(-1) + + if os.listdir(output_dir): + print( + f"output directory '{output_dir}' is not empty", + file=sys.stderr, + ) + sys.exit(-1) + + fix_files(input_dir, output_dir) diff --git a/scripts/fixup_spanner_admin_instance_v1_keywords.py b/scripts/fixup_spanner_admin_instance_v1_keywords.py new file mode 100644 index 0000000000..8200af5099 --- /dev/null +++ b/scripts/fixup_spanner_admin_instance_v1_keywords.py @@ -0,0 +1,196 @@ +#! /usr/bin/env python3 +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+#
+import argparse
+import os
+import libcst as cst
+import pathlib
+import sys
+from typing import (Any, Callable, Dict, List, Sequence, Tuple)
+
+
+def partition(
+    predicate: Callable[[Any], bool],
+    iterator: Sequence[Any]
+) -> Tuple[List[Any], List[Any]]:
+    """A stable, out-of-place partition."""
+    results = ([], [])
+
+    for i in iterator:
+        results[int(predicate(i))].append(i)
+
+    # Returns trueList, falseList
+    return results[1], results[0]
+
+
+class spanner_admin_instanceCallTransformer(cst.CSTTransformer):
+    CTRL_PARAMS: Tuple[str] = ('retry', 'timeout', 'metadata')
+    METHOD_TO_PARAMS: Dict[str, Tuple[str]] = {
+        'create_instance': ('parent', 'instance_id', 'instance', ),
+        'create_instance_config': ('parent', 'instance_config_id', 'instance_config', 'validate_only', ),
+        'create_instance_partition': ('parent', 'instance_partition_id', 'instance_partition', ),
+        'delete_instance': ('name', ),
+        'delete_instance_config': ('name', 'etag', 'validate_only', ),
+        'delete_instance_partition': ('name', 'etag', ),
+        'get_iam_policy': ('resource', 'options', ),
+        'get_instance': ('name', 'field_mask', ),
+        'get_instance_config': ('name', ),
+        'get_instance_partition': ('name', ),
+        'list_instance_config_operations': ('parent', 'filter', 'page_size', 'page_token', ),
+        'list_instance_configs': ('parent', 'page_size', 'page_token', ),
+        'list_instance_partition_operations': ('parent', 'filter', 'page_size', 'page_token', 'instance_partition_deadline', ),
+        'list_instance_partitions': ('parent', 'page_size', 'page_token', 'instance_partition_deadline', ),
+        'list_instances': ('parent', 'page_size', 'page_token', 'filter', 'instance_deadline', ),
+        'move_instance': ('name', 'target_config', ),
+        'set_iam_policy': ('resource', 'policy', 'update_mask', ),
+        'test_iam_permissions': ('resource', 'permissions', ),
+        'update_instance': ('instance', 'field_mask', ),
+        'update_instance_config': ('instance_config', 'update_mask', 'validate_only', ),
+        'update_instance_partition': ('instance_partition', 'field_mask', ),
+    }
+
+    def leave_Call(self, original: cst.Call, updated: cst.Call) -> cst.CSTNode:
+        try:
+            key = original.func.attr.value
+            kword_params = self.METHOD_TO_PARAMS[key]
+        except (AttributeError, KeyError):
+            # Either not a method from the API or too convoluted to be sure.
+            return updated
+
+        # If the existing code is valid, keyword args come after positional args.
+        # Therefore, all positional args must map to the first parameters.
+        args, kwargs = partition(lambda a: not bool(a.keyword), updated.args)
+        if any(k.keyword.value == "request" for k in kwargs):
+            # We've already fixed this file, don't fix it again.
+            return updated
+
+        kwargs, ctrl_kwargs = partition(
+            lambda a: a.keyword.value not in self.CTRL_PARAMS,
+            kwargs
+        )
+
+        args, ctrl_args = args[:len(kword_params)], args[len(kword_params):]
+        ctrl_kwargs.extend(cst.Arg(value=a.value, keyword=cst.Name(value=ctrl))
+                           for a, ctrl in zip(ctrl_args, self.CTRL_PARAMS))
+
+        request_arg = cst.Arg(
+            value=cst.Dict([
+                cst.DictElement(
+                    cst.SimpleString("'{}'".format(name)),
+                    cst.Element(value=arg.value)
+                )
+                # Note: the args + kwargs looks silly, but keep in mind that
+                # the control parameters had to be stripped out, and that
+                # those could have been passed positionally or by keyword.
+                for name, arg in zip(kword_params, args + kwargs)]),
+            keyword=cst.Name("request")
+        )
+
+        return updated.with_changes(
+            args=[request_arg] + ctrl_kwargs
+        )
+
+
+def fix_files(
+    in_dir: pathlib.Path,
+    out_dir: pathlib.Path,
+    *,
+    transformer=spanner_admin_instanceCallTransformer(),
+):
+    """Duplicate the input dir to the output dir, fixing file method calls.
+
+    Preconditions:
+    * in_dir is a real directory
+    * out_dir is a real, empty directory
+    """
+    pyfile_gen = (
+        pathlib.Path(os.path.join(root, f))
+        for root, _, files in os.walk(in_dir)
+        for f in files if os.path.splitext(f)[1] == ".py"
+    )
+
+    for fpath in pyfile_gen:
+        with open(fpath, 'r') as f:
+            src = f.read()
+
+        # Parse the code and insert method call fixes.
+        tree = cst.parse_module(src)
+        updated = tree.visit(transformer)
+
+        # Create the path and directory structure for the new file.
+        updated_path = out_dir.joinpath(fpath.relative_to(in_dir))
+        updated_path.parent.mkdir(parents=True, exist_ok=True)
+
+        # Generate the updated source file at the corresponding path.
+        with open(updated_path, 'w') as f:
+            f.write(updated.code)
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(
+        description="""Fix up source that uses the spanner_admin_instance client library.
+
+The existing sources are NOT overwritten but are copied to output_dir with changes made.
+
+Note: This tool makes a best-effort attempt to convert positional
+      parameters in client method calls to keyword-based parameters.
+      Cases where it WILL FAIL include:
+      A) * or ** expansion in a method call.
+      B) Calls via a function or method alias (including free function calls).
+      C) Indirect or dispatched calls (e.g. the method is looked up dynamically).
+
+      These all constitute false negatives. The tool can also produce false
+      positives when an API method shares a name with another method.
+""")
+    parser.add_argument(
+        '-d',
+        '--input-directory',
+        required=True,
+        dest='input_dir',
+        help='the input directory to walk for python files to fix up',
+    )
+    parser.add_argument(
+        '-o',
+        '--output-directory',
+        required=True,
+        dest='output_dir',
+        help='the directory to output files fixed via un-flattening',
+    )
+    args = parser.parse_args()
+    input_dir = pathlib.Path(args.input_dir)
+    output_dir = pathlib.Path(args.output_dir)
+    if not input_dir.is_dir():
+        print(
+            f"input directory '{input_dir}' does not exist or is not a directory",
+            file=sys.stderr,
+        )
+        sys.exit(-1)
+
+    if not output_dir.is_dir():
+        print(
+            f"output directory '{output_dir}' does not exist or is not a directory",
+            file=sys.stderr,
+        )
+        sys.exit(-1)
+
+    if os.listdir(output_dir):
+        print(
+            f"output directory '{output_dir}' is not empty",
+            file=sys.stderr,
+        )
+        sys.exit(-1)
+
+    fix_files(input_dir, output_dir)
diff --git a/scripts/fixup_spanner_v1_keywords.py b/scripts/fixup_spanner_v1_keywords.py
new file mode 100644
index 0000000000..c7f41be11e
--- /dev/null
+++ b/scripts/fixup_spanner_v1_keywords.py
@@ -0,0 +1,191 @@
+#! /usr/bin/env python3
+# -*- coding: utf-8 -*-
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and +# limitations under the License. +# +import argparse +import os +import libcst as cst +import pathlib +import sys +from typing import (Any, Callable, Dict, List, Sequence, Tuple) + + +def partition( + predicate: Callable[[Any], bool], + iterator: Sequence[Any] +) -> Tuple[List[Any], List[Any]]: + """A stable, out-of-place partition.""" + results = ([], []) + + for i in iterator: + results[int(predicate(i))].append(i) + + # Returns trueList, falseList + return results[1], results[0] + + +class spannerCallTransformer(cst.CSTTransformer): + CTRL_PARAMS: Tuple[str] = ('retry', 'timeout', 'metadata') + METHOD_TO_PARAMS: Dict[str, Tuple[str]] = { + 'batch_create_sessions': ('database', 'session_count', 'session_template', ), + 'batch_write': ('session', 'mutation_groups', 'request_options', 'exclude_txn_from_change_streams', ), + 'begin_transaction': ('session', 'options', 'request_options', 'mutation_key', ), + 'commit': ('session', 'transaction_id', 'single_use_transaction', 'mutations', 'return_commit_stats', 'max_commit_delay', 'request_options', 'precommit_token', ), + 'create_session': ('database', 'session', ), + 'delete_session': ('name', ), + 'execute_batch_dml': ('session', 'transaction', 'statements', 'seqno', 'request_options', 'last_statements', ), + 'execute_sql': ('session', 'sql', 'transaction', 'params', 'param_types', 'resume_token', 'query_mode', 'partition_token', 'seqno', 'query_options', 'request_options', 'directed_read_options', 'data_boost_enabled', 'last_statement', ), + 'execute_streaming_sql': ('session', 'sql', 'transaction', 'params', 'param_types', 'resume_token', 'query_mode', 'partition_token', 'seqno', 'query_options', 'request_options', 'directed_read_options', 'data_boost_enabled', 'last_statement', ), + 'get_session': ('name', ), + 'list_sessions': ('database', 'page_size', 'page_token', 'filter', ), + 'partition_query': ('session', 'sql', 'transaction', 'params', 'param_types', 'partition_options', ), + 'partition_read': ('session', 'table', 'key_set', 'transaction', 'index', 'columns', 'partition_options', ), + 'read': ('session', 'table', 'columns', 'key_set', 'transaction', 'index', 'limit', 'resume_token', 'partition_token', 'request_options', 'directed_read_options', 'data_boost_enabled', 'order_by', 'lock_hint', ), + 'rollback': ('session', 'transaction_id', ), + 'streaming_read': ('session', 'table', 'columns', 'key_set', 'transaction', 'index', 'limit', 'resume_token', 'partition_token', 'request_options', 'directed_read_options', 'data_boost_enabled', 'order_by', 'lock_hint', ), + } + + def leave_Call(self, original: cst.Call, updated: cst.Call) -> cst.CSTNode: + try: + key = original.func.attr.value + kword_params = self.METHOD_TO_PARAMS[key] + except (AttributeError, KeyError): + # Either not a method from the API or too convoluted to be sure. + return updated + + # If the existing code is valid, keyword args come after positional args. + # Therefore, all positional args must map to the first parameters. + args, kwargs = partition(lambda a: not bool(a.keyword), updated.args) + if any(k.keyword.value == "request" for k in kwargs): + # We've already fixed this file, don't fix it again. 
+        return updated
+
+        kwargs, ctrl_kwargs = partition(
+            lambda a: a.keyword.value not in self.CTRL_PARAMS,
+            kwargs
+        )
+
+        args, ctrl_args = args[:len(kword_params)], args[len(kword_params):]
+        ctrl_kwargs.extend(cst.Arg(value=a.value, keyword=cst.Name(value=ctrl))
+                           for a, ctrl in zip(ctrl_args, self.CTRL_PARAMS))
+
+        request_arg = cst.Arg(
+            value=cst.Dict([
+                cst.DictElement(
+                    cst.SimpleString("'{}'".format(name)),
+                    cst.Element(value=arg.value)
+                )
+                # Note: the args + kwargs looks silly, but keep in mind that
+                # the control parameters had to be stripped out, and that
+                # those could have been passed positionally or by keyword.
+                for name, arg in zip(kword_params, args + kwargs)]),
+            keyword=cst.Name("request")
+        )
+
+        return updated.with_changes(
+            args=[request_arg] + ctrl_kwargs
+        )
+
+
+def fix_files(
+    in_dir: pathlib.Path,
+    out_dir: pathlib.Path,
+    *,
+    transformer=spannerCallTransformer(),
+):
+    """Duplicate the input dir to the output dir, fixing file method calls.
+
+    Preconditions:
+    * in_dir is a real directory
+    * out_dir is a real, empty directory
+    """
+    pyfile_gen = (
+        pathlib.Path(os.path.join(root, f))
+        for root, _, files in os.walk(in_dir)
+        for f in files if os.path.splitext(f)[1] == ".py"
+    )
+
+    for fpath in pyfile_gen:
+        with open(fpath, 'r') as f:
+            src = f.read()
+
+        # Parse the code and insert method call fixes.
+        tree = cst.parse_module(src)
+        updated = tree.visit(transformer)
+
+        # Create the path and directory structure for the new file.
+        updated_path = out_dir.joinpath(fpath.relative_to(in_dir))
+        updated_path.parent.mkdir(parents=True, exist_ok=True)
+
+        # Generate the updated source file at the corresponding path.
+        with open(updated_path, 'w') as f:
+            f.write(updated.code)
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(
+        description="""Fix up source that uses the spanner client library.
+
+The existing sources are NOT overwritten but are copied to output_dir with changes made.
+
+Note: This tool makes a best-effort attempt to convert positional
+      parameters in client method calls to keyword-based parameters.
+      Cases where it WILL FAIL include:
+      A) * or ** expansion in a method call.
+      B) Calls via a function or method alias (including free function calls).
+      C) Indirect or dispatched calls (e.g. the method is looked up dynamically).
+
+      These all constitute false negatives. The tool can also produce false
+      positives when an API method shares a name with another method.
+""") + parser.add_argument( + '-d', + '--input-directory', + required=True, + dest='input_dir', + help='the input directory to walk for python files to fix up', + ) + parser.add_argument( + '-o', + '--output-directory', + required=True, + dest='output_dir', + help='the directory to output files fixed via un-flattening', + ) + args = parser.parse_args() + input_dir = pathlib.Path(args.input_dir) + output_dir = pathlib.Path(args.output_dir) + if not input_dir.is_dir(): + print( + f"input directory '{input_dir}' does not exist or is not a directory", + file=sys.stderr, + ) + sys.exit(-1) + + if not output_dir.is_dir(): + print( + f"output directory '{output_dir}' does not exist or is not a directory", + file=sys.stderr, + ) + sys.exit(-1) + + if os.listdir(output_dir): + print( + f"output directory '{output_dir}' is not empty", + file=sys.stderr, + ) + sys.exit(-1) + + fix_files(input_dir, output_dir) diff --git a/scripts/readme-gen/readme_gen.py b/scripts/readme-gen/readme_gen.py new file mode 100644 index 0000000000..8f5e248a0d --- /dev/null +++ b/scripts/readme-gen/readme_gen.py @@ -0,0 +1,69 @@ +#!/usr/bin/env python + +# Copyright 2024 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Generates READMEs using configuration defined in yaml.""" + +import argparse +import io +import os +import subprocess + +import jinja2 +import yaml + + +jinja_env = jinja2.Environment( + trim_blocks=True, + loader=jinja2.FileSystemLoader( + os.path.abspath(os.path.join(os.path.dirname(__file__), "templates")) + ), + autoescape=True, +) + +README_TMPL = jinja_env.get_template("README.tmpl.rst") + + +def get_help(file): + return subprocess.check_output(["python", file, "--help"]).decode() + + +def main(): + parser = argparse.ArgumentParser() + parser.add_argument("source") + parser.add_argument("--destination", default="README.rst") + + args = parser.parse_args() + + source = os.path.abspath(args.source) + root = os.path.dirname(source) + destination = os.path.join(root, args.destination) + + jinja_env.globals["get_help"] = get_help + + with io.open(source, "r") as f: + config = yaml.load(f) + + # This allows get_help to execute in the right directory. + os.chdir(root) + + output = README_TMPL.render(config) + + with io.open(destination, "w") as f: + f.write(output) + + +if __name__ == "__main__": + main() diff --git a/scripts/readme-gen/templates/README.tmpl.rst b/scripts/readme-gen/templates/README.tmpl.rst new file mode 100644 index 0000000000..4fd239765b --- /dev/null +++ b/scripts/readme-gen/templates/README.tmpl.rst @@ -0,0 +1,87 @@ +{# The following line is a lie. BUT! Once jinja2 is done with it, it will + become truth! #} +.. This file is automatically generated. Do not edit this file directly. + +{{product.name}} Python Samples +=============================================================================== + +.. 
diff --git a/scripts/readme-gen/templates/README.tmpl.rst b/scripts/readme-gen/templates/README.tmpl.rst
new file mode 100644
index 0000000000..4fd239765b
--- /dev/null
+++ b/scripts/readme-gen/templates/README.tmpl.rst
@@ -0,0 +1,87 @@
+{# The following line is a lie. BUT! Once jinja2 is done with it, it will
+   become truth! #}
+.. This file is automatically generated. Do not edit this file directly.
+
+{{product.name}} Python Samples
+===============================================================================
+
+.. image:: https://gstatic.com/cloudssh/images/open-btn.png
+   :target: https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/GoogleCloudPlatform/python-docs-samples&page=editor&open_in_editor={{folder}}/README.rst
+
+
+This directory contains samples for {{product.name}}. {{product.description}}
+
+{{description}}
+
+.. _{{product.name}}: {{product.url}}
+
+{% if required_api_url %}
+To run the sample, you need to enable the API at: {{required_api_url}}
+{% endif %}
+
+{% if required_role %}
+To run the sample, you need to have the `{{required_role}}` role.
+{% endif %}
+
+{{other_required_steps}}
+
+{% if setup %}
+Setup
+-------------------------------------------------------------------------------
+
+{% for section in setup %}
+
+{% include section + '.tmpl.rst' %}
+
+{% endfor %}
+{% endif %}
+
+{% if samples %}
+Samples
+-------------------------------------------------------------------------------
+
+{% for sample in samples %}
+{{sample.name}}
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+{% if not sample.hide_cloudshell_button %}
+.. image:: https://gstatic.com/cloudssh/images/open-btn.png
+   :target: https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/GoogleCloudPlatform/python-docs-samples&page=editor&open_in_editor={{folder}}/{{sample.file}},{{folder}}/README.rst
+{% endif %}
+
+
+{{sample.description}}
+
+To run this sample:
+
+.. code-block:: bash
+
+    $ python {{sample.file}}
+{% if sample.show_help %}
+
+    {{get_help(sample.file)|indent}}
+{% endif %}
+
+
+{% endfor %}
+{% endif %}
+
+{% if cloud_client_library %}
+
+The client library
+-------------------------------------------------------------------------------
+
+This sample uses the `Google Cloud Client Library for Python`_.
+You can read the documentation for more details on API usage and use GitHub
+to `browse the source`_ and `report issues`_.
+
+.. _Google Cloud Client Library for Python:
+   https://googlecloudplatform.github.io/google-cloud-python/
+.. _browse the source:
+   https://github.com/GoogleCloudPlatform/google-cloud-python
+.. _report issues:
+   https://github.com/GoogleCloudPlatform/google-cloud-python/issues
+
+{% endif %}
+
+.. _Google Cloud SDK: https://cloud.google.com/sdk/
\ No newline at end of file
diff --git a/scripts/readme-gen/templates/auth.tmpl.rst b/scripts/readme-gen/templates/auth.tmpl.rst
new file mode 100644
index 0000000000..1446b94a5e
--- /dev/null
+++ b/scripts/readme-gen/templates/auth.tmpl.rst
@@ -0,0 +1,9 @@
+Authentication
+++++++++++++++
+
+This sample requires you to have authentication set up. Refer to the
+`Authentication Getting Started Guide`_ for instructions on setting up
+credentials for applications.
+
+.. _Authentication Getting Started Guide:
+    https://cloud.google.com/docs/authentication/getting-started
diff --git a/scripts/readme-gen/templates/auth_api_key.tmpl.rst b/scripts/readme-gen/templates/auth_api_key.tmpl.rst
new file mode 100644
index 0000000000..11957ce271
--- /dev/null
+++ b/scripts/readme-gen/templates/auth_api_key.tmpl.rst
@@ -0,0 +1,14 @@
+Authentication
+++++++++++++++
+
+Authentication for this service is done via an `API Key`_. To obtain an API
+Key:
+
+1. Open the `Cloud Console`_
+2. Make sure that billing is enabled for your project.
+3. From the **Credentials** page, create a new **API Key** or use an existing
+   one for your project.
+
+.. _API Key:
+    https://developers.google.com/api-client-library/python/guide/aaa_apikeys
+.. 
_Cloud Console: https://console.cloud.google.com/project?_ diff --git a/scripts/readme-gen/templates/install_deps.tmpl.rst b/scripts/readme-gen/templates/install_deps.tmpl.rst new file mode 100644 index 0000000000..6f069c6c87 --- /dev/null +++ b/scripts/readme-gen/templates/install_deps.tmpl.rst @@ -0,0 +1,29 @@ +Install Dependencies +++++++++++++++++++++ + +#. Clone python-docs-samples and change directory to the sample directory you want to use. + + .. code-block:: bash + + $ git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git + +#. Install `pip`_ and `virtualenv`_ if you do not already have them. You may want to refer to the `Python Development Environment Setup Guide`_ for Google Cloud Platform for instructions. + + .. _Python Development Environment Setup Guide: + https://cloud.google.com/python/setup + +#. Create a virtualenv. Samples are compatible with Python 3.7+. + + .. code-block:: bash + + $ virtualenv env + $ source env/bin/activate + +#. Install the dependencies needed to run the samples. + + .. code-block:: bash + + $ pip install -r requirements.txt + +.. _pip: https://pip.pypa.io/ +.. _virtualenv: https://virtualenv.pypa.io/ diff --git a/scripts/readme-gen/templates/install_portaudio.tmpl.rst b/scripts/readme-gen/templates/install_portaudio.tmpl.rst new file mode 100644 index 0000000000..5ea33d18c0 --- /dev/null +++ b/scripts/readme-gen/templates/install_portaudio.tmpl.rst @@ -0,0 +1,35 @@ +Install PortAudio ++++++++++++++++++ + +Install `PortAudio`_. This is required by the `PyAudio`_ library to stream +audio from your computer's microphone. PyAudio depends on PortAudio for cross-platform compatibility, and is installed differently depending on the +platform. + +* For Mac OS X, you can use `Homebrew`_:: + + brew install portaudio + + **Note**: if you encounter an error when running `pip install` that indicates + it can't find `portaudio.h`, try running `pip install` with the following + flags:: + + pip install --global-option='build_ext' \ + --global-option='-I/usr/local/include' \ + --global-option='-L/usr/local/lib' \ + pyaudio + +* For Debian / Ubuntu Linux:: + + apt-get install portaudio19-dev python-all-dev + +* Windows may work without having to install PortAudio explicitly (it will get + installed with PyAudio). + +For more details, see the `PyAudio installation`_ page. + + +.. _PyAudio: https://people.csail.mit.edu/hubert/pyaudio/ +.. _PortAudio: http://www.portaudio.com/ +.. _PyAudio installation: + https://people.csail.mit.edu/hubert/pyaudio/#downloads +.. _Homebrew: http://brew.sh diff --git a/setup.cfg b/setup.cfg deleted file mode 100644 index 3bd555500e..0000000000 --- a/setup.cfg +++ /dev/null @@ -1,3 +0,0 @@ -# Generated by synthtool. DO NOT EDIT! -[bdist_wheel] -universal = 1 diff --git a/setup.py b/setup.py index 911d9c82a1..858982f783 100644 --- a/setup.py +++ b/setup.py @@ -1,4 +1,5 @@ -# Copyright 2018 Google LLC +# -*- coding: utf-8 -*- +# Copyright 2022 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -11,32 +12,52 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. - +# import io import os -import setuptools +import setuptools # type: ignore - -# Package metadata. 
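+# Note: the package version is single-sourced from the generated
+# gapic_version.py file, which is exec'd below so that __version__ can be
+# read out of the resulting namespace.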
+package_root = os.path.abspath(os.path.dirname(__file__)) name = "google-cloud-spanner" -description = "Cloud Spanner API client library" -version = "1.15.1" -# Should be one of: -# 'Development Status :: 3 - Alpha' -# 'Development Status :: 4 - Beta' -# 'Development Status :: 5 - Production/Stable' -release_status = "Development Status :: 5 - Production/Stable" + + +description = "Google Cloud Spanner API client library" + +version = {} +with open(os.path.join(package_root, "google/cloud/spanner_v1/gapic_version.py")) as fp: + exec(fp.read(), version) +version = version["__version__"] + +if version[0] == "0": + release_status = "Development Status :: 4 - Beta" +else: + release_status = "Development Status :: 5 - Production/Stable" + dependencies = [ - "google-api-core[grpc, grpcgcp] >= 1.14.0, < 2.0.0dev", - "google-cloud-core >= 1.0.3, < 2.0dev", - "grpc-google-iam-v1 >= 0.12.3, < 0.13dev", + "google-api-core[grpc] >= 1.34.0, <3.0.0,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,!=2.8.*,!=2.9.*,!=2.10.*", + "google-cloud-core >= 1.4.4, < 3.0.0", + "grpc-google-iam-v1 >= 0.12.4, <1.0.0", + "proto-plus >= 1.22.0, <2.0.0", + "sqlparse >= 0.4.4", + "proto-plus >= 1.22.2, <2.0.0; python_version>='3.11'", + "protobuf>=3.20.2,<7.0.0,!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5", + "grpc-interceptor >= 0.15.4", ] -extras = {} - +extras = { + "tracing": [ + "opentelemetry-api >= 1.22.0", + "opentelemetry-sdk >= 1.22.0", + "opentelemetry-semantic-conventions >= 0.43b0", + "opentelemetry-resourcedetector-gcp >= 1.8.0a0", + "google-cloud-monitoring >= 2.16.0", + "mmh3 >= 4.1.0 ", + ], + "libcst": "libcst >= 0.2.5", +} -# Setup boilerplate below this line. +url = "https://github.com/googleapis/python-spanner" package_root = os.path.abspath(os.path.dirname(__file__)) @@ -44,18 +65,12 @@ with io.open(readme_filename, encoding="utf-8") as readme_file: readme = readme_file.read() -# Only include packages under the 'google' namespace. Do not include tests, -# benchmarks, etc. packages = [ - package for package in setuptools.find_packages() if package.startswith("google") + package + for package in setuptools.find_namespace_packages() + if package.startswith("google") ] -# Determine which namespaces are needed. 
-namespaces = ["google"] -if "google.cloud" in packages: - namespaces.append("google.cloud") - - setuptools.setup( name=name, version=version, @@ -64,27 +79,25 @@ author="Google LLC", author_email="googleapis-packages@google.com", license="Apache 2.0", - url="https://github.com/googleapis/python-spanner", + url=url, classifiers=[ release_status, "Intended Audience :: Developers", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python", - "Programming Language :: Python :: 2", - "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3", - "Programming Language :: Python :: 3.5", - "Programming Language :: Python :: 3.6", - "Programming Language :: Python :: 3.7", + "Programming Language :: Python :: 3.9", + "Programming Language :: Python :: 3.10", + "Programming Language :: Python :: 3.11", + "Programming Language :: Python :: 3.12", "Operating System :: OS Independent", "Topic :: Internet", ], platforms="Posix; MacOS X; Windows", packages=packages, - namespace_packages=namespaces, install_requires=dependencies, extras_require=extras, - python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*", + python_requires=">=3.9", include_package_data=True, zip_safe=False, ) diff --git a/google/cloud/spanner_v1/gapic/__init__.py b/stale_outputs_checked similarity index 100% rename from google/cloud/spanner_v1/gapic/__init__.py rename to stale_outputs_checked diff --git a/synth.metadata b/synth.metadata deleted file mode 100644 index bb226f324a..0000000000 --- a/synth.metadata +++ /dev/null @@ -1,60 +0,0 @@ -{ - "updateTime": "2020-03-24T12:17:04.474073Z", - "sources": [ - { - "generator": { - "name": "artman", - "version": "1.1.1", - "dockerImage": "googleapis/artman@sha256:5ef340c8d9334719bc5c6981d95f4a5d2737b0a6a24f2b9a0d430e96fff85c5b" - } - }, - { - "git": { - "name": "googleapis", - "remote": "https://github.com/googleapis/googleapis.git", - "sha": "36c0febd0fa7267ab66d14408eec2afd1b6bec4e", - "internalRef": "302639621", - "log": "36c0febd0fa7267ab66d14408eec2afd1b6bec4e\nUpdate GAPIC configurations to v2 .yaml.\n\nPiperOrigin-RevId: 302639621\n\n078f222366ed344509a48f2f084944ef61476613\nFix containeranalysis v1beta1 assembly target name\n\nPiperOrigin-RevId: 302529186\n\n0be7105dc52590fa9a24e784052298ae37ce53aa\nAdd BUILD.bazel file to asset/v1p1beta1\n\nPiperOrigin-RevId: 302154871\n\n6c248fd13e8543f8d22cbf118d978301a9fbe2a8\nAdd missing resource annotations and additional_bindings to dialogflow v2 API.\n\nPiperOrigin-RevId: 302063117\n\n9a3a7f33be9eeacf7b3e98435816b7022d206bd7\nChange the service name from \"chromeos-moblab.googleapis.com\" to \"chromeosmoblab.googleapis.com\"\n\nPiperOrigin-RevId: 302060989\n\n98a339237577e3de26cb4921f75fb5c57cc7a19f\nfeat: devtools/build/v1 publish client library config annotations\n\n* add details field to some of the BuildEvents\n* add final_invocation_id and build_tool_exit_code fields to BuildStatus\n\nPiperOrigin-RevId: 302044087\n\ncfabc98c6bbbb22d1aeaf7612179c0be193b3a13\nfeat: home/graph/v1 publish client library config annotations & comment updates\n\nThis change includes adding the client library configuration annotations, updated proto comments, and some client library configuration files.\n\nPiperOrigin-RevId: 302042647\n\nc8c8c0bd15d082db9546253dbaad1087c7a9782c\nchore: use latest gapic-generator in bazel WORKSPACE.\nincluding the following commits from gapic-generator:\n- feat: take source protos in all sub-packages (#3144)\n\nPiperOrigin-RevId: 
301843591\n\ne4daf5202ea31cb2cb6916fdbfa9d6bd771aeb4c\nAdd bazel file for v1 client lib generation\n\nPiperOrigin-RevId: 301802926\n\n275fbcce2c900278d487c33293a3c7e1fbcd3a34\nfeat: pubsub/v1 add an experimental filter field to Subscription\n\nPiperOrigin-RevId: 301661567\n\nf2b18cec51d27c999ad30011dba17f3965677e9c\nFix: UpdateBackupRequest.backup is a resource, not a resource reference - remove annotation.\n\nPiperOrigin-RevId: 301636171\n\n800384063ac93a0cac3a510d41726fa4b2cd4a83\nCloud Billing Budget API v1beta1\nModified api documentation to include warnings about the new filter field.\n\nPiperOrigin-RevId: 301634389\n\n0cc6c146b660db21f04056c3d58a4b752ee445e3\nCloud Billing Budget API v1alpha1\nModified api documentation to include warnings about the new filter field.\n\nPiperOrigin-RevId: 301630018\n\nff2ea00f69065585c3ac0993c8b582af3b6fc215\nFix: Add resource definition for a parent of InspectTemplate which was otherwise missing.\n\nPiperOrigin-RevId: 301623052\n\n55fa441c9daf03173910760191646399338f2b7c\nAdd proto definition for AccessLevel, AccessPolicy, and ServicePerimeter.\n\nPiperOrigin-RevId: 301620844\n\ne7b10591c5408a67cf14ffafa267556f3290e262\nCloud Bigtable Managed Backup service and message proto files.\n\nPiperOrigin-RevId: 301585144\n\nd8e226f702f8ddf92915128c9f4693b63fb8685d\nfeat: Add time-to-live in a queue for builds\n\nPiperOrigin-RevId: 301579876\n\n430375af011f8c7a5174884f0d0e539c6ffa7675\ndocs: add missing closing backtick\n\nPiperOrigin-RevId: 301538851\n\n0e9f1f60ded9ad1c2e725e37719112f5b487ab65\nbazel: Use latest release of gax_java\n\nPiperOrigin-RevId: 301480457\n\n5058c1c96d0ece7f5301a154cf5a07b2ad03a571\nUpdate GAPIC v2 with batching parameters for Logging API\n\nPiperOrigin-RevId: 301443847\n\n64ab9744073de81fec1b3a6a931befc8a90edf90\nFix: Introduce location-based organization/folder/billing-account resources\nChore: Update copyright years\n\nPiperOrigin-RevId: 301373760\n\n23d5f09e670ebb0c1b36214acf78704e2ecfc2ac\nUpdate field_behavior annotations in V1 and V2.\n\nPiperOrigin-RevId: 301337970\n\nb2cf37e7fd62383a811aa4d54d013ecae638851d\nData Catalog V1 API\n\nPiperOrigin-RevId: 301282503\n\n1976b9981e2900c8172b7d34b4220bdb18c5db42\nCloud DLP api update. Adds missing fields to Finding and adds support for hybrid jobs.\n\nPiperOrigin-RevId: 301205325\n\nae78682c05e864d71223ce22532219813b0245ac\nfix: several sample code blocks in comments are now properly indented for markdown\n\nPiperOrigin-RevId: 301185150\n\ndcd171d04bda5b67db13049320f97eca3ace3731\nPublish Media Translation API V1Beta1\n\nPiperOrigin-RevId: 301180096\n\nff1713453b0fbc5a7544a1ef6828c26ad21a370e\nAdd protos and BUILD rules for v1 API.\n\nPiperOrigin-RevId: 301179394\n\n8386761d09819b665b6a6e1e6d6ff884bc8ff781\nfeat: chromeos/modlab publish protos and config for Chrome OS Moblab API.\n\nPiperOrigin-RevId: 300843960\n\nb2e2bc62fab90e6829e62d3d189906d9b79899e4\nUpdates to GCS gRPC API spec:\n\n1. Changed GetIamPolicy and TestBucketIamPermissions to use wrapper messages around google.iam.v1 IAM requests messages, and added CommonRequestParams. This lets us support RequesterPays buckets.\n2. 
Added a metadata field to GetObjectMediaResponse, to support resuming an object media read safely (by extracting the generation of the object being read, and using it in the resumed read request).\n\nPiperOrigin-RevId: 300817706\n\n7fd916ce12335cc9e784bb9452a8602d00b2516c\nAdd deprecated_collections field for backward-compatiblity in PHP and monolith-generated Python and Ruby clients.\n\nGenerate TopicName class in Java which covers the functionality of both ProjectTopicName and DeletedTopicName. Introduce breaking changes to be fixed by synth.py.\n\nDelete default retry parameters.\n\nRetry codes defs can be deleted once # https://github.com/googleapis/gapic-generator/issues/3137 is fixed.\n\nPiperOrigin-RevId: 300813135\n\n047d3a8ac7f75383855df0166144f891d7af08d9\nfix!: google/rpc refactor ErrorInfo.type to ErrorInfo.reason and comment updates.\n\nPiperOrigin-RevId: 300773211\n\nfae4bb6d5aac52aabe5f0bb4396466c2304ea6f6\nAdding RetryPolicy to pubsub.proto\n\nPiperOrigin-RevId: 300769420\n\n7d569be2928dbd72b4e261bf9e468f23afd2b950\nAdding additional protocol buffer annotations to v3.\n\nPiperOrigin-RevId: 300718800\n\n13942d1a85a337515040a03c5108993087dc0e4f\nAdd logging protos for Recommender v1.\n\nPiperOrigin-RevId: 300689896\n\na1a573c3eecfe2c404892bfa61a32dd0c9fb22b6\nfix: change go package to use cloud.google.com/go/maps\n\nPiperOrigin-RevId: 300661825\n\nc6fbac11afa0c7ab2972d9df181493875c566f77\nfeat: publish documentai/v1beta2 protos\n\nPiperOrigin-RevId: 300656808\n\n5202a9e0d9903f49e900f20fe5c7f4e42dd6588f\nProtos for v1beta1 release of Cloud Security Center Settings API\n\nPiperOrigin-RevId: 300580858\n\n83518e18655d9d4ac044acbda063cc6ecdb63ef8\nAdds gapic.yaml file and BUILD.bazel file.\n\nPiperOrigin-RevId: 300554200\n\n836c196dc8ef8354bbfb5f30696bd3477e8db5e2\nRegenerate recommender v1beta1 gRPC ServiceConfig file for Insights methods.\n\nPiperOrigin-RevId: 300549302\n\n" - } - }, - { - "git": { - "name": "synthtool", - "remote": "https://github.com/googleapis/synthtool.git", - "sha": "6a17abc7652e2fe563e1288c6e8c23fc260dda97" - } - } - ], - "destinations": [ - { - "client": { - "source": "googleapis", - "apiName": "spanner", - "apiVersion": "v1", - "language": "python", - "generator": "gapic", - "config": "google/spanner/artman_spanner.yaml" - } - }, - { - "client": { - "source": "googleapis", - "apiName": "spanner_admin_instance", - "apiVersion": "v1", - "language": "python", - "generator": "gapic", - "config": "google/spanner/admin/instance/artman_spanner_admin_instance.yaml" - } - }, - { - "client": { - "source": "googleapis", - "apiName": "spanner_admin_database", - "apiVersion": "v1", - "language": "python", - "generator": "gapic", - "config": "google/spanner/admin/database/artman_spanner_admin_database.yaml" - } - } - ] -} \ No newline at end of file diff --git a/synth.py b/synth.py deleted file mode 100644 index ed1794b090..0000000000 --- a/synth.py +++ /dev/null @@ -1,169 +0,0 @@ -# Copyright 2018 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""This script is used to synthesize generated parts of this library.""" -import synthtool as s -from synthtool import gcp - -gapic = gcp.GAPICGenerator() -common = gcp.CommonTemplates() - -# ---------------------------------------------------------------------------- -# Generate spanner GAPIC layer -# ---------------------------------------------------------------------------- -library = gapic.py_library( - "spanner", - "v1", - config_path="/google/spanner/artman_spanner.yaml", - artman_output_name="spanner-v1", - include_protos=True, -) - -s.move(library / "google/cloud/spanner_v1/proto") -s.move(library / "google/cloud/spanner_v1/gapic") -s.move(library / "tests") - -# Add grpcio-gcp options -s.replace( - "google/cloud/spanner_v1/gapic/transports/spanner_grpc_transport.py", - "import google.api_core.grpc_helpers\n", - "import pkg_resources\n" - "import grpc_gcp\n" - "\n" - "import google.api_core.grpc_helpers\n", -) -s.replace( - "google/cloud/spanner_v1/gapic/transports/spanner_grpc_transport.py", - "from google.cloud.spanner_v1.proto import spanner_pb2_grpc\n", - "\g<0>\n\n_GRPC_KEEPALIVE_MS = 2 * 60 * 1000\n" - "_SPANNER_GRPC_CONFIG = 'spanner.grpc.config'\n", -) - -s.replace( - "google/cloud/spanner_v1/gapic/transports/spanner_grpc_transport.py", - "(\s+)'grpc.max_receive_message_length': -1,", - "\g<0>\g<1>\"grpc.keepalive_time_ms\": _GRPC_KEEPALIVE_MS,", -) - -s.replace( - "google/cloud/spanner_v1/gapic/transports/spanner_grpc_transport.py", - "(\s+)return google.api_core.grpc_helpers.create_channel\(\n", - "\g<1>grpc_gcp_config = grpc_gcp.api_config_from_text_pb(" - "\g<1> pkg_resources.resource_string(__name__, _SPANNER_GRPC_CONFIG))" - "\g<1>options = [(grpc_gcp.API_CONFIG_CHANNEL_ARG, grpc_gcp_config)]" - "\g<1>if 'options' in kwargs:" - "\g<1> options.extend(kwargs['options'])" - "\g<1>kwargs['options'] = options" - "\g<0>", -) -s.replace( - "tests/unit/gapic/v1/test_spanner_client_v1.py", - "from google.cloud import spanner_v1", - "from google.cloud.spanner_v1.gapic import spanner_client as spanner_v1", -) - -# ---------------------------------------------------------------------------- -# Generate instance admin client -# ---------------------------------------------------------------------------- -library = gapic.py_library( - "spanner_admin_instance", - "v1", - config_path="/google/spanner/admin/instance" "/artman_spanner_admin_instance.yaml", - artman_output_name="spanner-admin-instance-v1", - include_protos=True, -) - -s.move(library / "google/cloud/spanner_admin_instance_v1/gapic") -s.move(library / "google/cloud/spanner_admin_instance_v1/proto") -s.move(library / "tests") - -# Fix up the _GAPIC_LIBRARY_VERSION targets -s.replace( - "google/cloud/spanner_admin_instance_v1/gapic/instance_admin_client.py", - "'google-cloud-spanner-admin-instance'", - "'google-cloud-spanner'", -) - -# Fix up generated imports -s.replace( - "google/**/*.py", - "from google\.cloud\.spanner\.admin\.instance_v1.proto", - "from google.cloud.spanner_admin_instance_v1.proto", -) - -# Fix docstrings -s.replace("google/cloud/spanner_v1/proto/transaction_pb2.py", r"""====*""", r"") -s.replace("google/cloud/spanner_v1/proto/transaction_pb2.py", r"""----*""", r"") -s.replace("google/cloud/spanner_v1/proto/transaction_pb2.py", r"""~~~~*""", r"") - -# ---------------------------------------------------------------------------- -# Generate database admin client -# ---------------------------------------------------------------------------- -library = gapic.py_library( - "spanner_admin_database", - 
"v1", - config_path="/google/spanner/admin/database" "/artman_spanner_admin_database.yaml", - artman_output_name="spanner-admin-database-v1", - include_protos=True, -) - -s.move(library / "google/cloud/spanner_admin_database_v1/gapic") -s.move(library / "google/cloud/spanner_admin_database_v1/proto") -s.move(library / "tests") - -# Fix up the _GAPIC_LIBRARY_VERSION targets -s.replace( - "google/cloud/spanner_admin_database_v1/gapic/database_admin_client.py", - "'google-cloud-spanner-admin-database'", - "'google-cloud-spanner'", -) - -# Fix up the _GAPIC_LIBRARY_VERSION targets -s.replace( - "google/**/*.py", - "from google\.cloud\.spanner\.admin\.database_v1.proto", - "from google.cloud.spanner_admin_database_v1.proto", -) - -# Fix up proto docs that are missing summary line. -s.replace( - "google/cloud/spanner_admin_instance_v1/proto/spanner_instance_admin_pb2.py", - '"""Attributes:', - '"""Protocol buffer.\n\n Attributes:', -) - -# Fix LRO return types -s.replace("google/cloud/spanner_admin_instance_v1/gapic/instance_admin_client.py", - "cloud.spanner_admin_instance_v1.types._OperationFuture", - "api_core.operation.Operation") -s.replace("google/cloud/spanner_admin_database_v1/gapic/database_admin_client.py", - "cloud.spanner_admin_database_v1.types._OperationFuture", - "api_core.operation.Operation") - -# ---------------------------------------------------------------------------- -# Add templated files -# ---------------------------------------------------------------------------- -templated_files = common.py_library(unit_cov_level=97, cov_level=99) -s.move(templated_files, excludes=["noxfile.py"]) - -# Template's MANIFEST.in does not include the needed GAPIC config file. -# See PR #6928. -s.replace( - "MANIFEST.in", - "include README.rst LICENSE\n", - "include README.rst LICENSE\n" - "include google/cloud/spanner_v1/gapic/transports/spanner.grpc.config\n", -) - -s.shell.run(["nox", "-s", "blacken"], hide_output=False) diff --git a/test.py b/test.py new file mode 100644 index 0000000000..6032524b04 --- /dev/null +++ b/test.py @@ -0,0 +1,11 @@ +from google.cloud import spanner +from gooogle.cloud.spanner_v1 import RequestOptions + +client = spanner.Client() +instance = client.instance('test-instance') +database = instance.database('test-db') + +with database.snapshot() as snapshot: + results = snapshot.execute_sql("SELECT * in all_types LIMIT %s", ) + +database.drop() \ No newline at end of file diff --git a/test_utils/credentials.json.enc b/test_utils/credentials.json.enc deleted file mode 100644 index f073c7e4f7..0000000000 --- a/test_utils/credentials.json.enc +++ /dev/null @@ -1,49 +0,0 @@ -U2FsdGVkX1/vVm/dOEg1DCACYbdOcL+ey6+64A+DZGZVgF8Z/3skK6rpPocu6GOA -UZAqASsBH9QifDf8cKVXQXVYpYq6HSv2O0w7vOmVorZO9GYPo98s9/8XO+4ty/AU -aB6TD68frBAYv4cT/l5m7aYdzfzMTy0EOXoleZT09JYP3B5FV3KCO114FzMXGwrj -HXsR6E5SyUUlUnWPC3eD3aqmovay0gxOKYO3ZwjFK1nlbN/8q6/8nwBCf/Bg6SHV -V93pNxdolRlJev9kgKz4RN1z4jGCy5PAndhSLE82NFIs9LoAiEOU5YeMlN+Ulqus -J92nh+ptUe9a4pJGbAuveUWO7zdS1QyXvTMUcmmSfXCNm/eIQjNuu5+rHtIjWKh8 -Ilwj2w1aTfSptQEhk/kwRgFz/d11vfwJzvwTmCxO6zyOeL0VUWLqdCBGgG5As9He -/RenF8PZ1O0WbTt7fns5oTlTk/MUo+0xJ1xqvu/y45LaqqcBAnEdrWKmtM3dJHWv -ufQku+kD+83F/VwBnQdvgMHu6KZEs6LRrNo58r4QuK6fS7VCACdzxID1RM2cL7kT -6BFRlyGj1aigmjne9g9M9Jx4R+mZDpPU1WDzzG71J4qCUwaX8Dfwutuv4uiFvzwq -NUF0wLJJPtKWmtW+hnZ/fhHQGCRsOpZzFnqp6Zv7J7k6esqxMgIjfal7Djk5Acy8 -j3iVvm6CYmKMVqzL62JHYS9Ye83tzBCaR8hpnJQKgH3FSOFY8HSwrtQSIsl/hSeF -41sgnz0Y+/gkzNeU18qFk+eCZmvljyu+JK0nPYUgpOCJYVBNQpNHz5PUyiAEKhtM 
-IOSdjPRW1Y+Xf4RroJnLPoF24Ijwrow5LCm9hBRY6TPPMMmnIXCd23xcLJ1rMj6g -x4ZikElans+cwuc9wtbb7w01DcpTwQ1+eIV1qV+KIgpnLjRGLhZD4etobBsrwYu/ -vnIwy2QHCKENPb8sbdgp7x2mF7VSX0/7tf+9+i70EBiMzpOKBkiZhtLzm6hOBkEy -ODaWrx4lTTwbSw8Rmtf58APhPFMsjHoNsjiUoK249Y8Y2Ff4fMfqYsXu6VC1n/At -CuWYHc3EfBwFcLJS+RQB9kFk/4FygFBWq4Kj0MqoRruLbKmoGeJKH9q35W0f0NCD -j+iHt3014kMGiuyJe1UDQ6fvEihFFdHuDivFpPAXDt4PTY/WtpDhaGMx23kb54pK -jkAuxpznAB1lK3u9bGRXDasGeHIrNtIlPvgkrWHXvoBVqM7zry8TGtoxp3E3I42Z -cUfDWfB9GqVdrOwvrTzyZsl2uShRkAJaZFZj5aMyYxiptp4gM8CwWiNtOd2EwtRO -LxZX4M02PQFIqXV3FSDA0q6EwglUrTZdAlYeOEkopaKCtG31dEPOSQG3NGJAEYso -Cxm99H7970dp0OAgpNSgRbcWDbhVbQXnRzvFGqLeH6a9dQ/a8uD3s8Qm9Du/kB6d -XxTRe2OGxzcD0AgI8GClE4rIZHCLbcwuJRp0EYcN+pgY80O4U98fZ5RYpU6OYbU/ -MEiaBYFKtZtGkV6AQD568V7hHJWqc5DDfVHUQ/aeQwnKi2vnU66u+nnV2rZxXxLP -+dqeLRpul+wKa5b/Z5SfQ14Ff8s7aVyxaogGpyggyPL1vyq4KWZ6Or/wEE5hgNO4 -kBh6ht0QT1Hti8XY2JK1M+Jgbjgcg4jkHBGVqegrG1Rvcc2A4TYKwx+QMSBhyxrU -5qhROjS4lTcC42hQslMUkUwc4U/Y91XdFbOOnaAkwzI36NRYL0pmgZnYxGJZeRvr -E5foOhnOEVSFGdOkLfFh+FkWZQf56Lmn8Gg2wHE3dZTxLHibiUYfkgOr1uEosq29 -D1NstvlJURPQ0Q+8QQNWcl9nEZHMAjOmnL1hbx+QfuC6seucp+sXGzdZByMLZbvT -tG8KNL293CmyQowgf9MXToWYnwRkcvqfTaKyor2Ggze3JtoFW4t0j4DI1XPciZFX -XmfApHrzdB/bZadzxyaZ2NE0CuH9zDelwI6rz38xsN5liYnp5qmNKVCZVOHccXa6 -J8x365m5/VaaA2RrtdPqKxn8VaKy7+T690QgMXVGM4PbzQzQxHuSleklocqlP+sB -jSMXCZY+ng/i4UmRO9noiyW3UThYh0hIdMYs12EmmI9cnF/OuYZpl30fmqwV+VNM -td5B2fYvAvvsjiX60SFCn3DATP1GrPMBlZSmhhP3GYS+xrWt3Xxta9qIX2BEF1Gg -twnZZRjoULSRFUYPfJPEOfEH2UQwm84wxx/GezVE+S/RpBlatPOgCiLnNNaLfdTC -mTG9qY9elJv3GGQO8Lqgf4i8blExs05lSPk1BDhzTB6H9TLz+Ge0/l1QxKf3gPXU -aImK1azieXMXHECkdKxrzmehwu1dZ/oYOLc/OFQCETwSRoLPFOFpYUpizwmVVHR6 -uLSfRptte4ZOU3zHfpd/0+J4tkwHwEkGzsmMdqudlm7qME6upuIplyVBH8JiXzUK -n1RIH/OPmVEluAnexWRLZNdk7MrakIO4XACVbICENiYQgAIErP568An6twWEGDbZ -bEN64E3cVDTDRPRAunIhhsEaapcxpFEPWlHorxv36nMUt0R0h0bJlCu5QdzckfcX -ZrRuu1kl76ZfbSE8T0G4/rBb9gsU4Gn3WyvLIO3MgFBuxR68ZwcR8LpEUd8qp38H -NG4cxPmN1nGKo663Z+xI2Gt5up4gpl+fOt4mXqxY386rB7yHaOfElMG5TUYdrS9w -1xbbCVgeJ6zxX+NFlndG33cSAPprhw+C18eUu6ZU63WZcYFo3GfK6rs3lvYtofvE -8DxztdTidQedNVNE+63YCjhxd/cZUI5n/UpgYkr9owp7hNGJiR3tdoNLR2gcoGqL -qWhH928k2aSgF2j97LZ2OqoPCp0tUB7ho4jD2u4Ik3GLVNlCc3dCvWRvpHtDTQDv -tujESMfHUc9I2r4S/PD3bku/ABGwa977Yp1PjzJGr9RajA5is5n6GVpyynwjtKG4 -iyyITpdwpCgr8pueTBLwZnas3slmiMOog/E4PmPgctHzvC+vhQijhUtw5zSsmv0l -bZlw/mVhp5Ta7dTcLBKR8DA3m3vTbaEGkz0xpfQr7GfiSMRbJyvIw88pDK0gyTMD diff --git a/test_utils/scripts/circleci/get_tagged_package.py b/test_utils/scripts/circleci/get_tagged_package.py deleted file mode 100644 index c148b9dc23..0000000000 --- a/test_utils/scripts/circleci/get_tagged_package.py +++ /dev/null @@ -1,64 +0,0 @@ -# Copyright 2016 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Helper to determine package from tag. -Get the current package directory corresponding to the Circle Tag. 
-""" - -from __future__ import print_function - -import os -import re -import sys - - -TAG_RE = re.compile(r""" - ^ - (?P - (([a-z]+)[_-])*) # pkg-name-with-hyphens-or-underscores (empty allowed) - ([0-9]+)\.([0-9]+)\.([0-9]+) # Version x.y.z (x, y, z all ints) - $ -""", re.VERBOSE) -TAG_ENV = 'CIRCLE_TAG' -ERROR_MSG = '%s env. var. not set' % (TAG_ENV,) -BAD_TAG_MSG = 'Invalid tag name: %s. Expected pkg-name-x.y.z' -CIRCLE_CI_SCRIPTS_DIR = os.path.dirname(__file__) -ROOT_DIR = os.path.realpath( - os.path.join(CIRCLE_CI_SCRIPTS_DIR, '..', '..', '..')) - - -def main(): - """Get the current package directory. - Prints the package directory out so callers can consume it. - """ - if TAG_ENV not in os.environ: - print(ERROR_MSG, file=sys.stderr) - sys.exit(1) - - tag_name = os.environ[TAG_ENV] - match = TAG_RE.match(tag_name) - if match is None: - print(BAD_TAG_MSG % (tag_name,), file=sys.stderr) - sys.exit(1) - - pkg_name = match.group('pkg') - if pkg_name is None: - print(ROOT_DIR) - else: - pkg_dir = pkg_name.rstrip('-').replace('-', '_') - print(os.path.join(ROOT_DIR, pkg_dir)) - - -if __name__ == '__main__': - main() diff --git a/test_utils/scripts/circleci/twine_upload.sh b/test_utils/scripts/circleci/twine_upload.sh deleted file mode 100755 index 23a4738e90..0000000000 --- a/test_utils/scripts/circleci/twine_upload.sh +++ /dev/null @@ -1,36 +0,0 @@ -#!/bin/bash - -# Copyright 2016 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -set -ev - -# If this is not a CircleCI tag, no-op. -if [[ -z "$CIRCLE_TAG" ]]; then - echo "This is not a release tag. Doing nothing." - exit 0 -fi - -# H/T: http://stackoverflow.com/a/246128/1068170 -SCRIPT="$(dirname "${BASH_SOURCE[0]}")/get_tagged_package.py" -# Determine the package directory being deploying on this tag. -PKG_DIR="$(python ${SCRIPT})" - -# Ensure that we have the latest versions of Twine, Wheel, and Setuptools. -python3 -m pip install --upgrade twine wheel setuptools - -# Move into the package, build the distribution and upload. -cd ${PKG_DIR} -python3 setup.py sdist bdist_wheel -twine upload dist/* diff --git a/test_utils/scripts/get_target_packages.py b/test_utils/scripts/get_target_packages.py deleted file mode 100644 index 1d51830cc2..0000000000 --- a/test_utils/scripts/get_target_packages.py +++ /dev/null @@ -1,268 +0,0 @@ -# Copyright 2017 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""Print a list of packages which require testing.""" - -import os -import re -import subprocess -import warnings - - -CURRENT_DIR = os.path.realpath(os.path.dirname(__file__)) -BASE_DIR = os.path.realpath(os.path.join(CURRENT_DIR, '..', '..')) -GITHUB_REPO = os.environ.get('GITHUB_REPO', 'google-cloud-python') -CI = os.environ.get('CI', '') -CI_BRANCH = os.environ.get('CIRCLE_BRANCH') -CI_PR = os.environ.get('CIRCLE_PR_NUMBER') -CIRCLE_TAG = os.environ.get('CIRCLE_TAG') -head_hash, head_name = subprocess.check_output(['git', 'show-ref', 'HEAD'] -).strip().decode('ascii').split() -rev_parse = subprocess.check_output( - ['git', 'rev-parse', '--abbrev-ref', 'HEAD'] -).strip().decode('ascii') -MAJOR_DIV = '#' * 78 -MINOR_DIV = '#' + '-' * 77 - -# NOTE: This reg-ex is copied from ``get_tagged_packages``. -TAG_RE = re.compile(r""" - ^ - (?P - (([a-z]+)-)*) # pkg-name-with-hyphens- (empty allowed) - ([0-9]+)\.([0-9]+)\.([0-9]+) # Version x.y.z (x, y, z all ints) - $ -""", re.VERBOSE) - -# This is the current set of dependencies by package. -# As of this writing, the only "real" dependency is that of error_reporting -# (on logging), the rest are just system test dependencies. -PKG_DEPENDENCIES = { - 'logging': {'pubsub'}, -} - - -def get_baseline(): - """Return the baseline commit. - - On a pull request, or on a branch, return the common parent revision - with the master branch. - - Locally, return a value pulled from environment variables, or None if - the environment variables are not set. - - On a push to master, return None. This will effectively cause everything - to be considered to be affected. - """ - - # If this is a pull request or branch, return the tip for master. - # We will test only packages which have changed since that point. - ci_non_master = (CI == 'true') and any([CI_BRANCH != 'master', CI_PR]) - - if ci_non_master: - - repo_url = 'git@github.com:GoogleCloudPlatform/{}'.format(GITHUB_REPO) - subprocess.run(['git', 'remote', 'add', 'baseline', repo_url], - stderr=subprocess.DEVNULL) - subprocess.run(['git', 'pull', 'baseline'], stderr=subprocess.DEVNULL) - - if CI_PR is None and CI_BRANCH is not None: - output = subprocess.check_output([ - 'git', 'merge-base', '--fork-point', - 'baseline/master', CI_BRANCH]) - return output.strip().decode('ascii') - - return 'baseline/master' - - # If environment variables are set identifying what the master tip is, - # use that. - if os.environ.get('GOOGLE_CLOUD_TESTING_REMOTE', ''): - remote = os.environ['GOOGLE_CLOUD_TESTING_REMOTE'] - branch = os.environ.get('GOOGLE_CLOUD_TESTING_BRANCH', 'master') - return '%s/%s' % (remote, branch) - - # If we are not in CI and we got this far, issue a warning. - if not CI: - warnings.warn('No baseline could be determined; this means tests ' - 'will run for every package. If this is local ' - 'development, set the $GOOGLE_CLOUD_TESTING_REMOTE ' - 'environment variable.') - - # That is all we can do; return None. - return None - - -def get_changed_files(): - """Return a list of files that have been changed since the baseline. - - If there is no base, return None. - """ - # Get the baseline, and fail quickly if there is no baseline. - baseline = get_baseline() - print('# Baseline commit: {}'.format(baseline)) - if not baseline: - return None - - # Return a list of altered files. 
- try: - return subprocess.check_output([ - 'git', 'diff', '--name-only', '{}..HEAD'.format(baseline), - ], stderr=subprocess.DEVNULL).decode('utf8').strip().split('\n') - except subprocess.CalledProcessError: - warnings.warn('Unable to perform git diff; falling back to assuming ' - 'all packages have changed.') - return None - - -def reverse_map(dict_of_sets): - """Reverse a map of one-to-many. - - So the map:: - - { - 'A': {'B', 'C'}, - 'B': {'C'}, - } - - becomes - - { - 'B': {'A'}, - 'C': {'A', 'B'}, - } - - Args: - dict_of_sets (dict[set]): A dictionary of sets, mapping - one value to many. - - Returns: - dict[set]: The reversed map. - """ - result = {} - for key, values in dict_of_sets.items(): - for value in values: - result.setdefault(value, set()).add(key) - - return result - -def get_changed_packages(file_list): - """Return a list of changed packages based on the provided file list. - - If the file list is None, then all packages should be considered to be - altered. - """ - # Determine a complete list of packages. - all_packages = set() - for file_ in os.listdir(BASE_DIR): - abs_file = os.path.realpath(os.path.join(BASE_DIR, file_)) - nox_file = os.path.join(abs_file, 'nox.py') - if os.path.isdir(abs_file) and os.path.isfile(nox_file): - all_packages.add(file_) - - # If ther is no file list, send down the full package set. - if file_list is None: - return all_packages - - # Create a set based on the list of changed files. - answer = set() - reverse_deps = reverse_map(PKG_DEPENDENCIES) - for file_ in file_list: - # Ignore root directory changes (setup.py, .gitignore, etc.). - if os.path.sep not in file_: - continue - - # Ignore changes that are not in a package (usually this will be docs). - package = file_.split(os.path.sep, 1)[0] - if package not in all_packages: - continue - - # If there is a change in core, short-circuit now and return - # everything. - if package in ('core',): - return all_packages - - # Add the package, as well as any dependencies this package has. - # NOTE: For now, dependencies only go down one level. - answer.add(package) - answer = answer.union(reverse_deps.get(package, set())) - - # We got this far without being short-circuited; return the final answer. - return answer - - -def get_tagged_package(): - """Return the package corresponding to the current tag. - - If there is not tag, will return :data:`None`. - """ - if CIRCLE_TAG is None: - return - - match = TAG_RE.match(CIRCLE_TAG) - if match is None: - return - - pkg_name = match.group('pkg') - if pkg_name == '': - # NOTE: This corresponds to the "umbrella" tag. - return - - return pkg_name.rstrip('-').replace('-', '_') - - -def get_target_packages(): - """Return a list of target packages to be run in the current build. - - If in a tag build, will run only the package(s) that are tagged, otherwise - will run the packages that have file changes in them (or packages that - depend on those). 
- """ - tagged_package = get_tagged_package() - if tagged_package is None: - file_list = get_changed_files() - print(MAJOR_DIV) - print('# Changed files:') - print(MINOR_DIV) - for file_ in file_list or (): - print('# {}'.format(file_)) - for package in sorted(get_changed_packages(file_list)): - yield package - else: - yield tagged_package - - -def main(): - print(MAJOR_DIV) - print('# Environment') - print(MINOR_DIV) - print('# CircleCI: {}'.format(CI)) - print('# CircleCI branch: {}'.format(CI_BRANCH)) - print('# CircleCI pr: {}'.format(CI_PR)) - print('# CircleCI tag: {}'.format(CIRCLE_TAG)) - print('# HEAD ref: {}'.format(head_hash)) - print('# {}'.format(head_name)) - print('# Git branch: {}'.format(rev_parse)) - print(MAJOR_DIV) - - packages = list(get_target_packages()) - - print(MAJOR_DIV) - print('# Target packages:') - print(MINOR_DIV) - for package in packages: - print(package) - print(MAJOR_DIV) - - -if __name__ == '__main__': - main() diff --git a/test_utils/scripts/get_target_packages_kokoro.py b/test_utils/scripts/get_target_packages_kokoro.py deleted file mode 100644 index 27d3a0c940..0000000000 --- a/test_utils/scripts/get_target_packages_kokoro.py +++ /dev/null @@ -1,98 +0,0 @@ -# Copyright 2017 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Print a list of packages which require testing.""" - -import pathlib -import subprocess - -import ci_diff_helper -import requests - - -def print_environment(environment): - print("-> CI environment:") - print('Branch', environment.branch) - print('PR', environment.pr) - print('In PR', environment.in_pr) - print('Repo URL', environment.repo_url) - if environment.in_pr: - print('PR Base', environment.base) - - -def get_base(environment): - if environment.in_pr: - return environment.base - else: - # If we're not in a PR, just calculate the changes between this commit - # and its parent. 
- return 'HEAD~1' - - -def get_changed_files_from_base(base): - return subprocess.check_output([ - 'git', 'diff', '--name-only', f'{base}..HEAD', - ], stderr=subprocess.DEVNULL).decode('utf8').strip().split('\n') - - -_URL_TEMPLATE = ( - 'https://api.github.com/repos/googleapis/google-cloud-python/pulls/' - '{}/files' -) - - -def get_changed_files_from_pr(pr): - url = _URL_TEMPLATE.format(pr) - while url is not None: - response = requests.get(url) - for info in response.json(): - yield info['filename'] - url = response.links.get('next', {}).get('url') - - -def determine_changed_packages(changed_files): - packages = [ - path.parent for path in pathlib.Path('.').glob('*/noxfile.py') - ] - - changed_packages = set() - for file in changed_files: - file = pathlib.Path(file) - for package in packages: - if package in file.parents: - changed_packages.add(package) - - return changed_packages - - -def main(): - environment = ci_diff_helper.get_config() - print_environment(environment) - base = get_base(environment) - - if environment.in_pr: - changed_files = list(get_changed_files_from_pr(environment.pr)) - else: - changed_files = get_changed_files_from_base(base) - - packages = determine_changed_packages(changed_files) - - print(f"Comparing against {base}.") - print("-> Changed packages:") - - for package in packages: - print(package) - - -main() diff --git a/test_utils/scripts/run_emulator.py b/test_utils/scripts/run_emulator.py deleted file mode 100644 index 287b086406..0000000000 --- a/test_utils/scripts/run_emulator.py +++ /dev/null @@ -1,199 +0,0 @@ -# Copyright 2016 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Run system tests locally with the emulator. - -First makes system calls to spawn the emulator and get the local environment -variable needed for it. Then calls the system tests. -""" - - -import argparse -import os -import subprocess - -import psutil - -from google.cloud.environment_vars import BIGTABLE_EMULATOR -from google.cloud.environment_vars import GCD_DATASET -from google.cloud.environment_vars import GCD_HOST -from google.cloud.environment_vars import PUBSUB_EMULATOR -from run_system_test import run_module_tests - - -BIGTABLE = 'bigtable' -DATASTORE = 'datastore' -PUBSUB = 'pubsub' -PACKAGE_INFO = { - BIGTABLE: (BIGTABLE_EMULATOR,), - DATASTORE: (GCD_DATASET, GCD_HOST), - PUBSUB: (PUBSUB_EMULATOR,), -} -EXTRA = { - DATASTORE: ('--no-legacy',), -} -_DS_READY_LINE = '[datastore] Dev App Server is now running.\n' -_PS_READY_LINE_PREFIX = '[pubsub] INFO: Server started, listening on ' -_BT_READY_LINE_PREFIX = '[bigtable] Cloud Bigtable emulator running on ' - - -def get_parser(): - """Get simple ``argparse`` parser to determine package. - - :rtype: :class:`argparse.ArgumentParser` - :returns: The parser for this script. 
- """ - parser = argparse.ArgumentParser( - description='Run google-cloud system tests against local emulator.') - parser.add_argument('--package', dest='package', - choices=sorted(PACKAGE_INFO.keys()), - default=DATASTORE, help='Package to be tested.') - return parser - - -def get_start_command(package): - """Get command line arguments for starting emulator. - - :type package: str - :param package: The package to start an emulator for. - - :rtype: tuple - :returns: The arguments to be used, in a tuple. - """ - result = ('gcloud', 'beta', 'emulators', package, 'start') - extra = EXTRA.get(package, ()) - return result + extra - - -def get_env_init_command(package): - """Get command line arguments for getting emulator env. info. - - :type package: str - :param package: The package to get environment info for. - - :rtype: tuple - :returns: The arguments to be used, in a tuple. - """ - result = ('gcloud', 'beta', 'emulators', package, 'env-init') - extra = EXTRA.get(package, ()) - return result + extra - - -def datastore_wait_ready(popen): - """Wait until the datastore emulator is ready to use. - - :type popen: :class:`subprocess.Popen` - :param popen: An open subprocess to interact with. - """ - emulator_ready = False - while not emulator_ready: - emulator_ready = popen.stderr.readline() == _DS_READY_LINE - - -def wait_ready_prefix(popen, prefix): - """Wait until the a process encounters a line with matching prefix. - - :type popen: :class:`subprocess.Popen` - :param popen: An open subprocess to interact with. - - :type prefix: str - :param prefix: The prefix to match - """ - emulator_ready = False - while not emulator_ready: - emulator_ready = popen.stderr.readline().startswith(prefix) - - -def wait_ready(package, popen): - """Wait until the emulator is ready to use. - - :type package: str - :param package: The package to check if ready. - - :type popen: :class:`subprocess.Popen` - :param popen: An open subprocess to interact with. - - :raises: :class:`KeyError` if the ``package`` is not among - ``datastore``, ``pubsub`` or ``bigtable``. - """ - if package == DATASTORE: - datastore_wait_ready(popen) - elif package == PUBSUB: - wait_ready_prefix(popen, _PS_READY_LINE_PREFIX) - elif package == BIGTABLE: - wait_ready_prefix(popen, _BT_READY_LINE_PREFIX) - else: - raise KeyError('Package not supported', package) - - -def cleanup(pid): - """Cleanup a process (including all of its children). - - :type pid: int - :param pid: Process ID. - """ - proc = psutil.Process(pid) - for child_proc in proc.children(recursive=True): - try: - child_proc.kill() - child_proc.terminate() - except psutil.NoSuchProcess: - pass - proc.terminate() - proc.kill() - - -def run_tests_in_emulator(package): - """Spawn an emulator instance and run the system tests. - - :type package: str - :param package: The package to run system tests against. - """ - # Make sure this package has environment vars to replace. - env_vars = PACKAGE_INFO[package] - - start_command = get_start_command(package) - # Ignore stdin and stdout, don't pollute the user's output with them. 
- proc_start = subprocess.Popen(start_command, stdout=subprocess.PIPE, - stderr=subprocess.PIPE) - try: - wait_ready(package, proc_start) - env_init_command = get_env_init_command(package) - proc_env = subprocess.Popen(env_init_command, stdout=subprocess.PIPE, - stderr=subprocess.PIPE) - env_status = proc_env.wait() - if env_status != 0: - raise RuntimeError(env_status, proc_env.stderr.read()) - env_lines = proc_env.stdout.read().strip().split('\n') - # Set environment variables before running the system tests. - for env_var in env_vars: - line_prefix = 'export ' + env_var + '=' - value, = [line.split(line_prefix, 1)[1] for line in env_lines - if line.startswith(line_prefix)] - os.environ[env_var] = value - run_module_tests(package, - ignore_requirements=True) - finally: - cleanup(proc_start.pid) - - -def main(): - """Main method to run this script.""" - parser = get_parser() - args = parser.parse_args() - run_tests_in_emulator(args.package) - - -if __name__ == '__main__': - main() diff --git a/test_utils/scripts/update_docs.sh b/test_utils/scripts/update_docs.sh deleted file mode 100755 index 8cbab9f0da..0000000000 --- a/test_utils/scripts/update_docs.sh +++ /dev/null @@ -1,93 +0,0 @@ -#!/bin/bash - -# Copyright 2016 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -set -ev - -GH_OWNER='GoogleCloudPlatform' -GH_PROJECT_NAME='google-cloud-python' - -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" - -# Function to build the docs. -function build_docs { - rm -rf docs/_build/ - rm -f docs/bigquery/generated/*.rst - # -W -> warnings as errors - # -T -> show full traceback on exception - # -N -> no color - sphinx-build \ - -W -T -N \ - -b html \ - -d docs/_build/doctrees \ - docs/ \ - docs/_build/html/ - return $? -} - -# Only update docs if we are on CircleCI. -if [[ "${CIRCLE_BRANCH}" == "master" ]] && [[ -z "${CIRCLE_PR_NUMBER}" ]]; then - echo "Building new docs on a merged commit." -elif [[ "$1" == "kokoro" ]]; then - echo "Building and publishing docs on Kokoro." -elif [[ -n "${CIRCLE_TAG}" ]]; then - echo "Building new docs on a tag (but will not deploy)." - build_docs - exit $? -else - echo "Not on master nor a release tag." - echo "Building new docs for testing purposes, but not deploying." - build_docs - exit $? -fi - -# Adding GitHub pages branch. `git submodule add` checks it -# out at HEAD. -GH_PAGES_DIR='ghpages' -git submodule add -q -b gh-pages \ - "git@github.com:${GH_OWNER}/${GH_PROJECT_NAME}" ${GH_PAGES_DIR} - -# Determine if we are building a new tag or are building docs -# for master. Then build new docs in docs/_build from master. -if [[ -n "${CIRCLE_TAG}" ]]; then - # Sphinx will use the package version by default. - build_docs -else - SPHINX_RELEASE=$(git log -1 --pretty=%h) build_docs -fi - -# Update gh-pages with the created docs. -cd ${GH_PAGES_DIR} -git rm -fr latest/ -cp -R ../docs/_build/html/ latest/ - -# Update the files push to gh-pages. -git add . -git status - -# If there are no changes, just exit cleanly. 
-if [[ -z "$(git status --porcelain)" ]]; then - echo "Nothing to commit. Exiting without pushing changes." - exit -fi - -# Commit to gh-pages branch to apply changes. -git config --global user.email "dpebot@google.com" -git config --global user.name "dpebot" -git commit -m "Update docs after merge to master." - -# NOTE: This may fail if two docs updates (on merges to master) -# happen in close proximity. -git push -q origin HEAD:gh-pages diff --git a/test_utils/setup.py b/test_utils/setup.py deleted file mode 100644 index 8e9222a7f8..0000000000 --- a/test_utils/setup.py +++ /dev/null @@ -1,64 +0,0 @@ -# Copyright 2017 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import os - -from setuptools import find_packages -from setuptools import setup - - -PACKAGE_ROOT = os.path.abspath(os.path.dirname(__file__)) - - -# NOTE: This is duplicated throughout and we should try to -# consolidate. -SETUP_BASE = { - 'author': 'Google Cloud Platform', - 'author_email': 'googleapis-publisher@google.com', - 'scripts': [], - 'url': 'https://github.com/GoogleCloudPlatform/google-cloud-python', - 'license': 'Apache 2.0', - 'platforms': 'Posix; MacOS X; Windows', - 'include_package_data': True, - 'zip_safe': False, - 'classifiers': [ - 'Development Status :: 4 - Beta', - 'Intended Audience :: Developers', - 'License :: OSI Approved :: Apache Software License', - 'Operating System :: OS Independent', - 'Programming Language :: Python :: 2', - 'Programming Language :: Python :: 2.7', - 'Programming Language :: Python :: 3', - 'Programming Language :: Python :: 3.5', - 'Programming Language :: Python :: 3.6', - 'Programming Language :: Python :: 3.7', - 'Topic :: Internet', - ], -} - - -REQUIREMENTS = [ - 'google-auth >= 0.4.0', - 'six', -] - -setup( - name='google-cloud-testutils', - version='0.24.0', - description='System test utilities for google-cloud-python', - packages=find_packages(), - install_requires=REQUIREMENTS, - python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*', - **SETUP_BASE -) diff --git a/test_utils/test_utils/__init__.py b/test_utils/test_utils/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/test_utils/test_utils/imports.py b/test_utils/test_utils/imports.py deleted file mode 100644 index 5991af7fc4..0000000000 --- a/test_utils/test_utils/imports.py +++ /dev/null @@ -1,38 +0,0 @@ -# Copyright 2019 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import mock -import six - - -def maybe_fail_import(predicate): - """Create and return a patcher that conditionally makes an import fail. - - Args: - predicate (Callable[[...], bool]): A callable that, if it returns `True`, - triggers an `ImportError`. It must accept the same arguments as the - built-in `__import__` function. - https://docs.python.org/3/library/functions.html#__import__ - - Returns: - A mock patcher object that can be used to enable patched import behavior. - """ - orig_import = six.moves.builtins.__import__ - - def custom_import(name, globals=None, locals=None, fromlist=(), level=0): - if predicate(name, globals, locals, fromlist, level): - raise ImportError - return orig_import(name, globals, locals, fromlist, level) - - return mock.patch.object(six.moves.builtins, "__import__", new=custom_import) diff --git a/test_utils/test_utils/retry.py b/test_utils/test_utils/retry.py deleted file mode 100644 index e61c001a03..0000000000 --- a/test_utils/test_utils/retry.py +++ /dev/null @@ -1,207 +0,0 @@ -# Copyright 2016 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import time -from functools import wraps - -import six - -MAX_TRIES = 4 -DELAY = 1 -BACKOFF = 2 - - -def _retry_all(_): - """Retry all caught exceptions.""" - return True - - -class BackoffFailed(Exception): - """Retry w/ backoffs did not complete successfully.""" - - -class RetryBase(object): - """Base for retrying calling a decorated function w/ exponential backoff. - - :type max_tries: int - :param max_tries: Number of times to try (not retry) before giving up. - - :type delay: int - :param delay: Initial delay between retries in seconds. - - :type backoff: int - :param backoff: Backoff multiplier e.g. value of 2 will double the - delay each retry. - - :type logger: logging.Logger instance - :param logger: Logger to use. If None, print. - """ - def __init__(self, max_tries=MAX_TRIES, delay=DELAY, backoff=BACKOFF, - logger=None): - self.max_tries = max_tries - self.delay = delay - self.backoff = backoff - self.logger = logger.warning if logger else six.print_ - - -class RetryErrors(RetryBase): - """Decorator for retrying given exceptions in testing. - - :type exception: Exception or tuple of Exceptions - :param exception: The exception to check or may be a tuple of - exceptions to check. - - :type error_predicate: function, takes caught exception, returns bool - :param error_predicate: Predicate evaluating whether to retry after a - caught exception. - - :type max_tries: int - :param max_tries: Number of times to try (not retry) before giving up. - - :type delay: int - :param delay: Initial delay between retries in seconds. - - :type backoff: int - :param backoff: Backoff multiplier e.g. value of 2 will double the - delay each retry. - - :type logger: logging.Logger instance - :param logger: Logger to use. If None, print. 
- """ - def __init__(self, exception, error_predicate=_retry_all, - max_tries=MAX_TRIES, delay=DELAY, backoff=BACKOFF, - logger=None): - super(RetryErrors, self).__init__(max_tries, delay, backoff, logger) - self.exception = exception - self.error_predicate = error_predicate - - def __call__(self, to_wrap): - @wraps(to_wrap) - def wrapped_function(*args, **kwargs): - tries = 0 - while tries < self.max_tries: - try: - return to_wrap(*args, **kwargs) - except self.exception as caught_exception: - - if not self.error_predicate(caught_exception): - raise - - delay = self.delay * self.backoff**tries - msg = ("%s, Trying again in %d seconds..." % - (caught_exception, delay)) - self.logger(msg) - - time.sleep(delay) - tries += 1 - return to_wrap(*args, **kwargs) - - return wrapped_function - - -class RetryResult(RetryBase): - """Decorator for retrying based on non-error result. - - :type result_predicate: function, takes result, returns bool - :param result_predicate: Predicate evaluating whether to retry after a - result is returned. - - :type max_tries: int - :param max_tries: Number of times to try (not retry) before giving up. - - :type delay: int - :param delay: Initial delay between retries in seconds. - - :type backoff: int - :param backoff: Backoff multiplier e.g. value of 2 will double the - delay each retry. - - :type logger: logging.Logger instance - :param logger: Logger to use. If None, print. - """ - def __init__(self, result_predicate, - max_tries=MAX_TRIES, delay=DELAY, backoff=BACKOFF, - logger=None): - super(RetryResult, self).__init__(max_tries, delay, backoff, logger) - self.result_predicate = result_predicate - - def __call__(self, to_wrap): - @wraps(to_wrap) - def wrapped_function(*args, **kwargs): - tries = 0 - while tries < self.max_tries: - result = to_wrap(*args, **kwargs) - if self.result_predicate(result): - return result - - delay = self.delay * self.backoff**tries - msg = "%s. Trying again in %d seconds..." % ( - self.result_predicate.__name__, delay,) - self.logger(msg) - - time.sleep(delay) - tries += 1 - raise BackoffFailed() - - return wrapped_function - - -class RetryInstanceState(RetryBase): - """Decorator for retrying based on instance state. - - :type instance_predicate: function, takes instance, returns bool - :param instance_predicate: Predicate evaluating whether to retry after an - API-invoking method is called. - - :type max_tries: int - :param max_tries: Number of times to try (not retry) before giving up. - - :type delay: int - :param delay: Initial delay between retries in seconds. - - :type backoff: int - :param backoff: Backoff multiplier e.g. value of 2 will double the - delay each retry. - - :type logger: logging.Logger instance - :param logger: Logger to use. If None, print. - """ - def __init__(self, instance_predicate, - max_tries=MAX_TRIES, delay=DELAY, backoff=BACKOFF, - logger=None): - super(RetryInstanceState, self).__init__( - max_tries, delay, backoff, logger) - self.instance_predicate = instance_predicate - - def __call__(self, to_wrap): - instance = to_wrap.__self__ # only instance methods allowed - - @wraps(to_wrap) - def wrapped_function(*args, **kwargs): - tries = 0 - while tries < self.max_tries: - result = to_wrap(*args, **kwargs) - if self.instance_predicate(instance): - return result - - delay = self.delay * self.backoff**tries - msg = "%s. Trying again in %d seconds..." 
% ( - self.instance_predicate.__name__, delay,) - self.logger(msg) - - time.sleep(delay) - tries += 1 - raise BackoffFailed() - - return wrapped_function diff --git a/test_utils/test_utils/system.py b/test_utils/test_utils/system.py deleted file mode 100644 index 590dc62a06..0000000000 --- a/test_utils/test_utils/system.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright 2014 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from __future__ import print_function -import os -import sys -import time - -import google.auth.credentials -from google.auth.environment_vars import CREDENTIALS as TEST_CREDENTIALS - - -# From shell environ. May be None. -CREDENTIALS = os.getenv(TEST_CREDENTIALS) - -ENVIRON_ERROR_MSG = """\ -To run the system tests, you need to set some environment variables. -Please check the CONTRIBUTING guide for instructions. -""" - - -class EmulatorCreds(google.auth.credentials.Credentials): - """A mock credential object. - - Used to avoid unnecessary token refreshing or reliance on the network - while an emulator is running. - """ - - def __init__(self): # pylint: disable=super-init-not-called - self.token = b'seekrit' - self.expiry = None - - @property - def valid(self): - """Would-be validity check of the credentials. - - Always is :data:`True`. - """ - return True - - def refresh(self, unused_request): # pylint: disable=unused-argument - """Off-limits implementation for abstract method.""" - raise RuntimeError('Should never be refreshed.') - - -def check_environ(): - err_msg = None - if CREDENTIALS is None: - err_msg = '\nMissing variables: ' + TEST_CREDENTIALS - elif not os.path.isfile(CREDENTIALS): - err_msg = '\nThe %s path %r is not a file.' % (TEST_CREDENTIALS, - CREDENTIALS) - - if err_msg is not None: - msg = ENVIRON_ERROR_MSG + err_msg - print(msg, file=sys.stderr) - sys.exit(1) - - -def unique_resource_id(delimiter='_'): - """A unique identifier for a resource. - - Intended to help locate resources created in particular - testing environments and at particular times. - """ - build_id = os.getenv('CIRCLE_BUILD_NUM', '') - if build_id == '': - return '%s%d' % (delimiter, 1000 * time.time()) - else: - return '%s%s%s%d' % (delimiter, build_id, delimiter, time.time()) diff --git a/testing/.gitignore b/testing/.gitignore new file mode 100644 index 0000000000..b05fbd6308 --- /dev/null +++ b/testing/.gitignore @@ -0,0 +1,3 @@ +test-env.sh +service-account.json +client-secrets.json \ No newline at end of file diff --git a/testing/constraints-3.10.txt b/testing/constraints-3.10.txt new file mode 100644 index 0000000000..ad3f0fa58e --- /dev/null +++ b/testing/constraints-3.10.txt @@ -0,0 +1,7 @@ +# -*- coding: utf-8 -*- +# This constraints file is required for unit tests. +# List all library dependencies and extras in this file. 
+google-api-core +proto-plus +protobuf +grpc-google-iam-v1 diff --git a/testing/constraints-3.11.txt b/testing/constraints-3.11.txt new file mode 100644 index 0000000000..ad3f0fa58e --- /dev/null +++ b/testing/constraints-3.11.txt @@ -0,0 +1,7 @@ +# -*- coding: utf-8 -*- +# This constraints file is required for unit tests. +# List all library dependencies and extras in this file. +google-api-core +proto-plus +protobuf +grpc-google-iam-v1 diff --git a/testing/constraints-3.12.txt b/testing/constraints-3.12.txt new file mode 100644 index 0000000000..ad3f0fa58e --- /dev/null +++ b/testing/constraints-3.12.txt @@ -0,0 +1,7 @@ +# -*- coding: utf-8 -*- +# This constraints file is required for unit tests. +# List all library dependencies and extras in this file. +google-api-core +proto-plus +protobuf +grpc-google-iam-v1 diff --git a/testing/constraints-3.13.txt b/testing/constraints-3.13.txt new file mode 100644 index 0000000000..2010e549cc --- /dev/null +++ b/testing/constraints-3.13.txt @@ -0,0 +1,12 @@ +# We use the constraints file for the latest Python version +# (currently this file) to check that the latest +# major versions of dependencies are supported in setup.py. +# List all library dependencies and extras in this file. +# Require the latest major version be installed for each dependency. +# e.g., if setup.py has "google-cloud-foo >= 1.14.0, < 2.0.0", +# Then this file should have google-cloud-foo>=1 +google-api-core>=2 +google-auth>=2 +proto-plus>=1 +protobuf>=6 +grpc-google-iam-v1>=0 diff --git a/testing/constraints-3.8.txt b/testing/constraints-3.8.txt new file mode 100644 index 0000000000..ad3f0fa58e --- /dev/null +++ b/testing/constraints-3.8.txt @@ -0,0 +1,7 @@ +# -*- coding: utf-8 -*- +# This constraints file is required for unit tests. +# List all library dependencies and extras in this file. +google-api-core +proto-plus +protobuf +grpc-google-iam-v1 diff --git a/testing/constraints-3.9.txt b/testing/constraints-3.9.txt new file mode 100644 index 0000000000..ad3f0fa58e --- /dev/null +++ b/testing/constraints-3.9.txt @@ -0,0 +1,7 @@ +# -*- coding: utf-8 -*- +# This constraints file is required for unit tests. +# List all library dependencies and extras in this file. +google-api-core +proto-plus +protobuf +grpc-google-iam-v1 diff --git a/tests/__init__.py b/tests/__init__.py index e69de29bb2..cbf94b283c 100644 --- a/tests/__init__.py +++ b/tests/__init__.py @@ -0,0 +1,15 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# diff --git a/tests/_builders.py b/tests/_builders.py new file mode 100644 index 0000000000..c2733be6de --- /dev/null +++ b/tests/_builders.py @@ -0,0 +1,231 @@ +# Copyright 2025 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from datetime import datetime
+from logging import Logger
+from mock import create_autospec
+from typing import Mapping
+
+from google.auth.credentials import Credentials, Scoped
+from google.cloud.spanner_dbapi import Connection
+from google.cloud.spanner_v1 import SpannerClient
+from google.cloud.spanner_v1.client import Client
+from google.cloud.spanner_v1.database import Database
+from google.cloud.spanner_v1.instance import Instance
+from google.cloud.spanner_v1.session import Session
+from google.cloud.spanner_v1.transaction import Transaction
+
+from google.cloud.spanner_v1.types import (
+    CommitResponse as CommitResponsePB,
+    MultiplexedSessionPrecommitToken as PrecommitTokenPB,
+    Session as SessionPB,
+    Transaction as TransactionPB,
+)
+
+from google.cloud._helpers import _datetime_to_pb_timestamp
+
+# Default values used to populate required or expected attributes.
+# Tests should not depend on them: if a test requires a specific
+# identifier or name, it should set it explicitly.
+_PROJECT_ID = "default-project-id"
+_INSTANCE_ID = "default-instance-id"
+_DATABASE_ID = "default-database-id"
+_SESSION_ID = "default-session-id"
+
+_PROJECT_NAME = "projects/" + _PROJECT_ID
+_INSTANCE_NAME = _PROJECT_NAME + "/instances/" + _INSTANCE_ID
+_DATABASE_NAME = _INSTANCE_NAME + "/databases/" + _DATABASE_ID
+_SESSION_NAME = _DATABASE_NAME + "/sessions/" + _SESSION_ID
+
+_TRANSACTION_ID = b"default-transaction-id"
+_PRECOMMIT_TOKEN = b"default-precommit-token"
+_SEQUENCE_NUMBER = -1
+_TIMESTAMP = _datetime_to_pb_timestamp(datetime.now())
+
+# Protocol buffers
+# ----------------
+
+
+def build_commit_response_pb(**kwargs) -> CommitResponsePB:
+    """Builds and returns a commit response protocol buffer for testing using the given arguments.
+    If an expected argument is not provided, a default value will be used."""
+
+    if "commit_timestamp" not in kwargs:
+        kwargs["commit_timestamp"] = _TIMESTAMP
+
+    return CommitResponsePB(**kwargs)
+
+
+def build_precommit_token_pb(**kwargs) -> PrecommitTokenPB:
+    """Builds and returns a multiplexed session precommit token protocol buffer for
+    testing using the given arguments. If an expected argument is not provided, a
+    default value will be used."""
+
+    if "precommit_token" not in kwargs:
+        kwargs["precommit_token"] = _PRECOMMIT_TOKEN
+
+    if "seq_num" not in kwargs:
+        kwargs["seq_num"] = _SEQUENCE_NUMBER
+
+    return PrecommitTokenPB(**kwargs)
+
+
+def build_session_pb(**kwargs) -> SessionPB:
+    """Builds and returns a session protocol buffer for testing using the given arguments.
+    If an expected argument is not provided, a default value will be used."""
+
+    if "name" not in kwargs:
+        kwargs["name"] = _SESSION_NAME
+
+    return SessionPB(**kwargs)
+
+
+def build_transaction_pb(**kwargs) -> TransactionPB:
+    """Builds and returns a transaction protocol buffer for testing using the given arguments.
+ If an expected argument is not provided, a default value will be used.""" + + if "id" not in kwargs: + kwargs["id"] = _TRANSACTION_ID + + return TransactionPB(**kwargs) + + +# Client classes +# -------------- + + +def build_client(**kwargs: Mapping) -> Client: + """Builds and returns a client for testing using the given arguments. + If a required argument is not provided, a default value will be used.""" + + if "project" not in kwargs: + kwargs["project"] = _PROJECT_ID + + if "credentials" not in kwargs: + kwargs["credentials"] = build_scoped_credentials() + + return Client(**kwargs) + + +def build_connection(**kwargs: Mapping) -> Connection: + """Builds and returns a connection for testing using the given arguments. + If a required argument is not provided, a default value will be used.""" + + if "instance" not in kwargs: + kwargs["instance"] = build_instance() + + if "database" not in kwargs: + kwargs["database"] = build_database(instance=kwargs["instance"]) + + return Connection(**kwargs) + + +def build_database(**kwargs: Mapping) -> Database: + """Builds and returns a database for testing using the given arguments. + If a required argument is not provided, a default value will be used.""" + + if "database_id" not in kwargs: + kwargs["database_id"] = _DATABASE_ID + + if "logger" not in kwargs: + kwargs["logger"] = build_logger() + + if "instance" not in kwargs: + kwargs["instance"] = build_instance() + + database = Database(**kwargs) + database._spanner_api = build_spanner_api() + + return database + + +def build_instance(**kwargs: Mapping) -> Instance: + """Builds and returns an instance for testing using the given arguments. + If a required argument is not provided, a default value will be used.""" + + if "instance_id" not in kwargs: + kwargs["instance_id"] = _INSTANCE_ID + + if "client" not in kwargs: + kwargs["client"] = build_client() + + return Instance(**kwargs) + + +def build_session(**kwargs: Mapping) -> Session: + """Builds and returns a session for testing using the given arguments. + If a required argument is not provided, a default value will be used.""" + + if "database" not in kwargs: + kwargs["database"] = build_database() + + return Session(**kwargs) + + +def build_snapshot(**kwargs): + """Builds and returns a snapshot for testing using the given arguments. + If a required argument is not provided, a default value will be used.""" + + session = kwargs.pop("session", build_session()) + + # Ensure session exists. + if session.session_id is None: + session._session_id = _SESSION_ID + + return session.snapshot(**kwargs) + + +def build_transaction(session=None) -> Transaction: + """Builds and returns a transaction for testing using the given arguments. + If a required argument is not provided, a default value will be used.""" + + session = session or build_session() + + # Ensure session exists. + if session.session_id is None: + session._session_id = _SESSION_ID + + return session.transaction() + + +# Other classes +# ------------- + + +def build_logger() -> Logger: + """Builds and returns a logger for testing.""" + + return create_autospec(Logger, instance=True) + + +def build_scoped_credentials() -> Credentials: + """Builds and returns a mock scoped credentials for testing.""" + + class _ScopedCredentials(Credentials, Scoped): + pass + + return create_autospec(spec=_ScopedCredentials, instance=True) + + +def build_spanner_api() -> SpannerClient: + """Builds and returns a mock Spanner Client API for testing using the given arguments. 
+    Commonly used methods are mocked to return default values."""
+
+    api = create_autospec(SpannerClient, instance=True)
+
+    # Mock API calls with default return values.
+    api.begin_transaction.return_value = build_transaction_pb()
+    api.commit.return_value = build_commit_response_pb()
+    api.create_session.return_value = build_session_pb()
+
+    return api
diff --git a/tests/_fixtures.py b/tests/_fixtures.py
index d0b78c0ba5..7a80adc00a 100644
--- a/tests/_fixtures.py
+++ b/tests/_fixtures.py
@@ -28,6 +28,86 @@
     phone_number STRING(1024) )
     PRIMARY KEY (contact_id, phone_type),
     INTERLEAVE IN PARENT contacts ON DELETE CASCADE;
+CREATE PROTO BUNDLE (
+    examples.spanner.music.SingerInfo,
+    examples.spanner.music.Genre,
+    );
+CREATE TABLE all_types (
+    pkey INT64 NOT NULL,
+    int_value INT64,
+    int_array ARRAY<INT64>,
+    bool_value BOOL,
+    bool_array ARRAY<BOOL>,
+    bytes_value BYTES(16),
+    bytes_array ARRAY<BYTES(16)>,
+    date_value DATE,
+    date_array ARRAY<DATE>,
+    float_value FLOAT64,
+    float_array ARRAY<FLOAT64>,
+    string_value STRING(16),
+    string_array ARRAY<STRING(16)>,
+    timestamp_value TIMESTAMP,
+    timestamp_array ARRAY<TIMESTAMP>,
+    numeric_value NUMERIC,
+    numeric_array ARRAY<NUMERIC>,
+    json_value JSON,
+    json_array ARRAY<JSON>,
+    proto_message_value examples.spanner.music.SingerInfo,
+    proto_message_array ARRAY<examples.spanner.music.SingerInfo>,
+    proto_enum_value examples.spanner.music.Genre,
+    proto_enum_array ARRAY<examples.spanner.music.Genre>,
+    )
+    PRIMARY KEY (pkey);
+CREATE TABLE counters (
+    name STRING(1024),
+    value INT64 )
+    PRIMARY KEY (name);
+CREATE TABLE string_plus_array_of_string (
+    id INT64,
+    name STRING(16),
+    tags ARRAY<STRING(16)> )
+    PRIMARY KEY (id);
+CREATE INDEX name ON contacts(first_name, last_name);
+CREATE TABLE users_history (
+    id INT64 NOT NULL,
+    commit_ts TIMESTAMP NOT NULL OPTIONS
+        (allow_commit_timestamp=true),
+    name STRING(MAX) NOT NULL,
+    email STRING(MAX),
+    deleted BOOL NOT NULL )
+    PRIMARY KEY(id, commit_ts DESC);
+
+CREATE TABLE Customers (
+    CustomerId INT64 NOT NULL,
+    CustomerName STRING(62) NOT NULL,
+    ) PRIMARY KEY (CustomerId);
+
+ CREATE TABLE ShoppingCarts (
+    CartId INT64 NOT NULL,
+    CustomerId INT64 NOT NULL,
+    CustomerName STRING(62) NOT NULL,
+    CONSTRAINT FKShoppingCartsCustomerId FOREIGN KEY (CustomerId)
+    REFERENCES Customers (CustomerId) ON DELETE CASCADE
+    ) PRIMARY KEY (CartId);
+"""
+
+EMULATOR_DDL = """\
+CREATE TABLE contacts (
+    contact_id INT64,
+    first_name STRING(1024),
+    last_name STRING(1024),
+    email STRING(1024) )
+    PRIMARY KEY (contact_id);
+CREATE TABLE contact_phones (
+    contact_id INT64,
+    phone_type STRING(1024),
+    phone_number STRING(1024) )
+    PRIMARY KEY (contact_id, phone_type),
+    INTERLEAVE IN PARENT contacts ON DELETE CASCADE;
+CREATE PROTO BUNDLE (
+    examples.spanner.music.SingerInfo,
+    examples.spanner.music.Genre,
+    );
 CREATE TABLE all_types (
     pkey INT64 NOT NULL,
     int_value INT64,
@@ -65,4 +145,74 @@
     PRIMARY KEY(id, commit_ts DESC);
 """
+PG_DDL = """\
+CREATE TABLE contacts (
+    contact_id BIGINT,
+    first_name VARCHAR(1024),
+    last_name VARCHAR(1024),
+    email VARCHAR(1024),
+    PRIMARY KEY (contact_id) );
+CREATE TABLE all_types (
+    pkey BIGINT NOT NULL,
+    int_value INT,
+    int_array INT[],
+    bool_value BOOL,
+    bool_array BOOL[],
+    bytes_value BYTEA,
+    bytes_array BYTEA[],
+    float_value DOUBLE PRECISION,
+    float_array DOUBLE PRECISION[],
+    string_value VARCHAR(16),
+    string_array VARCHAR(16)[],
+    date_value DATE,
+    date_array DATE[],
+    timestamp_value TIMESTAMPTZ,
+    timestamp_array TIMESTAMPTZ[],
+    numeric_value NUMERIC,
+    numeric_array NUMERIC[],
+    jsonb_value JSONB,
+    jsonb_array JSONB[],
+    PRIMARY KEY (pkey) );
+CREATE TABLE counters (
+    name
VARCHAR(1024), + value BIGINT, + PRIMARY KEY (name)); +CREATE TABLE string_plus_array_of_string ( + id BIGINT, + name VARCHAR(16), + PRIMARY KEY (id)); +CREATE INDEX name ON contacts(first_name, last_name); +CREATE TABLE Customers ( + CustomerId BIGINT, + CustomerName VARCHAR(62) NOT NULL, + PRIMARY KEY (CustomerId)); + + CREATE TABLE ShoppingCarts ( + CartId BIGINT, + CustomerId BIGINT NOT NULL, + CustomerName VARCHAR(62) NOT NULL, + CONSTRAINT "FKShoppingCartsCustomerId" FOREIGN KEY (CustomerId) + REFERENCES Customers (CustomerId) ON DELETE CASCADE, + PRIMARY KEY (CartId) + ); +""" + +PROTO_COLUMNS_DDL = """\ +CREATE TABLE singers ( + singer_id INT64 NOT NULL, + first_name STRING(1024), + last_name STRING(1024), + singer_info examples.spanner.music.SingerInfo, + singer_genre examples.spanner.music.Genre, ) + PRIMARY KEY (singer_id); +CREATE INDEX SingerByGenre ON singers(singer_genre) STORING (first_name, last_name); +""" + DDL_STATEMENTS = [stmt.strip() for stmt in DDL.split(";") if stmt.strip()] +EMULATOR_DDL_STATEMENTS = [ + stmt.strip() for stmt in EMULATOR_DDL.split(";") if stmt.strip() +] +PG_DDL_STATEMENTS = [stmt.strip() for stmt in PG_DDL.split(";") if stmt.strip()] +PROTO_COLUMNS_DDL_STATEMENTS = [ + stmt.strip() for stmt in PROTO_COLUMNS_DDL.split(";") if stmt.strip() +] diff --git a/tests/_helpers.py b/tests/_helpers.py new file mode 100644 index 0000000000..c7502816da --- /dev/null +++ b/tests/_helpers.py @@ -0,0 +1,172 @@ +import unittest +from os import getenv + +import mock + +from google.cloud.spanner_v1 import gapic_version +from google.cloud.spanner_v1.database_sessions_manager import TransactionType + +LIB_VERSION = gapic_version.__version__ + +try: + from opentelemetry import trace + from opentelemetry.sdk.trace import TracerProvider + from opentelemetry.sdk.trace.export import SimpleSpanProcessor + from opentelemetry.sdk.trace.export.in_memory_span_exporter import ( + InMemorySpanExporter, + ) + from opentelemetry.semconv.attributes.otel_attributes import ( + OTEL_SCOPE_NAME, + OTEL_SCOPE_VERSION, + ) + from opentelemetry.sdk.trace.sampling import TraceIdRatioBased + + from opentelemetry.trace.status import StatusCode + + trace.set_tracer_provider(TracerProvider(sampler=TraceIdRatioBased(1.0))) + + HAS_OPENTELEMETRY_INSTALLED = True +except ImportError: + HAS_OPENTELEMETRY_INSTALLED = False + + StatusCode = mock.Mock() + +_TEST_OT_EXPORTER = None +_TEST_OT_PROVIDER_INITIALIZED = False + + +def is_multiplexed_enabled(transaction_type: TransactionType) -> bool: + """Returns whether multiplexed sessions are enabled for the given transaction type.""" + + env_var = "GOOGLE_CLOUD_SPANNER_MULTIPLEXED_SESSIONS" + env_var_partitioned = "GOOGLE_CLOUD_SPANNER_MULTIPLEXED_SESSIONS_PARTITIONED_OPS" + env_var_read_write = "GOOGLE_CLOUD_SPANNER_MULTIPLEXED_SESSIONS_FOR_RW" + + def _getenv(val: str) -> bool: + return getenv(val, "true").lower().strip() != "false" + + if transaction_type is TransactionType.READ_ONLY: + return _getenv(env_var) + elif transaction_type is TransactionType.PARTITIONED: + return _getenv(env_var) and _getenv(env_var_partitioned) + else: + return _getenv(env_var) and _getenv(env_var_read_write) + + +def get_test_ot_exporter(): + global _TEST_OT_EXPORTER + + if _TEST_OT_EXPORTER is None: + _TEST_OT_EXPORTER = InMemorySpanExporter() + return _TEST_OT_EXPORTER + + +def enrich_with_otel_scope(attrs): + """ + This helper enriches attrs with OTEL_SCOPE_NAME and OTEL_SCOPE_VERSION + for the purpose of avoiding cumbersome duplicated imports. 
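+    Attributes are only added when OpenTelemetry is installed; otherwise the
+    attrs mapping is returned unchanged.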
+ """ + if HAS_OPENTELEMETRY_INSTALLED: + attrs[OTEL_SCOPE_NAME] = "cloud.google.com/python/spanner" + attrs[OTEL_SCOPE_VERSION] = LIB_VERSION + + return attrs + + +def use_test_ot_exporter(): + global _TEST_OT_PROVIDER_INITIALIZED + + if _TEST_OT_PROVIDER_INITIALIZED: + return + + provider = trace.get_tracer_provider() + if not hasattr(provider, "add_span_processor"): + return + provider.add_span_processor(SimpleSpanProcessor(get_test_ot_exporter())) + _TEST_OT_PROVIDER_INITIALIZED = True + + +class OpenTelemetryBase(unittest.TestCase): + @classmethod + def setUpClass(cls): + if HAS_OPENTELEMETRY_INSTALLED: + use_test_ot_exporter() + cls.ot_exporter = get_test_ot_exporter() + + def tearDown(self): + if HAS_OPENTELEMETRY_INSTALLED: + self.ot_exporter.clear() + + def assertNoSpans(self): + if HAS_OPENTELEMETRY_INSTALLED: + span_list = self.get_finished_spans() + self.assertEqual(len(span_list), 0) + + def assertSpanAttributes( + self, name, status=StatusCode.OK, attributes=None, span=None + ): + if HAS_OPENTELEMETRY_INSTALLED: + if not span: + span_list = self.get_finished_spans() + self.assertEqual(len(span_list) > 0, True) + span = span_list[0] + + self.assertEqual(span.name, name) + self.assertEqual(span.status.status_code, status) + self.assertEqual(dict(span.attributes), attributes) + + def assertSpanEvents(self, name, wantEventNames=[], span=None): + if not HAS_OPENTELEMETRY_INSTALLED: + return + + if not span: + span_list = self.ot_exporter.get_finished_spans() + self.assertEqual(len(span_list) > 0, True) + span = span_list[0] + + self.assertEqual(span.name, name) + actualEventNames = [] + for event in span.events: + actualEventNames.append(event.name) + self.assertEqual(actualEventNames, wantEventNames) + + def assertSpanNames(self, want_span_names): + if not HAS_OPENTELEMETRY_INSTALLED: + return + + span_list = self.get_finished_spans() + got_span_names = [span.name for span in span_list] + self.assertEqual(got_span_names, want_span_names) + + def get_finished_spans(self): + if HAS_OPENTELEMETRY_INSTALLED: + span_list = list( + filter( + lambda span: span and span.name, + self.ot_exporter.get_finished_spans(), + ) + ) + # Sort the spans by their start time in the hierarchy. + return sorted(span_list, key=lambda span: span.start_time) + else: + return [] + + def reset(self): + self.tearDown() + + def finished_spans_events_statuses(self): + span_list = self.get_finished_spans() + # Some event attributes are noisy/highly ephemeral + # and can't be directly compared against. + got_all_events = [] + imprecise_event_attributes = ["exception.stacktrace", "delay_seconds", "cause"] + for span in span_list: + for event in span.events: + evt_attributes = event.attributes.copy() + for attr_name in imprecise_event_attributes: + if attr_name in evt_attributes: + evt_attributes[attr_name] = "EPHEMERAL" + + got_all_events.append((event.name, evt_attributes)) + + return got_all_events diff --git a/google/cloud/spanner_v1/gapic/transports/__init__.py b/tests/mockserver_tests/__init__.py similarity index 100% rename from google/cloud/spanner_v1/gapic/transports/__init__.py rename to tests/mockserver_tests/__init__.py diff --git a/tests/mockserver_tests/mock_server_test_base.py b/tests/mockserver_tests/mock_server_test_base.py new file mode 100644 index 0000000000..117b649e1b --- /dev/null +++ b/tests/mockserver_tests/mock_server_test_base.py @@ -0,0 +1,341 @@ +# Copyright 2024 Google LLC All rights reserved. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import logging +import unittest + +import grpc +from google.api_core.client_options import ClientOptions +from google.auth.credentials import AnonymousCredentials +from google.cloud.spanner_v1 import Type + +from google.cloud.spanner_v1 import StructType +from google.cloud.spanner_v1._helpers import _make_value_pb + +from google.cloud.spanner_v1 import PartialResultSet +from google.protobuf.duration_pb2 import Duration +from google.rpc import code_pb2, status_pb2 + +from google.rpc.error_details_pb2 import RetryInfo +from grpc_status._common import code_to_grpc_status_code +from grpc_status.rpc_status import _Status + +import google.cloud.spanner_v1.types.result_set as result_set +import google.cloud.spanner_v1.types.type as spanner_type +from google.cloud.spanner_dbapi.parsed_statement import AutocommitDmlMode +from google.cloud.spanner_v1 import Client, FixedSizePool, ResultSetMetadata, TypeCode +from google.cloud.spanner_v1.database import Database +from google.cloud.spanner_v1.instance import Instance +from google.cloud.spanner_v1.testing.mock_database_admin import DatabaseAdminServicer +from google.cloud.spanner_v1.testing.mock_spanner import ( + SpannerServicer, + start_mock_server, +) +from tests._helpers import is_multiplexed_enabled + + +# Creates an aborted status with the smallest possible retry delay. +def aborted_status() -> _Status: + error = status_pb2.Status( + code=code_pb2.ABORTED, + message="Transaction was aborted.", + ) + retry_info = RetryInfo(retry_delay=Duration(seconds=0, nanos=1)) + status = _Status( + code=code_to_grpc_status_code(error.code), + details=error.message, + trailing_metadata=( + ("grpc-status-details-bin", error.SerializeToString()), + ( + "google.rpc.retryinfo-bin", + retry_info.SerializeToString(), + ), + ), + ) + return status + + +def _make_partial_result_sets( + fields: list[tuple[str, TypeCode]], results: list[dict] +) -> list[result_set.PartialResultSet]: + partial_result_sets = [] + for result in results: + partial_result_set = PartialResultSet() + if len(partial_result_sets) == 0: + # setting the metadata + metadata = ResultSetMetadata(row_type=StructType(fields=[])) + for field in fields: + metadata.row_type.fields.append( + StructType.Field(name=field[0], type_=Type(code=field[1])) + ) + partial_result_set.metadata = metadata + for value in result["values"]: + partial_result_set.values.append(_make_value_pb(value)) + partial_result_set.last = result.get("last") or False + partial_result_sets.append(partial_result_set) + return partial_result_sets + + +# Creates an UNAVAILABLE status with the smallest possible retry delay. 
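+# As with aborted_status above, the one-nanosecond RetryInfo delay keeps the
+# client's retry loop fast in tests while still exercising the retry-delay
+# handling path.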
+def unavailable_status() -> _Status: + error = status_pb2.Status( + code=code_pb2.UNAVAILABLE, + message="Service unavailable.", + ) + retry_info = RetryInfo(retry_delay=Duration(seconds=0, nanos=1)) + status = _Status( + code=code_to_grpc_status_code(error.code), + details=error.message, + trailing_metadata=( + ("grpc-status-details-bin", error.SerializeToString()), + ( + "google.rpc.retryinfo-bin", + retry_info.SerializeToString(), + ), + ), + ) + return status + + +def add_error(method: str, error: status_pb2.Status): + MockServerTestBase.spanner_service.mock_spanner.add_error(method, error) + + +def add_result(sql: str, result: result_set.ResultSet): + MockServerTestBase.spanner_service.mock_spanner.add_result(sql, result) + + +def add_update_count( + sql: str, count: int, dml_mode: AutocommitDmlMode = AutocommitDmlMode.TRANSACTIONAL +): + if dml_mode == AutocommitDmlMode.PARTITIONED_NON_ATOMIC: + stats = dict(row_count_lower_bound=count) + else: + stats = dict(row_count_exact=count) + result = result_set.ResultSet(dict(stats=result_set.ResultSetStats(stats))) + add_result(sql, result) + + +def add_select1_result(): + add_single_result("select 1", "c", TypeCode.INT64, [("1",)]) + + +def add_execute_streaming_sql_results( + sql: str, partial_result_sets: list[result_set.PartialResultSet] +): + MockServerTestBase.spanner_service.mock_spanner.add_execute_streaming_sql_results( + sql, partial_result_sets + ) + + +def add_single_result( + sql: str, column_name: str, type_code: spanner_type.TypeCode, row +): + result = result_set.ResultSet( + dict( + metadata=result_set.ResultSetMetadata( + dict( + row_type=spanner_type.StructType( + dict( + fields=[ + spanner_type.StructType.Field( + dict( + name=column_name, + type=spanner_type.Type(dict(code=type_code)), + ) + ) + ] + ) + ) + ) + ), + ) + ) + result.rows.extend(row) + MockServerTestBase.spanner_service.mock_spanner.add_result(sql, result) + + +class MockServerTestBase(unittest.TestCase): + server: grpc.Server = None + spanner_service: SpannerServicer = None + database_admin_service: DatabaseAdminServicer = None + port: int = None + logger: logging.Logger = None + + def __init__(self, *args, **kwargs): + super(MockServerTestBase, self).__init__(*args, **kwargs) + self._client = None + self._instance = None + self._database = None + self.logger = logging.getLogger("MockServerTestBase") + self.logger.setLevel(logging.WARN) + + @classmethod + def setup_class(cls): + ( + MockServerTestBase.server, + MockServerTestBase.spanner_service, + MockServerTestBase.database_admin_service, + MockServerTestBase.port, + ) = start_mock_server() + + @classmethod + def teardown_class(cls): + if MockServerTestBase.server is not None: + MockServerTestBase.server.stop(grace=None) + Client.NTH_CLIENT.reset() + MockServerTestBase.server = None + + def setup_method(self, *args, **kwargs): + self._client = None + self._instance = None + self._database = None + + def teardown_method(self, *args, **kwargs): + MockServerTestBase.spanner_service.clear_requests() + MockServerTestBase.database_admin_service.clear_requests() + + @property + def client(self) -> Client: + if self._client is None: + self._client = Client( + project="p", + credentials=AnonymousCredentials(), + client_options=ClientOptions( + api_endpoint="localhost:" + str(MockServerTestBase.port), + ), + ) + return self._client + + @property + def instance(self) -> Instance: + if self._instance is None: + self._instance = self.client.instance("test-instance") + return self._instance + + @property + def 
database(self) -> Database: + if self._database is None: + self._database = self.instance.database( + "test-database", + pool=FixedSizePool(size=10), + enable_interceptors_in_tests=True, + logger=self.logger, + ) + return self._database + + def assert_requests_sequence( + self, + requests, + expected_types, + transaction_type, + allow_multiple_batch_create=True, + ): + """Assert that the requests sequence matches the expected types, accounting for multiplexed sessions and retries. + + Args: + requests: List of requests from spanner_service.requests + expected_types: List of expected request types (excluding session creation requests) + transaction_type: TransactionType enum value to check multiplexed session status + allow_multiple_batch_create: If True, skip all leading BatchCreateSessionsRequest and one optional CreateSessionRequest + """ + from google.cloud.spanner_v1 import ( + BatchCreateSessionsRequest, + CreateSessionRequest, + ) + + mux_enabled = is_multiplexed_enabled(transaction_type) + idx = 0 + # Skip all leading BatchCreateSessionsRequest (for retries) + if allow_multiple_batch_create: + while idx < len(requests) and isinstance( + requests[idx], BatchCreateSessionsRequest + ): + idx += 1 + # For multiplexed, optionally skip a CreateSessionRequest + if ( + mux_enabled + and idx < len(requests) + and isinstance(requests[idx], CreateSessionRequest) + ): + idx += 1 + else: + if mux_enabled: + self.assertTrue( + isinstance(requests[idx], BatchCreateSessionsRequest), + f"Expected BatchCreateSessionsRequest at index {idx}, got {type(requests[idx])}", + ) + idx += 1 + self.assertTrue( + isinstance(requests[idx], CreateSessionRequest), + f"Expected CreateSessionRequest at index {idx}, got {type(requests[idx])}", + ) + idx += 1 + else: + self.assertTrue( + isinstance(requests[idx], BatchCreateSessionsRequest), + f"Expected BatchCreateSessionsRequest at index {idx}, got {type(requests[idx])}", + ) + idx += 1 + # Check the rest of the expected request types + for expected_type in expected_types: + self.assertTrue( + isinstance(requests[idx], expected_type), + f"Expected {expected_type} at index {idx}, got {type(requests[idx])}", + ) + idx += 1 + self.assertEqual( + idx, len(requests), f"Expected {idx} requests, got {len(requests)}" + ) + + def adjust_request_id_sequence(self, expected_segments, requests, transaction_type): + """Adjust expected request ID sequence numbers based on actual session creation requests. 
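+
+        For example, if one extra session-creation request was sent because of
+        a retry, the sequence number (the fifth element of each expected
+        request ID tuple) is shifted up by one.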
+ + Args: + expected_segments: List of expected (method, (sequence_numbers)) tuples + requests: List of actual requests from spanner_service.requests + transaction_type: TransactionType enum value to check multiplexed session status + + Returns: + List of adjusted expected segments with corrected sequence numbers + """ + from google.cloud.spanner_v1 import ( + BatchCreateSessionsRequest, + CreateSessionRequest, + ExecuteSqlRequest, + BeginTransactionRequest, + ) + + # Count session creation requests that come before the first non-session request + session_requests_before = 0 + for req in requests: + if isinstance(req, (BatchCreateSessionsRequest, CreateSessionRequest)): + session_requests_before += 1 + elif isinstance(req, (ExecuteSqlRequest, BeginTransactionRequest)): + break + + # For multiplexed sessions, we expect 2 session requests (BatchCreateSessions + CreateSession) + # For non-multiplexed, we expect 1 session request (BatchCreateSessions) + mux_enabled = is_multiplexed_enabled(transaction_type) + expected_session_requests = 2 if mux_enabled else 1 + extra_session_requests = session_requests_before - expected_session_requests + + # Adjust sequence numbers based on extra session requests + adjusted_segments = [] + for method, seq_nums in expected_segments: + # Adjust the sequence number (5th element in the tuple) + adjusted_seq_nums = list(seq_nums) + adjusted_seq_nums[4] += extra_session_requests + adjusted_segments.append((method, tuple(adjusted_seq_nums))) + + return adjusted_segments diff --git a/tests/mockserver_tests/test_aborted_transaction.py b/tests/mockserver_tests/test_aborted_transaction.py new file mode 100644 index 0000000000..a1f9f1ba1e --- /dev/null +++ b/tests/mockserver_tests/test_aborted_transaction.py @@ -0,0 +1,155 @@ +# Copyright 2024 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import random + +from google.cloud.spanner_v1 import ( + BeginTransactionRequest, + CommitRequest, + ExecuteSqlRequest, + TypeCode, + ExecuteBatchDmlRequest, +) +from google.cloud.spanner_v1.testing.mock_spanner import SpannerServicer +from google.cloud.spanner_v1.transaction import Transaction +from tests.mockserver_tests.mock_server_test_base import ( + MockServerTestBase, + add_error, + aborted_status, + add_update_count, + add_single_result, +) +from google.api_core import exceptions +from test_utils import retry +from google.cloud.spanner_v1.database_sessions_manager import TransactionType + +retry_maybe_aborted_txn = retry.RetryErrors( + exceptions.Aborted, max_tries=5, delay=0, backoff=1 +) + + +class TestAbortedTransaction(MockServerTestBase): + def test_run_in_transaction_commit_aborted(self): + # Add an Aborted error for the Commit method on the mock server. + add_error(SpannerServicer.Commit.__name__, aborted_status()) + # Run a transaction. The Commit method will return Aborted the first + # time that the transaction tries to commit. It will then be retried + # and succeed. 
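+        # The retry is expected to begin a fresh transaction, so the request
+        # log below should show two BeginTransaction/Commit pairs.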
+ self.database.run_in_transaction(_insert_mutations) + requests = self.spanner_service.requests + self.assert_requests_sequence( + requests, + [ + BeginTransactionRequest, + CommitRequest, + BeginTransactionRequest, + CommitRequest, + ], + TransactionType.READ_WRITE, + ) + + def test_run_in_transaction_update_aborted(self): + add_update_count("update my_table set my_col=1 where id=2", 1) + add_error(SpannerServicer.ExecuteSql.__name__, aborted_status()) + self.database.run_in_transaction(_execute_update) + requests = self.spanner_service.requests + self.assert_requests_sequence( + requests, + [ExecuteSqlRequest, ExecuteSqlRequest, CommitRequest], + TransactionType.READ_WRITE, + ) + + def test_run_in_transaction_query_aborted(self): + add_single_result( + "select value from my_table where id=1", + "value", + TypeCode.STRING, + "my-value", + ) + add_error(SpannerServicer.ExecuteStreamingSql.__name__, aborted_status()) + self.database.run_in_transaction(_execute_query) + requests = self.spanner_service.requests + self.assert_requests_sequence( + requests, + [ExecuteSqlRequest, ExecuteSqlRequest, CommitRequest], + TransactionType.READ_WRITE, + ) + + def test_run_in_transaction_batch_dml_aborted(self): + add_update_count("update my_table set my_col=1 where id=1", 1) + add_update_count("update my_table set my_col=1 where id=2", 1) + add_error(SpannerServicer.ExecuteBatchDml.__name__, aborted_status()) + self.database.run_in_transaction(_execute_batch_dml) + requests = self.spanner_service.requests + self.assert_requests_sequence( + requests, + [ExecuteBatchDmlRequest, ExecuteBatchDmlRequest, CommitRequest], + TransactionType.READ_WRITE, + ) + + def test_batch_commit_aborted(self): + # Add an Aborted error for the Commit method on the mock server. + add_error(SpannerServicer.Commit.__name__, aborted_status()) + with self.database.batch() as batch: + batch.insert( + table="Singers", + columns=("SingerId", "FirstName", "LastName"), + values=[ + (1, "Marc", "Richards"), + (2, "Catalina", "Smith"), + (3, "Alice", "Trentor"), + (4, "Lea", "Martin"), + (5, "David", "Lomond"), + ], + ) + requests = self.spanner_service.requests + self.assert_requests_sequence( + requests, + [CommitRequest, CommitRequest], + TransactionType.READ_WRITE, + ) + + @retry_maybe_aborted_txn + def test_retry_helper(self): + # Randomly add an Aborted error for the Commit method on the mock server. 
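+        # If the commit then fails with Aborted, the retry_maybe_aborted_txn
+        # decorator re-runs this entire test, up to five attempts in total.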
+        if random.random() < 0.5:
+            add_error(SpannerServicer.Commit.__name__, aborted_status())
+        session = self.database.session()
+        session.create()
+        transaction = session.transaction()
+        transaction.begin()
+        transaction.insert("my_table", ["col1", "col2"], [[1, "One"]])
+        transaction.commit()
+
+
+def _insert_mutations(transaction: Transaction):
+    transaction.insert("my_table", ["col1", "col2"], ["value1", "value2"])
+
+
+def _execute_update(transaction: Transaction):
+    transaction.execute_update("update my_table set my_col=1 where id=2")
+
+
+def _execute_query(transaction: Transaction):
+    rows = transaction.execute_sql("select value from my_table where id=1")
+    for _ in rows:
+        pass
+
+
+def _execute_batch_dml(transaction: Transaction):
+    transaction.batch_update(
+        [
+            "update my_table set my_col=1 where id=1",
+            "update my_table set my_col=1 where id=2",
+        ]
+    )
diff --git a/tests/mockserver_tests/test_basics.py b/tests/mockserver_tests/test_basics.py
new file mode 100644
index 0000000000..6d80583ab9
--- /dev/null
+++ b/tests/mockserver_tests/test_basics.py
@@ -0,0 +1,234 @@
+# Copyright 2024 Google LLC All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from google.cloud.spanner_admin_database_v1.types import spanner_database_admin
+from google.cloud.spanner_dbapi import Connection
+from google.cloud.spanner_dbapi.parsed_statement import AutocommitDmlMode
+from google.cloud.spanner_v1 import (
+    BeginTransactionRequest,
+    ExecuteBatchDmlRequest,
+    ExecuteSqlRequest,
+    TransactionOptions,
+    TypeCode,
+)
+from google.cloud.spanner_v1.testing.mock_spanner import SpannerServicer
+from google.cloud.spanner_v1.transaction import Transaction
+from google.cloud.spanner_v1.database_sessions_manager import TransactionType
+
+from tests.mockserver_tests.mock_server_test_base import (
+    MockServerTestBase,
+    _make_partial_result_sets,
+    add_select1_result,
+    add_single_result,
+    add_update_count,
+    add_error,
+    unavailable_status,
+    add_execute_streaming_sql_results,
+)
+from tests._helpers import is_multiplexed_enabled
+
+
+class TestBasics(MockServerTestBase):
+    def test_select1(self):
+        add_select1_result()
+        with self.database.snapshot() as snapshot:
+            results = snapshot.execute_sql("select 1")
+            result_list = []
+            for row in results:
+                result_list.append(row)
+                self.assertEqual(1, row[0])
+            self.assertEqual(1, len(result_list))
+        requests = self.spanner_service.requests
+        self.assert_requests_sequence(
+            requests,
+            [ExecuteSqlRequest],
+            TransactionType.READ_ONLY,
+        )
+
+    def test_create_table(self):
+        database_admin_api = self.client.database_admin_api
+        request = spanner_database_admin.UpdateDatabaseDdlRequest(
+            dict(
+                database=database_admin_api.database_path(
+                    "test-project", "test-instance", "test-database"
+                ),
+                statements=[
+                    "CREATE TABLE Test ("
+                    "Id INT64, "
+                    "Value STRING(MAX)) "
+                    "PRIMARY KEY (Id)",
+                ],
+            )
+        )
+        operation = database_admin_api.update_database_ddl(request)
+        operation.result(1)
+
+    # TODO: Move this to a separate class once
the mock server test setup has + # been re-factored to use a base class for the boiler plate code. + def test_dbapi_partitioned_dml(self): + sql = "UPDATE singers SET foo='bar' WHERE active = true" + add_update_count(sql, 100, AutocommitDmlMode.PARTITIONED_NON_ATOMIC) + connection = Connection(self.instance, self.database) + connection.autocommit = True + connection.set_autocommit_dml_mode(AutocommitDmlMode.PARTITIONED_NON_ATOMIC) + with connection.cursor() as cursor: + # Note: SQLAlchemy uses [] as the list of parameters for statements + # with no parameters. + cursor.execute(sql, []) + self.assertEqual(100, cursor.rowcount) + requests = self.spanner_service.requests + self.assert_requests_sequence( + requests, + [BeginTransactionRequest, ExecuteSqlRequest], + TransactionType.PARTITIONED, + allow_multiple_batch_create=True, + ) + # Find the first BeginTransactionRequest after session creation + idx = 0 + from google.cloud.spanner_v1 import ( + BatchCreateSessionsRequest, + CreateSessionRequest, + ) + + while idx < len(requests) and isinstance( + requests[idx], BatchCreateSessionsRequest + ): + idx += 1 + if ( + is_multiplexed_enabled(TransactionType.PARTITIONED) + and idx < len(requests) + and isinstance(requests[idx], CreateSessionRequest) + ): + idx += 1 + begin_request: BeginTransactionRequest = requests[idx] + self.assertEqual( + TransactionOptions(dict(partitioned_dml={})), begin_request.options + ) + + def test_batch_create_sessions_unavailable(self): + add_select1_result() + add_error(SpannerServicer.BatchCreateSessions.__name__, unavailable_status()) + with self.database.snapshot() as snapshot: + results = snapshot.execute_sql("select 1") + result_list = [] + for row in results: + result_list.append(row) + self.assertEqual(1, row[0]) + self.assertEqual(1, len(result_list)) + requests = self.spanner_service.requests + self.assert_requests_sequence( + requests, + [ExecuteSqlRequest], + TransactionType.READ_ONLY, + allow_multiple_batch_create=True, + ) + + def test_execute_streaming_sql_unavailable(self): + add_select1_result() + # Add an UNAVAILABLE error that is returned the first time the + # ExecuteStreamingSql RPC is called. 
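+        # The streaming call is retried transparently, so the query below
+        # still succeeds and the request log shows two ExecuteSqlRequest
+        # messages.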
+ add_error(SpannerServicer.ExecuteStreamingSql.__name__, unavailable_status()) + with self.database.snapshot() as snapshot: + results = snapshot.execute_sql("select 1") + result_list = [] + for row in results: + result_list.append(row) + self.assertEqual(1, row[0]) + self.assertEqual(1, len(result_list)) + requests = self.spanner_service.requests + self.assert_requests_sequence( + requests, + [ExecuteSqlRequest, ExecuteSqlRequest], + TransactionType.READ_ONLY, + ) + + def test_last_statement_update(self): + sql = "update my_table set my_col=1 where id=2" + add_update_count(sql, 1) + self.database.run_in_transaction( + lambda transaction: transaction.execute_update(sql, last_statement=True) + ) + requests = list( + filter( + lambda msg: isinstance(msg, ExecuteSqlRequest), + self.spanner_service.requests, + ) + ) + self.assertEqual(1, len(requests), msg=requests) + self.assertTrue(requests[0].last_statement, requests[0]) + + def test_last_statement_batch_update(self): + sql = "update my_table set my_col=1 where id=2" + add_update_count(sql, 1) + self.database.run_in_transaction( + lambda transaction: transaction.batch_update( + [sql, sql], last_statement=True + ) + ) + requests = list( + filter( + lambda msg: isinstance(msg, ExecuteBatchDmlRequest), + self.spanner_service.requests, + ) + ) + self.assertEqual(1, len(requests), msg=requests) + self.assertTrue(requests[0].last_statements, requests[0]) + + def test_last_statement_query(self): + sql = "insert into my_table (value) values ('One') then return id" + add_single_result(sql, "c", TypeCode.INT64, [("1",)]) + self.database.run_in_transaction( + lambda transaction: _execute_query(transaction, sql) + ) + requests = list( + filter( + lambda msg: isinstance(msg, ExecuteSqlRequest), + self.spanner_service.requests, + ) + ) + self.assertEqual(1, len(requests), msg=requests) + self.assertTrue(requests[0].last_statement, requests[0]) + + def test_execute_streaming_sql_last_field(self): + partial_result_sets = _make_partial_result_sets( + [("ID", TypeCode.INT64), ("NAME", TypeCode.STRING)], + [ + {"values": ["1", "ABC", "2", "DEF"]}, + {"values": ["3", "GHI"], "last": True}, + ], + ) + + sql = "select * from my_table" + add_execute_streaming_sql_results(sql, partial_result_sets) + count = 1 + with self.database.snapshot() as snapshot: + results = snapshot.execute_sql(sql) + result_list = [] + for row in results: + result_list.append(row) + self.assertEqual(count, row[0]) + count += 1 + self.assertEqual(3, len(result_list)) + requests = self.spanner_service.requests + self.assert_requests_sequence( + requests, + [ExecuteSqlRequest], + TransactionType.READ_ONLY, + ) + + +def _execute_query(transaction: Transaction, sql: str): + rows = transaction.execute_sql(sql, last_statement=True) + for _ in rows: + pass diff --git a/tests/mockserver_tests/test_dbapi_autocommit.py b/tests/mockserver_tests/test_dbapi_autocommit.py new file mode 100644 index 0000000000..7f0e3e432f --- /dev/null +++ b/tests/mockserver_tests/test_dbapi_autocommit.py @@ -0,0 +1,127 @@ +# Copyright 2025 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +from google.cloud.spanner_dbapi import Connection +from google.cloud.spanner_v1 import ( + ExecuteSqlRequest, + TypeCode, + CommitRequest, + ExecuteBatchDmlRequest, +) +from tests.mockserver_tests.mock_server_test_base import ( + MockServerTestBase, + add_single_result, + add_update_count, +) + + +class TestDbapiAutoCommit(MockServerTestBase): + @classmethod + def setup_class(cls): + super().setup_class() + add_single_result( + "select name from singers", "name", TypeCode.STRING, [("Some Singer",)] + ) + add_update_count("insert into singers (id, name) values (1, 'Some Singer')", 1) + + def test_select_autocommit(self): + connection = Connection(self.instance, self.database) + connection.autocommit = True + with connection.cursor() as cursor: + cursor.execute("select name from singers") + result_list = cursor.fetchall() + for _ in result_list: + pass + requests = list( + filter( + lambda msg: isinstance(msg, ExecuteSqlRequest), + self.spanner_service.requests, + ) + ) + self.assertEqual(1, len(requests)) + self.assertFalse(requests[0].last_statement, requests[0]) + self.assertIsNotNone(requests[0].transaction, requests[0]) + self.assertIsNotNone(requests[0].transaction.single_use, requests[0]) + self.assertTrue(requests[0].transaction.single_use.read_only, requests[0]) + + def test_dml_autocommit(self): + connection = Connection(self.instance, self.database) + connection.autocommit = True + with connection.cursor() as cursor: + cursor.execute("insert into singers (id, name) values (1, 'Some Singer')") + self.assertEqual(1, cursor.rowcount) + requests = list( + filter( + lambda msg: isinstance(msg, ExecuteSqlRequest), + self.spanner_service.requests, + ) + ) + self.assertEqual(1, len(requests)) + self.assertTrue(requests[0].last_statement, requests[0]) + commit_requests = list( + filter( + lambda msg: isinstance(msg, CommitRequest), + self.spanner_service.requests, + ) + ) + self.assertEqual(1, len(commit_requests)) + + def test_executemany_autocommit(self): + connection = Connection(self.instance, self.database) + connection.autocommit = True + with connection.cursor() as cursor: + cursor.executemany( + "insert into singers (id, name) values (1, 'Some Singer')", [(), ()] + ) + self.assertEqual(2, cursor.rowcount) + requests = list( + filter( + lambda msg: isinstance(msg, ExecuteBatchDmlRequest), + self.spanner_service.requests, + ) + ) + self.assertEqual(1, len(requests)) + self.assertTrue(requests[0].last_statements, requests[0]) + commit_requests = list( + filter( + lambda msg: isinstance(msg, CommitRequest), + self.spanner_service.requests, + ) + ) + self.assertEqual(1, len(commit_requests)) + + def test_batch_dml_autocommit(self): + connection = Connection(self.instance, self.database) + connection.autocommit = True + with connection.cursor() as cursor: + cursor.execute("start batch dml") + cursor.execute("insert into singers (id, name) values (1, 'Some Singer')") + cursor.execute("insert into singers (id, name) values (1, 'Some Singer')") + cursor.execute("run batch") + self.assertEqual(2, cursor.rowcount) + requests = list( + filter( + lambda msg: isinstance(msg, ExecuteBatchDmlRequest), + self.spanner_service.requests, + ) + ) + self.assertEqual(1, len(requests)) + self.assertTrue(requests[0].last_statements, requests[0]) + commit_requests = list( + filter( + lambda msg: isinstance(msg, CommitRequest), + self.spanner_service.requests, + ) + ) + self.assertEqual(1, 
len(commit_requests)) diff --git a/tests/mockserver_tests/test_dbapi_isolation_level.py b/tests/mockserver_tests/test_dbapi_isolation_level.py new file mode 100644 index 0000000000..679740969a --- /dev/null +++ b/tests/mockserver_tests/test_dbapi_isolation_level.py @@ -0,0 +1,150 @@ +# Copyright 2025 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from google.api_core.exceptions import Unknown +from google.cloud.spanner_dbapi import Connection +from google.cloud.spanner_v1 import ( + BeginTransactionRequest, + TransactionOptions, +) +from tests.mockserver_tests.mock_server_test_base import ( + MockServerTestBase, + add_update_count, +) + + +class TestDbapiIsolationLevel(MockServerTestBase): + @classmethod + def setup_class(cls): + super().setup_class() + add_update_count("insert into singers (id, name) values (1, 'Some Singer')", 1) + + def test_isolation_level_default(self): + connection = Connection(self.instance, self.database) + with connection.cursor() as cursor: + cursor.execute("insert into singers (id, name) values (1, 'Some Singer')") + self.assertEqual(1, cursor.rowcount) + connection.commit() + begin_requests = list( + filter( + lambda msg: isinstance(msg, BeginTransactionRequest), + self.spanner_service.requests, + ) + ) + self.assertEqual(1, len(begin_requests)) + self.assertEqual( + begin_requests[0].options.isolation_level, + TransactionOptions.IsolationLevel.ISOLATION_LEVEL_UNSPECIFIED, + ) + + def test_custom_isolation_level(self): + connection = Connection(self.instance, self.database) + for level in [ + TransactionOptions.IsolationLevel.ISOLATION_LEVEL_UNSPECIFIED, + TransactionOptions.IsolationLevel.REPEATABLE_READ, + TransactionOptions.IsolationLevel.SERIALIZABLE, + ]: + connection.isolation_level = level + with connection.cursor() as cursor: + cursor.execute( + "insert into singers (id, name) values (1, 'Some Singer')" + ) + self.assertEqual(1, cursor.rowcount) + connection.commit() + begin_requests = list( + filter( + lambda msg: isinstance(msg, BeginTransactionRequest), + self.spanner_service.requests, + ) + ) + self.assertEqual(1, len(begin_requests)) + self.assertEqual(begin_requests[0].options.isolation_level, level) + MockServerTestBase.spanner_service.clear_requests() + + def test_isolation_level_in_connection_kwargs(self): + for level in [ + TransactionOptions.IsolationLevel.ISOLATION_LEVEL_UNSPECIFIED, + TransactionOptions.IsolationLevel.REPEATABLE_READ, + TransactionOptions.IsolationLevel.SERIALIZABLE, + ]: + connection = Connection(self.instance, self.database, isolation_level=level) + with connection.cursor() as cursor: + cursor.execute( + "insert into singers (id, name) values (1, 'Some Singer')" + ) + self.assertEqual(1, cursor.rowcount) + connection.commit() + begin_requests = list( + filter( + lambda msg: isinstance(msg, BeginTransactionRequest), + self.spanner_service.requests, + ) + ) + self.assertEqual(1, len(begin_requests)) + self.assertEqual(begin_requests[0].options.isolation_level, level) + 
MockServerTestBase.spanner_service.clear_requests() + + def test_transaction_isolation_level(self): + connection = Connection(self.instance, self.database) + for level in [ + TransactionOptions.IsolationLevel.ISOLATION_LEVEL_UNSPECIFIED, + TransactionOptions.IsolationLevel.REPEATABLE_READ, + TransactionOptions.IsolationLevel.SERIALIZABLE, + ]: + connection.begin(isolation_level=level) + with connection.cursor() as cursor: + cursor.execute( + "insert into singers (id, name) values (1, 'Some Singer')" + ) + self.assertEqual(1, cursor.rowcount) + connection.commit() + begin_requests = list( + filter( + lambda msg: isinstance(msg, BeginTransactionRequest), + self.spanner_service.requests, + ) + ) + self.assertEqual(1, len(begin_requests)) + self.assertEqual(begin_requests[0].options.isolation_level, level) + MockServerTestBase.spanner_service.clear_requests() + + def test_begin_isolation_level(self): + connection = Connection(self.instance, self.database) + for level in [ + TransactionOptions.IsolationLevel.REPEATABLE_READ, + TransactionOptions.IsolationLevel.SERIALIZABLE, + ]: + isolation_level_name = level.name.replace("_", " ") + with connection.cursor() as cursor: + cursor.execute(f"begin isolation level {isolation_level_name}") + cursor.execute( + "insert into singers (id, name) values (1, 'Some Singer')" + ) + self.assertEqual(1, cursor.rowcount) + connection.commit() + begin_requests = list( + filter( + lambda msg: isinstance(msg, BeginTransactionRequest), + self.spanner_service.requests, + ) + ) + self.assertEqual(1, len(begin_requests)) + self.assertEqual(begin_requests[0].options.isolation_level, level) + MockServerTestBase.spanner_service.clear_requests() + + def test_begin_invalid_isolation_level(self): + connection = Connection(self.instance, self.database) + with connection.cursor() as cursor: + with self.assertRaises(Unknown): + cursor.execute("begin isolation level does_not_exist") diff --git a/tests/mockserver_tests/test_request_id_header.py b/tests/mockserver_tests/test_request_id_header.py new file mode 100644 index 0000000000..055d9d97b5 --- /dev/null +++ b/tests/mockserver_tests/test_request_id_header.py @@ -0,0 +1,294 @@ +# Copyright 2025 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
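+ +# These tests verify the request-id header segments (recorded by the +# _x_goog_request_id_interceptor) that the client attaches to unary and +# streaming RPCs against the mock Spanner server.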
+ +import random +import threading + +from google.cloud.spanner_v1 import ( + BatchCreateSessionsRequest, + CreateSessionRequest, + ExecuteSqlRequest, + BeginTransactionRequest, +) +from google.cloud.spanner_v1.request_id_header import REQ_RAND_PROCESS_ID +from google.cloud.spanner_v1.testing.mock_spanner import SpannerServicer +from tests.mockserver_tests.mock_server_test_base import ( + MockServerTestBase, + add_select1_result, + aborted_status, + add_error, + unavailable_status, +) +from google.cloud.spanner_v1.database_sessions_manager import TransactionType + + +class TestRequestIDHeader(MockServerTestBase): + def tearDown(self): + self.database._x_goog_request_id_interceptor.reset() + + def test_snapshot_execute_sql(self): + add_select1_result() + if not getattr(self.database, "_interceptors", None): + self.database._interceptors = MockServerTestBase._interceptors + with self.database.snapshot() as snapshot: + results = snapshot.execute_sql("select 1") + result_list = [] + for row in results: + result_list.append(row) + self.assertEqual(1, row[0]) + self.assertEqual(1, len(result_list)) + requests = self.spanner_service.requests + self.assert_requests_sequence( + requests, + [ExecuteSqlRequest], + TransactionType.READ_ONLY, + allow_multiple_batch_create=True, + ) + NTH_CLIENT = self.database._nth_client_id + CHANNEL_ID = self.database._channel_id + got_stream_segments, got_unary_segments = self.canonicalize_request_id_headers() + # Filter out CreateSessionRequest unary segments for comparison + filtered_unary_segments = [ + seg for seg in got_unary_segments if not seg[0].endswith("/CreateSession") + ] + want_unary_segments = [ + ( + "/google.spanner.v1.Spanner/BatchCreateSessions", + (1, REQ_RAND_PROCESS_ID, NTH_CLIENT, CHANNEL_ID, 1, 1), + ) + ] + # Dynamically determine the expected sequence number for ExecuteStreamingSql + session_requests_before = 0 + for req in requests: + if isinstance(req, (BatchCreateSessionsRequest, CreateSessionRequest)): + session_requests_before += 1 + elif isinstance(req, ExecuteSqlRequest): + break + want_stream_segments = [ + ( + "/google.spanner.v1.Spanner/ExecuteStreamingSql", + ( + 1, + REQ_RAND_PROCESS_ID, + NTH_CLIENT, + CHANNEL_ID, + 1 + session_requests_before, + 1, + ), + ) + ] + assert filtered_unary_segments == want_unary_segments + assert got_stream_segments == want_stream_segments + + def test_snapshot_read_concurrent(self): + add_select1_result() + db = self.database + with db.snapshot() as snapshot: + rows = snapshot.execute_sql("select 1") + for row in rows: + _ = row + + def select1(): + with db.snapshot() as snapshot: + rows = snapshot.execute_sql("select 1") + res_list = [] + for row in rows: + self.assertEqual(1, row[0]) + res_list.append(row) + self.assertEqual(1, len(res_list)) + + n = 10 + threads = [] + for i in range(n): + th = threading.Thread(target=select1, name=f"snapshot-select1-{i}") + threads.append(th) + th.start() + random.shuffle(threads) + for thread in threads: + thread.join() + requests = self.spanner_service.requests + # Allow for an extra request due to multiplexed session creation + expected_min = 2 + n + expected_max = expected_min + 1 + assert ( + expected_min <= len(requests) <= expected_max + ), f"Expected {expected_min} or {expected_max} requests, got {len(requests)}: {requests}" + client_id = db._nth_client_id + channel_id = db._channel_id + got_stream_segments, got_unary_segments = self.canonicalize_request_id_headers() + want_unary_segments = [ + ( + "/google.spanner.v1.Spanner/BatchCreateSessions", + (1, 
REQ_RAND_PROCESS_ID, client_id, channel_id, 1, 1), + ), + ] + assert any(seg == want_unary_segments[0] for seg in got_unary_segments) + + # Dynamically determine the expected sequence numbers for ExecuteStreamingSql + session_requests_before = 0 + for req in requests: + if isinstance(req, (BatchCreateSessionsRequest, CreateSessionRequest)): + session_requests_before += 1 + elif isinstance(req, ExecuteSqlRequest): + break + want_stream_segments = [ + ( + "/google.spanner.v1.Spanner/ExecuteStreamingSql", + ( + 1, + REQ_RAND_PROCESS_ID, + client_id, + channel_id, + session_requests_before + i, + 1, + ), + ) + for i in range(1, n + 2) + ] + assert got_stream_segments == want_stream_segments + + def test_database_run_in_transaction_retries_on_abort(self): + counters = dict(aborted=0) + want_failed_attempts = 2 + + def select_in_txn(txn): + results = txn.execute_sql("select 1") + for row in results: + _ = row + + if counters["aborted"] < want_failed_attempts: + counters["aborted"] += 1 + add_error(SpannerServicer.Commit.__name__, aborted_status()) + + add_select1_result() + if not getattr(self.database, "_interceptors", None): + self.database._interceptors = MockServerTestBase._interceptors + + self.database.run_in_transaction(select_in_txn) + + def test_database_execute_partitioned_dml_request_id(self): + add_select1_result() + if not getattr(self.database, "_interceptors", None): + self.database._interceptors = MockServerTestBase._interceptors + _ = self.database.execute_partitioned_dml("select 1") + requests = self.spanner_service.requests + self.assert_requests_sequence( + requests, + [BeginTransactionRequest, ExecuteSqlRequest], + TransactionType.PARTITIONED, + allow_multiple_batch_create=True, + ) + got_stream_segments, got_unary_segments = self.canonicalize_request_id_headers() + NTH_CLIENT = self.database._nth_client_id + CHANNEL_ID = self.database._channel_id + # Allow for extra unary segments due to session creation + filtered_unary_segments = [ + seg for seg in got_unary_segments if not seg[0].endswith("/CreateSession") + ] + # Find the actual sequence number for BeginTransaction + begin_txn_seq = None + for seg in filtered_unary_segments: + if seg[0].endswith("/BeginTransaction"): + begin_txn_seq = seg[1][4] + break + want_unary_segments = [ + ( + "/google.spanner.v1.Spanner/BatchCreateSessions", + (1, REQ_RAND_PROCESS_ID, NTH_CLIENT, CHANNEL_ID, 1, 1), + ), + ( + "/google.spanner.v1.Spanner/BeginTransaction", + (1, REQ_RAND_PROCESS_ID, NTH_CLIENT, CHANNEL_ID, begin_txn_seq, 1), + ), + ] + # Dynamically determine the expected sequence number for ExecuteStreamingSql + session_requests_before = 0 + for req in requests: + if isinstance(req, (BatchCreateSessionsRequest, CreateSessionRequest)): + session_requests_before += 1 + elif isinstance(req, ExecuteSqlRequest): + break + # Find the actual sequence number for ExecuteStreamingSql + exec_sql_seq = got_stream_segments[0][1][4] if got_stream_segments else None + want_stream_segments = [ + ( + "/google.spanner.v1.Spanner/ExecuteStreamingSql", + (1, REQ_RAND_PROCESS_ID, NTH_CLIENT, CHANNEL_ID, exec_sql_seq, 1), + ) + ] + assert all(seg in filtered_unary_segments for seg in want_unary_segments) + assert got_stream_segments == want_stream_segments + + def test_unary_retryable_error(self): + add_select1_result() + add_error(SpannerServicer.BatchCreateSessions.__name__, unavailable_status()) + + if not getattr(self.database, "_interceptors", None): + self.database._interceptors = MockServerTestBase._interceptors + with self.database.snapshot() 
as snapshot: + results = snapshot.execute_sql("select 1") + result_list = [] + for row in results: + result_list.append(row) + self.assertEqual(1, row[0]) + self.assertEqual(1, len(result_list)) + + requests = self.spanner_service.requests + self.assert_requests_sequence( + requests, + [ExecuteSqlRequest], + TransactionType.READ_ONLY, + allow_multiple_batch_create=True, + ) + + NTH_CLIENT = self.database._nth_client_id + CHANNEL_ID = self.database._channel_id + # Now ensure monotonicity of the received request-id segments. + got_stream_segments, got_unary_segments = self.canonicalize_request_id_headers() + + # Dynamically determine the expected sequence number for ExecuteStreamingSql + exec_sql_seq = got_stream_segments[0][1][4] if got_stream_segments else None + want_stream_segments = [ + ( + "/google.spanner.v1.Spanner/ExecuteStreamingSql", + (1, REQ_RAND_PROCESS_ID, NTH_CLIENT, CHANNEL_ID, exec_sql_seq, 1), + ) + ] + assert got_stream_segments == want_stream_segments + + def test_streaming_retryable_error(self): + add_select1_result() + add_error(SpannerServicer.ExecuteStreamingSql.__name__, unavailable_status()) + + if not getattr(self.database, "_interceptors", None): + self.database._interceptors = MockServerTestBase._interceptors + with self.database.snapshot() as snapshot: + results = snapshot.execute_sql("select 1") + result_list = [] + for row in results: + result_list.append(row) + self.assertEqual(1, row[0]) + self.assertEqual(1, len(result_list)) + + requests = self.spanner_service.requests + self.assert_requests_sequence( + requests, + [ExecuteSqlRequest, ExecuteSqlRequest], + TransactionType.READ_ONLY, + allow_multiple_batch_create=True, + ) + + def canonicalize_request_id_headers(self): + src = self.database._x_goog_request_id_interceptor + return src._stream_req_segments, src._unary_req_segments diff --git a/tests/mockserver_tests/test_tags.py b/tests/mockserver_tests/test_tags.py new file mode 100644 index 0000000000..9e35517797 --- /dev/null +++ b/tests/mockserver_tests/test_tags.py @@ -0,0 +1,241 @@ +# Copyright 2024 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
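+ +# These tests verify that request tags and transaction tags set on the dbapi +# connection and cursor are propagated to the outgoing Spanner RPCs.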
+ +from google.cloud.spanner_dbapi import Connection +from google.cloud.spanner_v1 import ( + ExecuteSqlRequest, + BeginTransactionRequest, + TypeCode, + CommitRequest, +) +from tests.mockserver_tests.mock_server_test_base import ( + MockServerTestBase, + add_single_result, +) +from tests._helpers import is_multiplexed_enabled +from google.cloud.spanner_v1.database_sessions_manager import TransactionType + + +class TestTags(MockServerTestBase): + @classmethod + def setup_class(cls): + super().setup_class() + add_single_result( + "select name from singers", "name", TypeCode.STRING, [("Some Singer",)] + ) + + def test_select_autocommit_no_tags(self): + connection = Connection(self.instance, self.database) + connection.autocommit = True + request = self._execute_and_verify_select_singers(connection) + self.assertEqual("", request.request_options.request_tag) + self.assertEqual("", request.request_options.transaction_tag) + + def test_select_autocommit_with_request_tag(self): + connection = Connection(self.instance, self.database) + connection.autocommit = True + request = self._execute_and_verify_select_singers( + connection, request_tag="my_tag" + ) + self.assertEqual("my_tag", request.request_options.request_tag) + self.assertEqual("", request.request_options.transaction_tag) + + def test_select_read_only_transaction_no_tags(self): + connection = Connection(self.instance, self.database) + connection.autocommit = False + connection.read_only = True + request = self._execute_and_verify_select_singers(connection) + self.assertEqual("", request.request_options.request_tag) + self.assertEqual("", request.request_options.transaction_tag) + connection.commit() + requests = self.spanner_service.requests + self.assert_requests_sequence( + requests, + [BeginTransactionRequest, ExecuteSqlRequest], + TransactionType.READ_ONLY, + ) + + def test_select_read_only_transaction_with_request_tag(self): + connection = Connection(self.instance, self.database) + connection.autocommit = False + connection.read_only = True + request = self._execute_and_verify_select_singers( + connection, request_tag="my_tag" + ) + self.assertEqual("my_tag", request.request_options.request_tag) + self.assertEqual("", request.request_options.transaction_tag) + connection.commit() + requests = self.spanner_service.requests + self.assert_requests_sequence( + requests, + [BeginTransactionRequest, ExecuteSqlRequest], + TransactionType.READ_ONLY, + ) + + def test_select_read_only_transaction_with_transaction_tag(self): + connection = Connection(self.instance, self.database) + connection.autocommit = False + connection.read_only = True + connection.transaction_tag = "my_transaction_tag" + self._execute_and_verify_select_singers(connection) + self._execute_and_verify_select_singers(connection) + + self.assertEqual("my_transaction_tag", connection.transaction_tag) + connection.commit() + requests = self.spanner_service.requests + self.assert_requests_sequence( + requests, + [BeginTransactionRequest, ExecuteSqlRequest, ExecuteSqlRequest], + TransactionType.READ_ONLY, + ) + # Transaction tags are not supported for read-only transactions. 
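+ # When multiplexed sessions are enabled, an extra session-creation request + # precedes the transaction, shifting the expected request indexes by one.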
+ mux_enabled = is_multiplexed_enabled(TransactionType.READ_ONLY) + tag_idx = 3 if mux_enabled else 2 + self.assertEqual("", requests[tag_idx].request_options.transaction_tag) + self.assertEqual("", requests[tag_idx + 1].request_options.transaction_tag) + + def test_select_read_write_transaction_no_tags(self): + connection = Connection(self.instance, self.database) + connection.autocommit = False + request = self._execute_and_verify_select_singers(connection) + self.assertEqual("", request.request_options.request_tag) + self.assertEqual("", request.request_options.transaction_tag) + connection.commit() + requests = self.spanner_service.requests + self.assert_requests_sequence( + requests, + [BeginTransactionRequest, ExecuteSqlRequest, CommitRequest], + TransactionType.READ_WRITE, + ) + + def test_select_read_write_transaction_with_request_tag(self): + connection = Connection(self.instance, self.database) + connection.autocommit = False + request = self._execute_and_verify_select_singers( + connection, request_tag="my_tag" + ) + self.assertEqual("my_tag", request.request_options.request_tag) + self.assertEqual("", request.request_options.transaction_tag) + connection.commit() + requests = self.spanner_service.requests + self.assert_requests_sequence( + requests, + [BeginTransactionRequest, ExecuteSqlRequest, CommitRequest], + TransactionType.READ_WRITE, + ) + + def test_select_read_write_transaction_with_transaction_tag(self): + connection = Connection(self.instance, self.database) + connection.autocommit = False + connection.transaction_tag = "my_transaction_tag" + self._execute_and_verify_select_singers(connection) + self._execute_and_verify_select_singers(connection) + + self.assertIsNone(connection.transaction_tag) + connection.commit() + requests = self.spanner_service.requests + self.assert_requests_sequence( + requests, + [ + BeginTransactionRequest, + ExecuteSqlRequest, + ExecuteSqlRequest, + CommitRequest, + ], + TransactionType.READ_WRITE, + ) + mux_enabled = is_multiplexed_enabled(TransactionType.READ_WRITE) + tag_idx = 3 if mux_enabled else 2 + self.assertEqual( + "my_transaction_tag", requests[tag_idx].request_options.transaction_tag + ) + self.assertEqual( + "my_transaction_tag", requests[tag_idx + 1].request_options.transaction_tag + ) + self.assertEqual( + "my_transaction_tag", requests[tag_idx + 2].request_options.transaction_tag + ) + + def test_select_read_write_transaction_with_transaction_and_request_tag(self): + connection = Connection(self.instance, self.database) + connection.autocommit = False + connection.transaction_tag = "my_transaction_tag" + self._execute_and_verify_select_singers(connection, request_tag="my_tag1") + self._execute_and_verify_select_singers(connection, request_tag="my_tag2") + + self.assertIsNone(connection.transaction_tag) + connection.commit() + requests = self.spanner_service.requests + self.assert_requests_sequence( + requests, + [ + BeginTransactionRequest, + ExecuteSqlRequest, + ExecuteSqlRequest, + CommitRequest, + ], + TransactionType.READ_WRITE, + ) + mux_enabled = is_multiplexed_enabled(TransactionType.READ_WRITE) + tag_idx = 3 if mux_enabled else 2 + self.assertEqual( + "my_transaction_tag", requests[tag_idx].request_options.transaction_tag + ) + self.assertEqual("my_tag1", requests[tag_idx].request_options.request_tag) + self.assertEqual( + "my_transaction_tag", requests[tag_idx + 1].request_options.transaction_tag + ) + self.assertEqual("my_tag2", requests[tag_idx + 1].request_options.request_tag) + self.assertEqual( + 
"my_transaction_tag", requests[tag_idx + 2].request_options.transaction_tag + ) + + def test_request_tag_is_cleared(self): + connection = Connection(self.instance, self.database) + connection.autocommit = True + with connection.cursor() as cursor: + cursor.request_tag = "my_tag" + cursor.execute("select name from singers") + # This query will not have a request tag. + cursor.execute("select name from singers") + requests = self.spanner_service.requests + + # Filter for SQL requests calls + sql_requests = [ + request for request in requests if isinstance(request, ExecuteSqlRequest) + ] + + self.assertTrue(isinstance(sql_requests[0], ExecuteSqlRequest)) + self.assertTrue(isinstance(sql_requests[1], ExecuteSqlRequest)) + self.assertEqual("my_tag", sql_requests[0].request_options.request_tag) + self.assertEqual("", sql_requests[1].request_options.request_tag) + + def _execute_and_verify_select_singers( + self, connection: Connection, request_tag: str = "", transaction_tag: str = "" + ) -> ExecuteSqlRequest: + with connection.cursor() as cursor: + if request_tag: + cursor.request_tag = request_tag + cursor.execute("select name from singers") + result_list = cursor.fetchall() + for row in result_list: + self.assertEqual("Some Singer", row[0]) + self.assertEqual(1, len(result_list)) + requests = self.spanner_service.requests + return next( + request + for request in requests + if isinstance(request, ExecuteSqlRequest) + and request.sql == "select name from singers" + ) diff --git a/tests/system/_helpers.py b/tests/system/_helpers.py new file mode 100644 index 0000000000..1fc897b39c --- /dev/null +++ b/tests/system/_helpers.py @@ -0,0 +1,168 @@ +# Copyright 2021 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import operator +import os +import time + +from google.api_core import exceptions +from google.cloud.spanner_v1 import instance as instance_mod +from tests import _fixtures +from test_utils import retry +from test_utils import system + + +CREATE_INSTANCE_ENVVAR = "GOOGLE_CLOUD_TESTS_CREATE_SPANNER_INSTANCE" +CREATE_INSTANCE = os.getenv(CREATE_INSTANCE_ENVVAR) is not None + +INSTANCE_ID_ENVVAR = "GOOGLE_CLOUD_TESTS_SPANNER_INSTANCE" +INSTANCE_ID_DEFAULT = "google-cloud-python-systest" +INSTANCE_ID = os.environ.get(INSTANCE_ID_ENVVAR, INSTANCE_ID_DEFAULT) + +API_ENDPOINT_ENVVAR = "GOOGLE_CLOUD_TESTS_SPANNER_HOST" +API_ENDPOINT = os.getenv(API_ENDPOINT_ENVVAR) + +SKIP_BACKUP_TESTS_ENVVAR = "SKIP_BACKUP_TESTS" +SKIP_BACKUP_TESTS = os.getenv(SKIP_BACKUP_TESTS_ENVVAR) is not None + +INSTANCE_OPERATION_TIMEOUT_IN_SECONDS = int( + os.getenv("SPANNER_INSTANCE_OPERATION_TIMEOUT_IN_SECONDS", 560) +) +DATABASE_OPERATION_TIMEOUT_IN_SECONDS = int( + os.getenv("SPANNER_DATABASE_OPERATION_TIMEOUT_IN_SECONDS", 120) +) +BACKUP_OPERATION_TIMEOUT_IN_SECONDS = int( + os.getenv("SPANNER_BACKUP_OPERATION_TIMEOUT_IN_SECONDS", 1200) +) + +USE_EMULATOR_ENVVAR = "SPANNER_EMULATOR_HOST" +USE_EMULATOR = os.getenv(USE_EMULATOR_ENVVAR) is not None + +DATABASE_DIALECT_ENVVAR = "SPANNER_DATABASE_DIALECT" +DATABASE_DIALECT = os.getenv(DATABASE_DIALECT_ENVVAR) + +EMULATOR_PROJECT_ENVVAR = "GCLOUD_PROJECT" +EMULATOR_PROJECT_DEFAULT = "emulator-test-project" +EMULATOR_PROJECT = os.getenv(EMULATOR_PROJECT_ENVVAR, EMULATOR_PROJECT_DEFAULT) + + +DDL_STATEMENTS = ( + _fixtures.PG_DDL_STATEMENTS + if DATABASE_DIALECT == "POSTGRESQL" + else ( + _fixtures.EMULATOR_DDL_STATEMENTS if USE_EMULATOR else _fixtures.DDL_STATEMENTS + ) +) + +PROTO_COLUMNS_DDL_STATEMENTS = _fixtures.PROTO_COLUMNS_DDL_STATEMENTS + +retry_true = retry.RetryResult(operator.truth) +retry_false = retry.RetryResult(operator.not_) + +retry_503 = retry.RetryErrors(exceptions.ServiceUnavailable) +retry_429_503 = retry.RetryErrors( + exceptions.TooManyRequests, exceptions.ServiceUnavailable, 8 +) +retry_maybe_aborted_txn = retry.RetryErrors(exceptions.Aborted) +retry_maybe_conflict = retry.RetryErrors(exceptions.Conflict) + + +def _has_all_ddl(database): + # Predicate used to retry until the eventually consistent backend + # reports all DDL statements as applied. + return len(database.ddl_statements) == len(DDL_STATEMENTS) + + +retry_has_all_dll = retry.RetryInstanceState(_has_all_ddl) + + +def scrub_referencing_databases(to_scrub, db_list): + for db_name in db_list: + db = to_scrub.database(db_name.split("/")[-1]) + try: + retry_429_503(db.delete)() + except exceptions.NotFound: # lost the race + pass + + +def scrub_instance_backups(to_scrub): + try: + for backup_pb in to_scrub.list_backups(): + # Backup cannot be deleted while referencing databases exist. + scrub_referencing_databases(to_scrub, backup_pb.referencing_databases) + bkp = instance_mod.Backup.from_pb(backup_pb, to_scrub) + try: + # Instance cannot be deleted while backups exist. + retry_429_503(bkp.delete)() + except exceptions.NotFound: # lost the race + pass + except exceptions.MethodNotImplemented: + # The CI emulator raises 501: local versions seem fine.
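+ # Treat this as though there were no backups to scrub.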
+ pass + + +def scrub_instance_ignore_not_found(to_scrub): + """Helper for :func:`cleanup_old_instances`""" + scrub_instance_backups(to_scrub) + + for database_pb in to_scrub.list_databases(): + db = to_scrub.database(database_pb.name.split("/")[-1]) + db.reload() + try: + if db.enable_drop_protection: + db.enable_drop_protection = False + operation = db.update(["enable_drop_protection"]) + operation.result(DATABASE_OPERATION_TIMEOUT_IN_SECONDS) + except exceptions.NotFound: + pass + + try: + retry_429_503(to_scrub.delete)() + except exceptions.NotFound: + pass + + +def cleanup_old_instances(spanner_client): + cutoff = int(time.time()) - 3 * 60 * 60 # three hours ago + instance_filter = "labels.python-spanner-systests:true" + + for instance_pb in spanner_client.list_instances(filter_=instance_filter): + instance = instance_mod.Instance.from_pb(instance_pb, spanner_client) + + if "created" in instance.labels: + create_time = int(instance.labels["created"]) + + if create_time <= cutoff: + scrub_instance_ignore_not_found(instance) + + +def unique_id(prefix, separator="-"): + return f"{prefix}{system.unique_resource_id(separator)}" + + +class FauxCall: + def __init__(self, code, details="FauxCall"): + self._code = code + self._details = details + + def initial_metadata(self): + return {} + + def trailing_metadata(self): + return {} + + def code(self): + return self._code + + def details(self): + return self._details diff --git a/tests/system/_sample_data.py b/tests/system/_sample_data.py new file mode 100644 index 0000000000..f23110c5dd --- /dev/null +++ b/tests/system/_sample_data.py @@ -0,0 +1,122 @@ +# Copyright 2021 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License.
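+ +# Canned sample rows and row-comparison helpers shared by the system tests.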
+ +import datetime +import math + +from google.api_core import datetime_helpers +from google.cloud._helpers import UTC +from google.cloud import spanner_v1 +from .testdata import singer_pb2 + +TABLE = "contacts" +COLUMNS = ("contact_id", "first_name", "last_name", "email") +ROW_DATA = ( + (1, "Phred", "Phlyntstone", "phred@example.com"), + (2, "Bharney", "Rhubble", "bharney@example.com"), + (3, "Wylma", "Phlyntstone", "wylma@example.com"), +) +BATCH_WRITE_ROW_DATA = ( + (1, "Phred", "Phlyntstone", "phred@example.com"), + (2, "Bharney", "Rhubble", "bharney@example.com"), + (3, "Wylma", "Phlyntstone", "wylma@example.com"), + (4, "Pebbles", "Phlyntstone", "pebbles@example.com"), + (5, "Betty", "Rhubble", "betty@example.com"), + (6, "Slate", "Stephenson", "slate@example.com"), +) +ALL = spanner_v1.KeySet(all_=True) +SQL = "SELECT * FROM contacts ORDER BY contact_id" + +COUNTERS_TABLE = "counters" +COUNTERS_COLUMNS = ("name", "value") + +SINGERS_PROTO_TABLE = "singers" +SINGERS_PROTO_COLUMNS = ( + "singer_id", + "first_name", + "last_name", + "singer_info", + "singer_genre", +) +SINGER_INFO_1 = singer_pb2.SingerInfo() +SINGER_GENRE_1 = singer_pb2.Genre.ROCK +SINGER_INFO_1.singer_id = 1 +SINGER_INFO_1.birth_date = "January" +SINGER_INFO_1.nationality = "Country1" +SINGER_INFO_1.genre = SINGER_GENRE_1 +SINGER_INFO_2 = singer_pb2.SingerInfo() +SINGER_GENRE_2 = singer_pb2.Genre.FOLK +SINGER_INFO_2.singer_id = 2 +SINGER_INFO_2.birth_date = "February" +SINGER_INFO_2.nationality = "Country2" +SINGER_INFO_2.genre = SINGER_GENRE_2 +SINGERS_PROTO_ROW_DATA = ( + (1, "Singer1", "Singer1", SINGER_INFO_1, SINGER_GENRE_1), + (2, "Singer2", "Singer2", SINGER_INFO_2, SINGER_GENRE_2), +) + + +def _assert_timestamp(value, nano_value): + assert isinstance(value, datetime.datetime) + assert value.tzinfo is None + assert nano_value.tzinfo is UTC + + assert value.year == nano_value.year + assert value.month == nano_value.month + assert value.day == nano_value.day + assert value.hour == nano_value.hour + assert value.minute == nano_value.minute + assert value.second == nano_value.second + assert value.microsecond == nano_value.microsecond + + if isinstance(value, datetime_helpers.DatetimeWithNanoseconds): + assert value.nanosecond == nano_value.nanosecond + else: + assert value.microsecond * 1000 == nano_value.nanosecond + + +def _check_rows_data(rows_data, expected=ROW_DATA, recurse_into_lists=True): + assert len(rows_data) == len(expected) + + for row, expected in zip(rows_data, expected): + _check_row_data(row, expected, recurse_into_lists=recurse_into_lists) + + +def _check_row_data(row_data, expected, recurse_into_lists=True): + assert len(row_data) == len(expected) + + for found_cell, expected_cell in zip(row_data, expected): + _check_cell_data( + found_cell, expected_cell, recurse_into_lists=recurse_into_lists + ) + + +def _check_cell_data(found_cell, expected_cell, recurse_into_lists=True): + if isinstance(found_cell, datetime_helpers.DatetimeWithNanoseconds): + _assert_timestamp(expected_cell, found_cell) + + elif isinstance(found_cell, float) and math.isnan(found_cell): + assert math.isnan(expected_cell) + + elif isinstance(found_cell, list) and recurse_into_lists: + assert len(found_cell) == len(expected_cell) + + for found_item, expected_item in zip(found_cell, expected_cell): + _check_cell_data(found_item, expected_item) + + elif isinstance(found_cell, float) and not math.isinf(found_cell): + assert abs(found_cell - expected_cell) < 0.00001 + + else: + assert found_cell == expected_cell diff --git 
a/tests/system/conftest.py b/tests/system/conftest.py new file mode 100644 index 0000000000..bc94d065b2 --- /dev/null +++ b/tests/system/conftest.py @@ -0,0 +1,270 @@ +# Copyright 2021 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import datetime +import time + +import pytest + +from google.cloud import spanner_v1 +from google.cloud.spanner_admin_database_v1 import DatabaseDialect +from . import _helpers +from google.cloud.spanner_admin_database_v1.types.backup import ( + CreateBackupEncryptionConfig, +) + + +@pytest.fixture(scope="function") +def if_create_instance(): + if not _helpers.CREATE_INSTANCE: + pytest.skip(f"{_helpers.CREATE_INSTANCE_ENVVAR} not set in environment.") + + +@pytest.fixture(scope="function") +def no_create_instance(): + if _helpers.CREATE_INSTANCE: + pytest.skip(f"{_helpers.CREATE_INSTANCE_ENVVAR} set in environment.") + + +@pytest.fixture(scope="function") +def if_backup_tests(): + if _helpers.SKIP_BACKUP_TESTS: + pytest.skip(f"{_helpers.SKIP_BACKUP_TESTS_ENVVAR} set in environment.") + + +@pytest.fixture(scope="function") +def not_emulator(): + if _helpers.USE_EMULATOR: + pytest.skip(f"{_helpers.USE_EMULATOR_ENVVAR} set in environment.") + + +@pytest.fixture(scope="session") +def not_postgres(database_dialect): + if database_dialect == DatabaseDialect.POSTGRESQL: + pytest.skip( + f"{_helpers.DATABASE_DIALECT_ENVVAR} set to POSTGRESQL in environment." + ) + + +@pytest.fixture(scope="session") +def not_google_standard_sql(database_dialect): + if database_dialect == DatabaseDialect.GOOGLE_STANDARD_SQL: + pytest.skip( + f"{_helpers.DATABASE_DIALECT_ENVVAR} set to GOOGLE_STANDARD_SQL in environment." + ) + + +@pytest.fixture(scope="session") +def not_postgres_emulator(database_dialect): + if database_dialect == DatabaseDialect.POSTGRESQL and _helpers.USE_EMULATOR: + pytest.skip( + f"{_helpers.DATABASE_DIALECT_ENVVAR} set to POSTGRESQL and {_helpers.USE_EMULATOR_ENVVAR} set in " + "environment."
+ ) + + +@pytest.fixture(scope="session") +def database_dialect(): + return ( + DatabaseDialect[_helpers.DATABASE_DIALECT] + if _helpers.DATABASE_DIALECT + else DatabaseDialect.GOOGLE_STANDARD_SQL + ) + + +@pytest.fixture(scope="session") +def proto_descriptor_file(): + import os + + dirname = os.path.dirname(__file__) + filename = os.path.join(dirname, "testdata/descriptors.pb") + file = open(filename, "rb") + yield file.read() + file.close() + + +@pytest.fixture(scope="session") +def spanner_client(): + if _helpers.USE_EMULATOR: + from google.auth.credentials import AnonymousCredentials + + credentials = AnonymousCredentials() + return spanner_v1.Client( + project=_helpers.EMULATOR_PROJECT, + credentials=credentials, + ) + else: + client_options = {"api_endpoint": _helpers.API_ENDPOINT} + return spanner_v1.Client( + client_options=client_options + ) # use google.auth.default credentials + + +@pytest.fixture(scope="session") +def instance_operation_timeout(): + return _helpers.INSTANCE_OPERATION_TIMEOUT_IN_SECONDS + + +@pytest.fixture(scope="session") +def database_operation_timeout(): + return _helpers.DATABASE_OPERATION_TIMEOUT_IN_SECONDS + + +@pytest.fixture(scope="session") +def backup_operation_timeout(): + return _helpers.BACKUP_OPERATION_TIMEOUT_IN_SECONDS + + +@pytest.fixture(scope="session") +def shared_instance_id(): + if _helpers.CREATE_INSTANCE: + return f"{_helpers.unique_id('google-cloud')}" + + return _helpers.INSTANCE_ID + + +@pytest.fixture(scope="session") +def instance_configs(spanner_client): + configs = list(_helpers.retry_503(spanner_client.list_instance_configs)()) + + if not _helpers.USE_EMULATOR: + # Defend against back-end returning configs for regions we aren't + # actually allowed to use. + configs = [config for config in configs if "-us-" in config.name] + + yield configs + + +@pytest.fixture(scope="session") +def instance_config(instance_configs): + if not instance_configs: + raise ValueError("No instance configs found.") + + import random + + us_configs = [ + config + for config in instance_configs + if config.display_name in ["us-south1", "us-east4"] + ] + + config = ( + random.choice(us_configs) if us_configs else random.choice(instance_configs) + ) + yield config + + +@pytest.fixture(scope="session") +def existing_instances(spanner_client): + instances = list(_helpers.retry_503(spanner_client.list_instances)()) + + yield instances + + +@pytest.fixture(scope="session") +def shared_instance( + spanner_client, + instance_operation_timeout, + shared_instance_id, + instance_config, + existing_instances, # evaluate before creating one +): + _helpers.cleanup_old_instances(spanner_client) + + if _helpers.CREATE_INSTANCE: + create_time = str(int(time.time())) + labels = {"python-spanner-systests": "true", "created": create_time} + + instance = spanner_client.instance( + shared_instance_id, instance_config.name, labels=labels + ) + created_op = _helpers.retry_429_503(instance.create)() + created_op.result(instance_operation_timeout) # block until completion + + else: # reuse existing instance + instance = spanner_client.instance(shared_instance_id) + instance.reload() + + yield instance + + if _helpers.CREATE_INSTANCE: + _helpers.retry_429_503(instance.delete)() + + +@pytest.fixture(scope="session") +def shared_database( + shared_instance, database_operation_timeout, database_dialect, proto_descriptor_file +): + database_name = _helpers.unique_id("test_database") + pool = spanner_v1.BurstyPool(labels={"testcase": "database_api"}) + if database_dialect ==
DatabaseDialect.POSTGRESQL: + database = shared_instance.database( + database_name, + pool=pool, + database_dialect=database_dialect, + ) + operation = database.create() + operation.result(database_operation_timeout) # raises on failure / timeout. + + operation = database.update_ddl(ddl_statements=_helpers.DDL_STATEMENTS) + operation.result(database_operation_timeout) # raises on failure / timeout. + + else: + database = shared_instance.database( + database_name, + ddl_statements=_helpers.DDL_STATEMENTS, + pool=pool, + database_dialect=database_dialect, + proto_descriptors=proto_descriptor_file, + ) + operation = database.create() + operation.result(database_operation_timeout) # raises on failure / timeout. + + yield database + + database.drop() + + +@pytest.fixture(scope="session") +def shared_backup(shared_instance, shared_database, backup_operation_timeout): + backup_name = _helpers.unique_id("test_backup") + expire_time = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta( + days=3 + ) + source_encryption_enum = CreateBackupEncryptionConfig.EncryptionType + source_encryption_config = CreateBackupEncryptionConfig( + encryption_type=source_encryption_enum.GOOGLE_DEFAULT_ENCRYPTION, + ) + backup = shared_instance.backup( + backup_name, + database=shared_database, + expire_time=expire_time, + encryption_config=source_encryption_config, + ) + operation = backup.create() + operation.result(backup_operation_timeout) # raises on failure / timeout. + + yield backup + + backup.delete() + + +@pytest.fixture(scope="function") +def databases_to_delete(): + to_delete = [] + + yield to_delete + + for database in to_delete: + database.drop() diff --git a/tests/system/test_backup_api.py b/tests/system/test_backup_api.py new file mode 100644 index 0000000000..6ffc74283e --- /dev/null +++ b/tests/system/test_backup_api.py @@ -0,0 +1,566 @@ +# Copyright 2021 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import datetime +import time +from google.cloud.spanner_admin_database_v1.types.common import DatabaseDialect + +import pytest + +from google.api_core import exceptions +from google.cloud import spanner_v1 +from . import _helpers + +skip_env_reason = f"""\ +Remove {_helpers.SKIP_BACKUP_TESTS_ENVVAR} from environment to run these tests.\ +""" +skip_emulator_reason = "Backup operations not supported by emulator." 
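+ +# Module-wide markers: every test in this file is skipped when backup tests +# are disabled or when running against the emulator.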
+ +pytestmark = [ + pytest.mark.skipif(_helpers.SKIP_BACKUP_TESTS, reason=skip_env_reason), + pytest.mark.skipif(_helpers.USE_EMULATOR, reason=skip_emulator_reason), +] + + +@pytest.fixture(scope="session") +def same_config_instance(spanner_client, shared_instance, instance_operation_timeout): + current_config = shared_instance.configuration_name + same_config_instance_id = _helpers.unique_id("same-config") + create_time = str(int(time.time())) + labels = {"python-spanner-systests": "true", "created": create_time} + same_config_instance = spanner_client.instance( + same_config_instance_id, current_config, labels=labels + ) + op = same_config_instance.create() + op.result(instance_operation_timeout) + + yield same_config_instance + + _helpers.scrub_instance_ignore_not_found(same_config_instance) + + +@pytest.fixture(scope="session") +def diff_config(shared_instance, instance_configs, not_postgres): + current_config = shared_instance.configuration_name + for config in reversed(instance_configs): + if "-us-" in config.name and config.name != current_config: + return config.name + return None + + +@pytest.fixture(scope="session") +def diff_config_instance( + spanner_client, + shared_instance, + instance_operation_timeout, + diff_config, +): + if diff_config is None: + return None + + diff_config_instance_id = _helpers.unique_id("diff-config") + create_time = str(int(time.time())) + labels = {"python-spanner-systests": "true", "created": create_time} + diff_config_instance = spanner_client.instance( + diff_config_instance_id, diff_config, labels=labels + ) + op = diff_config_instance.create() + op.result(instance_operation_timeout) + + yield diff_config_instance + + _helpers.scrub_instance_ignore_not_found(diff_config_instance) + + +@pytest.fixture(scope="session") +def database_version_time(shared_database): + shared_database.reload() + diff = ( + datetime.datetime.now(datetime.timezone.utc) + - shared_database.earliest_version_time + ) + return shared_database.earliest_version_time + diff / 2 + + +@pytest.fixture(scope="session") +def second_database( + shared_instance, database_operation_timeout, database_dialect, proto_descriptor_file +): + database_name = _helpers.unique_id("test_database2") + pool = spanner_v1.BurstyPool(labels={"testcase": "database_api"}) + if database_dialect == DatabaseDialect.POSTGRESQL: + database = shared_instance.database( + database_name, + pool=pool, + database_dialect=database_dialect, + ) + operation = database.create() + operation.result(database_operation_timeout) # raises on failure / timeout. + + operation = database.update_ddl(ddl_statements=_helpers.DDL_STATEMENTS) + operation.result(database_operation_timeout) # raises on failure / timeout. + + else: + database = shared_instance.database( + database_name, + ddl_statements=_helpers.DDL_STATEMENTS, + pool=pool, + database_dialect=database_dialect, + proto_descriptors=proto_descriptor_file, + ) + operation = database.create() + operation.result(database_operation_timeout) # raises on failure / timeout. 
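+ # Hand the database to the tests; it is dropped when the session ends.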
+ + yield database + + database.drop() + + +@pytest.fixture(scope="function") +def backups_to_delete(): + to_delete = [] + + yield to_delete + + for backup in to_delete: + _helpers.retry_429_503(backup.delete)() + + +def test_backup_workflow( + shared_instance, + shared_database, + database_dialect, + database_version_time, + backups_to_delete, + databases_to_delete, +): + from google.cloud.spanner_admin_database_v1 import ( + CreateBackupEncryptionConfig, + EncryptionConfig, + EncryptionInfo, + RestoreDatabaseEncryptionConfig, + ) + + backup_id = _helpers.unique_id("backup_id", separator="_") + expire_time = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta( + days=3 + ) + encryption_enum = CreateBackupEncryptionConfig.EncryptionType + encryption_config = CreateBackupEncryptionConfig( + encryption_type=encryption_enum.GOOGLE_DEFAULT_ENCRYPTION, + ) + + # Create backup. + backup = shared_instance.backup( + backup_id, + database=shared_database, + expire_time=expire_time, + version_time=database_version_time, + encryption_config=encryption_config, + ) + operation = backup.create() + backups_to_delete.append(backup) + + # Check metadata. + metadata = operation.metadata + assert backup.name == metadata.name + assert shared_database.name == metadata.database + operation.result() # blocks indefinitely + + # Check backup object. + backup.reload() + assert shared_database.name == backup._database + assert expire_time == backup.expire_time + assert backup.create_time is not None + assert database_version_time == backup.version_time + assert backup.size_bytes is not None + assert backup.state is not None + assert ( + EncryptionInfo.Type.GOOGLE_DEFAULT_ENCRYPTION + == backup.encryption_info.encryption_type + ) + + # Update with valid argument. + valid_expire_time = datetime.datetime.now( + datetime.timezone.utc + ) + datetime.timedelta(days=7) + backup.update_expire_time(valid_expire_time) + assert valid_expire_time == backup.expire_time + + # Restore database to same instance. + restored_id = _helpers.unique_id("restored_db", separator="_") + encryption_config = RestoreDatabaseEncryptionConfig( + encryption_type=RestoreDatabaseEncryptionConfig.EncryptionType.GOOGLE_DEFAULT_ENCRYPTION, + ) + database = shared_instance.database( + restored_id, + encryption_config=encryption_config, + ) + databases_to_delete.append(database) + operation = database.restore(source=backup) + restored_db = operation.result() # blocks indefinitely + assert database_version_time == restored_db.restore_info.backup_info.version_time + + metadata = operation.metadata + assert database_version_time == metadata.backup_info.version_time + + database.reload() + expected_encryption_config = EncryptionConfig() + assert expected_encryption_config == database.encryption_config + assert database_dialect == database.database_dialect + + database.drop() + backup.delete() + assert not backup.exists() + + +def test_copy_backup_workflow( + shared_instance, + shared_backup, + backups_to_delete, +): + from google.cloud.spanner_admin_database_v1 import ( + CopyBackupEncryptionConfig, + EncryptionInfo, + ) + + backup_id = _helpers.unique_id("backup_id", separator="_") + expire_time = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta( + days=3 + ) + copy_encryption_enum = CopyBackupEncryptionConfig.EncryptionType + copy_encryption_config = CopyBackupEncryptionConfig( + encryption_type=copy_encryption_enum.GOOGLE_DEFAULT_ENCRYPTION, + ) + + # Reload the source backup created by the shared fixture.
+ shared_backup.reload() + # Create a copy backup + copy_backup = shared_instance.copy_backup( + backup_id=backup_id, + source_backup=shared_backup.name, + expire_time=expire_time, + encryption_config=copy_encryption_config, + ) + operation = copy_backup.create() + backups_to_delete.append(copy_backup) + + # Check metadata. + metadata = operation.metadata + assert copy_backup.name == metadata.name + operation.result() # blocks indefinitely + + # Check backup object. + copy_backup.reload() + assert expire_time == copy_backup.expire_time + assert copy_backup.create_time is not None + assert copy_backup.size_bytes is not None + assert copy_backup.state is not None + assert ( + EncryptionInfo.Type.GOOGLE_DEFAULT_ENCRYPTION + == copy_backup.encryption_info.encryption_type + ) + + # Update with valid argument. + valid_expire_time = datetime.datetime.now( + datetime.timezone.utc + ) + datetime.timedelta(days=7) + copy_backup.update_expire_time(valid_expire_time) + assert valid_expire_time == copy_backup.expire_time + + copy_backup.delete() + assert not copy_backup.exists() + + +def test_backup_create_w_version_time_dflt_to_create_time( + shared_instance, + shared_database, + backups_to_delete, + databases_to_delete, +): + backup_id = _helpers.unique_id("backup_id", separator="_") + expire_time = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta( + days=3 + ) + + # Create backup. + backup = shared_instance.backup( + backup_id, + database=shared_database, + expire_time=expire_time, + ) + operation = backup.create() + backups_to_delete.append(backup) + + # Check metadata. + metadata = operation.metadata + assert backup.name == metadata.name + assert shared_database.name == metadata.database + operation.result() # blocks indefinitely + + # Check backup object. 
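+ # With no explicit version_time, the backend defaults it to create_time.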
+ backup.reload() + assert shared_database.name == backup._database + assert backup.create_time is not None + assert backup.create_time == backup.version_time + + backup.delete() + assert not backup.exists() + + +def test_backup_create_w_invalid_expire_time(shared_instance, shared_database): + backup_id = _helpers.unique_id("backup_id", separator="_") + expire_time = datetime.datetime.now(datetime.timezone.utc) + + backup = shared_instance.backup( + backup_id, database=shared_database, expire_time=expire_time + ) + + with pytest.raises(exceptions.InvalidArgument): + op = backup.create() + op.result() # blocks indefinitely + + +def test_backup_create_w_invalid_version_time_past( + shared_instance, + shared_database, +): + backup_id = _helpers.unique_id("backup_id", separator="_") + expire_time = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta( + days=3 + ) + version_time = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta( + days=10 + ) + + backup = shared_instance.backup( + backup_id, + database=shared_database, + expire_time=expire_time, + version_time=version_time, + ) + + with pytest.raises(exceptions.InvalidArgument): + op = backup.create() + op.result() # blocks indefinitely + + +def test_backup_create_w_invalid_version_time_future( + shared_instance, + shared_database, +): + backup_id = _helpers.unique_id("backup_id", separator="_") + expire_time = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta( + days=3 + ) + version_time = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta( + days=2 + ) + + backup = shared_instance.backup( + backup_id, + database=shared_database, + expire_time=expire_time, + version_time=version_time, + ) + + with pytest.raises(exceptions.InvalidArgument): + op = backup.create() + op.result() # blocks indefinitely + + +def test_database_restore_to_diff_instance( + shared_instance, + shared_database, + backups_to_delete, + same_config_instance, + databases_to_delete, +): + backup_id = _helpers.unique_id("backup_id", separator="_") + expire_time = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta( + days=3 + ) + + # Create backup. + backup = shared_instance.backup( + backup_id, + database=shared_database, + expire_time=expire_time, + ) + op = backup.create() + backups_to_delete.append(backup) + op.result() + + # Restore database to different instance with same config. + restored_id = _helpers.unique_id("restored_db") + database = same_config_instance.database(restored_id) + databases_to_delete.append(database) + operation = database.restore(source=backup) + operation.result() # blocks indefinitely + + database.drop() + backup.delete() + assert not backup.exists() + + +def test_multi_create_cancel_update_error_restore_errors( + shared_instance, + shared_database, + second_database, + diff_config_instance, + backups_to_delete, + databases_to_delete, +): + backup_id_1 = _helpers.unique_id("backup_id1", separator="_") + backup_id_2 = _helpers.unique_id("backup_id2", separator="_") + expire_time = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta( + days=3 + ) + + backup1 = shared_instance.backup( + backup_id_1, database=shared_database, expire_time=expire_time + ) + backup2 = shared_instance.backup( + backup_id_2, database=second_database, expire_time=expire_time + ) + + # Create two backups. 
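+ # Both create operations run concurrently; the second is cancelled below.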
+ op1 = backup1.create() + backups_to_delete.append(backup1) + op2 = backup2.create() + backups_to_delete.append(backup2) + + backup1.reload() + assert not backup1.is_ready() + + backup2.reload() + assert not backup2.is_ready() + + # Cancel a create operation. + op2.cancel() + assert op2.cancelled() + + op1.result() # blocks indefinitely + backup1.reload() + assert backup1.is_ready() + + # Update expire time to invalid value. + max_expire_days = 366 # documented maximum + invalid_expire_time = datetime.datetime.now( + datetime.timezone.utc + ) + datetime.timedelta(days=max_expire_days + 1) + with pytest.raises(exceptions.InvalidArgument): + backup1.update_expire_time(invalid_expire_time) + + # Restore to existing database. + with pytest.raises(exceptions.AlreadyExists): + shared_database.restore(source=backup1) + + # Restore to instance with different config. + if diff_config_instance is not None: + new_db = diff_config_instance.database("diff_config") + + with pytest.raises(exceptions.InvalidArgument): + new_db.restore(source=backup1) + + +def test_instance_list_backups( + shared_instance, + shared_database, + second_database, + backups_to_delete, +): + # Remove un-scrubbed backups for the benefit of the count below. + _helpers.scrub_instance_backups(shared_instance) + + backup_id_1 = _helpers.unique_id("backup_id1", separator="_") + backup_id_2 = _helpers.unique_id("backup_id2", separator="_") + + expire_time_1 = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta( + days=21 + ) + expire_time_1_stamp = expire_time_1.strftime("%Y-%m-%dT%H:%M:%S.%fZ") + + backup1 = shared_instance.backup( + backup_id_1, + database=shared_database, + expire_time=expire_time_1, + ) + + expire_time_2 = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta( + days=1 + ) + backup2 = shared_instance.backup( + backup_id_2, database=second_database, expire_time=expire_time_2 + ) + + # Create two backups. + op1 = backup1.create() + backups_to_delete.append(backup1) + op1.result() # blocks indefinitely + backup1.reload() + + create_time_compare = datetime.datetime.now(datetime.timezone.utc) + create_time_stamp = create_time_compare.strftime("%Y-%m-%dT%H:%M:%S.%fZ") + + backup2.create() + # This test doesn't block for the result of the 'backup2.create()' call + # because it wants to find `backup2` in the upcoming search for + # backups matching 'state:CREATING': inherently racy, but probably + # safe, given how long it takes to create a backup (on the order of + # minutes, not seconds). + backups_to_delete.append(backup2) + + # List backups filtered by state. + filter_ = "state:CREATING" + for backup in shared_instance.list_backups(filter_=filter_): + assert backup.name == backup2.name + + # List backups filtered by backup name. + filter_ = f"name:{backup_id_1}" + for backup in shared_instance.list_backups(filter_=filter_): + assert backup.name == backup1.name + + # List backups filtered by database name. + filter_ = f"database:{shared_database.name}" + for backup in shared_instance.list_backups(filter_=filter_): + assert backup.name == backup1.name + + # List backups filtered by create time. + filter_ = f'create_time > "{create_time_stamp}"' + for backup in shared_instance.list_backups(filter_=filter_): + assert backup.name == backup2.name + + # List backups filtered by version time. + filter_ = f'version_time > "{create_time_stamp}"' + for backup in shared_instance.list_backups(filter_=filter_): + assert backup.name == backup2.name + + # List backups filtered by expire time.
+ filter_ = f'expire_time > "{expire_time_1_stamp}"' + for backup in shared_instance.list_backups(filter_=filter_): + assert backup.name == backup1.name + + # List backups filtered by size bytes. + # XXX: this one may only pass if other tests have run first, + # munging 'shared_database' so that its backup will be bigger? + filter_ = f"size_bytes < {backup1.size_bytes}" + for backup in shared_instance.list_backups(filter_=filter_): + assert backup.name == backup2.name + + # List backups using pagination. + count = 0 + for page in shared_instance.list_backups(page_size=1): + count += 1 + assert count == 2 diff --git a/tests/system/test_database_api.py b/tests/system/test_database_api.py new file mode 100644 index 0000000000..e3c18ece10 --- /dev/null +++ b/tests/system/test_database_api.py @@ -0,0 +1,929 @@ +# Copyright 2021 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import datetime +import time +import uuid + +import pytest + +from google.api_core import exceptions +from google.iam.v1 import policy_pb2 +from google.cloud import spanner_v1 +from google.cloud.spanner_v1.pool import FixedSizePool, PingingPool +from google.cloud.spanner_admin_database_v1 import DatabaseDialect +from google.cloud.spanner_v1 import DirectedReadOptions +from google.type import expr_pb2 +from . import _helpers +from . import _sample_data + + +DBAPI_OPERATION_TIMEOUT = 240  # seconds +FKADC_CUSTOMERS_COLUMNS = ("CustomerId", "CustomerName") +FKADC_SHOPPING_CARTS_COLUMNS = ("CartId", "CustomerId", "CustomerName") +ALL_KEYSET = spanner_v1.KeySet(all_=True) +DIRECTED_READ_OPTIONS = { + "include_replicas": { + "replica_selections": [ + { + "location": "us-west1", + "type_": DirectedReadOptions.ReplicaSelection.Type.READ_ONLY, + }, + ], + "auto_failover_disabled": True, + }, +} + + +@pytest.fixture(scope="module") +def multiregion_instance(spanner_client, instance_operation_timeout, not_postgres): + multi_region_instance_id = _helpers.unique_id("multi-region") + multi_region_config = "nam3" + config_name = "{}/instanceConfigs/{}".format( + spanner_client.project_name, multi_region_config + ) + create_time = str(int(time.time())) + labels = {"python-spanner-systests": "true", "created": create_time} + multiregion_instance = spanner_client.instance( + instance_id=multi_region_instance_id, + configuration_name=config_name, + labels=labels, + ) + operation = _helpers.retry_429_503(multiregion_instance.create)() + operation.result(instance_operation_timeout) + + yield multiregion_instance + + _helpers.retry_429_503(multiregion_instance.delete)() + + +def test_list_databases(shared_instance, shared_database): + # Since `shared_instance` is created fresh for the test session, the + # database created by the `shared_database` fixture will be the only one.
+ database_names = [database.name for database in shared_instance.list_databases()] + assert shared_database.name in database_names + + +def test_create_database(shared_instance, databases_to_delete, database_dialect): + pool = spanner_v1.BurstyPool(labels={"testcase": "create_database"}) + temp_db_id = _helpers.unique_id("temp_db") + temp_db = shared_instance.database( + temp_db_id, pool=pool, database_dialect=database_dialect + ) + operation = temp_db.create() + databases_to_delete.append(temp_db) + + # We want to make sure the operation completes. + operation.result(DBAPI_OPERATION_TIMEOUT) # raises on failure / timeout. + + database_ids = [database.name for database in shared_instance.list_databases()] + assert temp_db.name in database_ids + + +def test_database_binding_of_fixed_size_pool( + not_emulator, + shared_instance, + databases_to_delete, + not_postgres, + proto_descriptor_file, +): + temp_db_id = _helpers.unique_id("fixed_size_db", separator="_") + temp_db = shared_instance.database(temp_db_id) + + create_op = temp_db.create() + databases_to_delete.append(temp_db) + create_op.result(DBAPI_OPERATION_TIMEOUT) # raises on failure / timeout. + + # Create role and grant select permission on table contacts for parent role. + ddl_statements = _helpers.DDL_STATEMENTS + [ + "CREATE ROLE parent", + "GRANT SELECT ON TABLE contacts TO ROLE parent", + ] + operation = temp_db.update_ddl( + ddl_statements, proto_descriptors=proto_descriptor_file + ) + operation.result(DBAPI_OPERATION_TIMEOUT) # raises on failure / timeout. + + pool = FixedSizePool( + size=1, + default_timeout=500, + database_role="parent", + ) + database = shared_instance.database(temp_db_id, pool=pool) + assert database._pool.database_role == "parent" + + +def test_database_binding_of_pinging_pool( + not_emulator, + shared_instance, + databases_to_delete, + not_postgres, + proto_descriptor_file, +): + temp_db_id = _helpers.unique_id("binding_db", separator="_") + temp_db = shared_instance.database(temp_db_id) + + create_op = temp_db.create() + databases_to_delete.append(temp_db) + create_op.result(DBAPI_OPERATION_TIMEOUT) # raises on failure / timeout. + + # Create role and grant select permission on table contacts for parent role. + ddl_statements = _helpers.DDL_STATEMENTS + [ + "CREATE ROLE parent", + "GRANT SELECT ON TABLE contacts TO ROLE parent", + ] + operation = temp_db.update_ddl( + ddl_statements, proto_descriptors=proto_descriptor_file + ) + operation.result(DBAPI_OPERATION_TIMEOUT) # raises on failure / timeout. 
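+
+    # Sessions created through this pool are bound to the "parent" database
+    # role, so they should carry only the permissions granted to that role
+    # above.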
+    pool = PingingPool(
+        size=1,
+        default_timeout=500,
+        ping_interval=100,
+        database_role="parent",
+    )
+    database = shared_instance.database(temp_db_id, pool=pool)
+    assert database._pool.database_role == "parent"
+
+
+def test_create_database_pitr_invalid_retention_period(
+    not_emulator,  # PITR-lite features are not supported by the emulator
+    not_postgres,
+    shared_instance,
+):
+    pool = spanner_v1.BurstyPool(labels={"testcase": "create_database_pitr"})
+    temp_db_id = _helpers.unique_id("pitr_inv_db", separator="_")
+    retention_period = "0d"
+    ddl_statements = [
+        f"ALTER DATABASE {temp_db_id}"
+        f" SET OPTIONS (version_retention_period = '{retention_period}')"
+    ]
+    temp_db = shared_instance.database(
+        temp_db_id, pool=pool, ddl_statements=ddl_statements
+    )
+    with pytest.raises(exceptions.InvalidArgument):
+        temp_db.create()
+
+
+def test_create_database_pitr_success(
+    not_emulator,  # PITR-lite features are not supported by the emulator
+    not_postgres,
+    shared_instance,
+    databases_to_delete,
+):
+    pool = spanner_v1.BurstyPool(labels={"testcase": "create_database_pitr"})
+    temp_db_id = _helpers.unique_id("pitr_db", separator="_")
+    retention_period = "7d"
+    ddl_statements = [
+        f"ALTER DATABASE {temp_db_id}"
+        f" SET OPTIONS (version_retention_period = '{retention_period}')"
+    ]
+    temp_db = shared_instance.database(
+        temp_db_id, pool=pool, ddl_statements=ddl_statements
+    )
+    operation = temp_db.create()
+    databases_to_delete.append(temp_db)
+    operation.result(DBAPI_OPERATION_TIMEOUT)  # raises on failure / timeout.
+
+    database_ids = [database.name for database in shared_instance.list_databases()]
+    assert temp_db.name in database_ids
+
+    temp_db.reload()
+    assert temp_db.version_retention_period == retention_period
+
+    with temp_db.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT OPTION_VALUE AS version_retention_period "
+            "FROM INFORMATION_SCHEMA.DATABASE_OPTIONS "
+            "WHERE SCHEMA_NAME = '' "
+            "AND OPTION_NAME = 'version_retention_period'"
+        )
+        for result in results:
+            assert result[0] == retention_period
+
+
+def test_create_database_with_default_leader_success(
+    not_emulator,  # Default leader setting not supported by the emulator
+    not_postgres,
+    multiregion_instance,
+    databases_to_delete,
+):
+    pool = spanner_v1.BurstyPool(labels={"testcase": "create_database_default_leader"})
+
+    temp_db_id = _helpers.unique_id("dflt_ldr_db", separator="_")
+    default_leader = "us-east4"
+    ddl_statements = [
+        f"ALTER DATABASE {temp_db_id}"
+        f" SET OPTIONS (default_leader = '{default_leader}')"
+    ]
+    temp_db = multiregion_instance.database(
+        temp_db_id, pool=pool, ddl_statements=ddl_statements
+    )
+    operation = temp_db.create()
+    databases_to_delete.append(temp_db)
+    operation.result(30)  # raises on failure / timeout.
+
+    database_ids = [database.name for database in multiregion_instance.list_databases()]
+    assert temp_db.name in database_ids
+
+    temp_db.reload()
+    assert temp_db.default_leader == default_leader
+
+    with temp_db.snapshot() as snapshot:
+        results = snapshot.execute_sql(
+            "SELECT OPTION_VALUE AS default_leader "
+            "FROM INFORMATION_SCHEMA.DATABASE_OPTIONS "
+            "WHERE SCHEMA_NAME = '' AND OPTION_NAME = 'default_leader'"
+        )
+        for result in results:
+            assert result[0] == default_leader
+
+
+def test_iam_policy(
+    not_emulator,
+    shared_instance,
+    databases_to_delete,
+):
+    pool = spanner_v1.BurstyPool(labels={"testcase": "iam_policy"})
+    temp_db_id = _helpers.unique_id("iam_db", separator="_")
+    create_table = (
+        "CREATE TABLE policy (\n"
+        + "    Id STRING(36) NOT NULL,\n"
+        + "    Field1 STRING(36) NOT NULL\n"
+        + ") PRIMARY KEY (Id)"
+    )
+    create_role = "CREATE ROLE parent"
+
+    temp_db = shared_instance.database(
+        temp_db_id,
+        ddl_statements=[create_table, create_role],
+        pool=pool,
+    )
+    create_op = temp_db.create()
+    databases_to_delete.append(temp_db)
+    create_op.result(DBAPI_OPERATION_TIMEOUT)
+    policy = temp_db.get_iam_policy(3)
+
+    assert policy.version == 0
+    assert policy.etag == b"\x00 \x01"
+
+    new_binding = policy_pb2.Binding(
+        role="roles/spanner.fineGrainedAccessUser",
+        members=["user:asthamohta@google.com"],
+        condition=expr_pb2.Expr(
+            title="condition title",
+            expression='resource.name.endsWith("/databaseRoles/parent")',
+        ),
+    )
+
+    policy.version = 3
+    policy.bindings.append(new_binding)
+    temp_db.set_iam_policy(policy)
+
+    new_policy = temp_db.get_iam_policy(3)
+    assert new_policy.version == 3
+    assert len(new_policy.bindings) == 1
+    assert new_policy.bindings[0] == new_binding
+
+
+def test_table_not_found(shared_instance):
+    temp_db_id = _helpers.unique_id("tbl_not_found", separator="_")
+
+    correct_table = "MyTable"
+    incorrect_table = "NotMyTable"
+
+    create_table = (
+        f"CREATE TABLE {correct_table} (\n"
+        f"    Id STRING(36) NOT NULL,\n"
+        f"    Field1 STRING(36) NOT NULL\n"
+        f") PRIMARY KEY (Id)"
+    )
+    create_index = f"CREATE INDEX IDX ON {incorrect_table} (Field1)"
+
+    temp_db = shared_instance.database(
+        temp_db_id, ddl_statements=[create_table, create_index]
+    )
+    with pytest.raises(exceptions.NotFound):
+        temp_db.create()
+
+
+def test_update_ddl_w_operation_id(
+    shared_instance, databases_to_delete, database_dialect, proto_descriptor_file
+):
+    # We used to have:
+    # @pytest.mark.skip(
+    #    reason="'Database.update_ddl' has a flaky timeout. See: "
+    #    https://github.com/GoogleCloudPlatform/google-cloud-python/issues/5629
+    # )
+    pool = spanner_v1.BurstyPool(labels={"testcase": "update_database_ddl"})
+    temp_db_id = _helpers.unique_id("update_ddl", separator="_")
+    temp_db = shared_instance.database(
+        temp_db_id, pool=pool, database_dialect=database_dialect
+    )
+    create_op = temp_db.create()
+    databases_to_delete.append(temp_db)
+    create_op.result(DBAPI_OPERATION_TIMEOUT)  # raises on failure / timeout.
+
+    # Random, but short-ish; operation IDs must always start with a letter.
+    operation_id = f"a{str(uuid.uuid4())[:8]}"
+    operation = temp_db.update_ddl(
+        _helpers.DDL_STATEMENTS,
+        operation_id=operation_id,
+        proto_descriptors=proto_descriptor_file,
+    )
+
+    assert operation_id == operation.operation.name.split("/")[-1]
+
+    operation.result(DBAPI_OPERATION_TIMEOUT)  # raises on failure / timeout.
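+
+    # Reloading refreshes the database's cached metadata, including the DDL
+    # statements that were just applied.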
+    temp_db.reload()
+
+    assert len(temp_db.ddl_statements) == len(_helpers.DDL_STATEMENTS)
+
+
+def test_update_ddl_w_pitr_invalid(
+    not_emulator,
+    not_postgres,
+    shared_instance,
+    databases_to_delete,
+    proto_descriptor_file,
+):
+    pool = spanner_v1.BurstyPool(labels={"testcase": "update_database_ddl_pitr"})
+    temp_db_id = _helpers.unique_id("pitr_upd_ddl_inv", separator="_")
+    retention_period = "0d"
+    temp_db = shared_instance.database(temp_db_id, pool=pool)
+
+    create_op = temp_db.create()
+    databases_to_delete.append(temp_db)
+    create_op.result(DBAPI_OPERATION_TIMEOUT)  # raises on failure / timeout.
+
+    assert temp_db.version_retention_period is None
+
+    ddl_statements = _helpers.DDL_STATEMENTS + [
+        f"ALTER DATABASE {temp_db_id}"
+        f" SET OPTIONS (version_retention_period = '{retention_period}')"
+    ]
+    with pytest.raises(exceptions.InvalidArgument):
+        temp_db.update_ddl(ddl_statements, proto_descriptors=proto_descriptor_file)
+
+
+def test_update_ddl_w_pitr_success(
+    not_emulator,
+    not_postgres,
+    shared_instance,
+    databases_to_delete,
+    proto_descriptor_file,
+):
+    pool = spanner_v1.BurstyPool(labels={"testcase": "update_database_ddl_pitr"})
+    temp_db_id = _helpers.unique_id("pitr_upd_ddl", separator="_")
+    retention_period = "7d"
+    temp_db = shared_instance.database(temp_db_id, pool=pool)
+
+    create_op = temp_db.create()
+    databases_to_delete.append(temp_db)
+    create_op.result(DBAPI_OPERATION_TIMEOUT)  # raises on failure / timeout.
+
+    assert temp_db.version_retention_period is None
+
+    ddl_statements = _helpers.DDL_STATEMENTS + [
+        f"ALTER DATABASE {temp_db_id}"
+        f" SET OPTIONS (version_retention_period = '{retention_period}')"
+    ]
+    operation = temp_db.update_ddl(
+        ddl_statements, proto_descriptors=proto_descriptor_file
+    )
+    operation.result(DBAPI_OPERATION_TIMEOUT)  # raises on failure / timeout.
+
+    temp_db.reload()
+    assert temp_db.version_retention_period == retention_period
+    assert len(temp_db.ddl_statements) == len(ddl_statements)
+
+
+def test_update_ddl_w_default_leader_success(
+    not_emulator,
+    not_postgres,
+    multiregion_instance,
+    databases_to_delete,
+    proto_descriptor_file,
+):
+    pool = spanner_v1.BurstyPool(
+        labels={"testcase": "update_database_ddl_default_leader"},
+    )
+
+    temp_db_id = _helpers.unique_id("dfl_ldrr_upd_ddl", separator="_")
+    default_leader = "us-east4"
+    temp_db = multiregion_instance.database(temp_db_id, pool=pool)
+
+    create_op = temp_db.create()
+    databases_to_delete.append(temp_db)
+    create_op.result(DBAPI_OPERATION_TIMEOUT)  # raises on failure / timeout.
+
+    assert temp_db.default_leader is None
+
+    ddl_statements = _helpers.DDL_STATEMENTS + [
+        f"ALTER DATABASE {temp_db_id}"
+        f" SET OPTIONS (default_leader = '{default_leader}')"
+    ]
+    operation = temp_db.update_ddl(
+        ddl_statements, proto_descriptors=proto_descriptor_file
+    )
+    operation.result(DBAPI_OPERATION_TIMEOUT)  # raises on failure / timeout.
+
+    temp_db.reload()
+    assert temp_db.default_leader == default_leader
+    assert len(temp_db.ddl_statements) == len(ddl_statements)
+
+
+def test_create_role_grant_access_success(
+    not_emulator,
+    shared_instance,
+    databases_to_delete,
+    database_dialect,
+    proto_descriptor_file,
+):
+    creator_role_parent = _helpers.unique_id("role_parent", separator="_")
+    creator_role_orphan = _helpers.unique_id("role_orphan", separator="_")
+
+    temp_db_id = _helpers.unique_id("role_grant_db", separator="_")
+    temp_db = shared_instance.database(temp_db_id, database_dialect=database_dialect)
+
+    create_op = temp_db.create()
+    databases_to_delete.append(temp_db)
+    create_op.result(DBAPI_OPERATION_TIMEOUT)  # raises on failure / timeout.
+    # Create roles and grant select permission on table contacts to the
+    # parent role.
+    if database_dialect == DatabaseDialect.GOOGLE_STANDARD_SQL:
+        ddl_statements = _helpers.DDL_STATEMENTS + [
+            f"CREATE ROLE {creator_role_parent}",
+            f"CREATE ROLE {creator_role_orphan}",
+            f"GRANT SELECT ON TABLE contacts TO ROLE {creator_role_parent}",
+        ]
+    elif database_dialect == DatabaseDialect.POSTGRESQL:
+        ddl_statements = _helpers.DDL_STATEMENTS + [
+            f"CREATE ROLE {creator_role_parent}",
+            f"CREATE ROLE {creator_role_orphan}",
+            f"GRANT SELECT ON TABLE contacts TO {creator_role_parent}",
+        ]
+
+    operation = temp_db.update_ddl(
+        ddl_statements, proto_descriptors=proto_descriptor_file
+    )
+    operation.result(DBAPI_OPERATION_TIMEOUT)  # raises on failure / timeout.
+
+    # Perform a select with the orphan role on table contacts.
+    # Expect a PermissionDenied exception.
+    temp_db = shared_instance.database(temp_db_id, database_role=creator_role_orphan)
+    with pytest.raises(exceptions.PermissionDenied):
+        with temp_db.snapshot() as snapshot:
+            results = snapshot.execute_sql("SELECT * FROM contacts")
+            for row in results:
+                pass
+
+    # Perform a select with the parent role on table contacts. Expect success.
+    temp_db = shared_instance.database(temp_db_id, database_role=creator_role_parent)
+    with temp_db.snapshot() as snapshot:
+        snapshot.execute_sql("SELECT * FROM contacts")
+
+    if database_dialect == DatabaseDialect.GOOGLE_STANDARD_SQL:
+        ddl_remove_roles = [
+            f"REVOKE SELECT ON TABLE contacts FROM ROLE {creator_role_parent}",
+            f"DROP ROLE {creator_role_parent}",
+            f"DROP ROLE {creator_role_orphan}",
+        ]
+    elif database_dialect == DatabaseDialect.POSTGRESQL:
+        ddl_remove_roles = [
+            f"REVOKE SELECT ON TABLE contacts FROM {creator_role_parent}",
+            f"DROP ROLE {creator_role_parent}",
+            f"DROP ROLE {creator_role_orphan}",
+        ]
+    # Revoke the permission and delete the roles.
+    operation = temp_db.update_ddl(ddl_remove_roles)
+    operation.result(DBAPI_OPERATION_TIMEOUT)  # raises on failure / timeout.
+
+
+def test_list_database_role_success(
+    not_emulator,
+    shared_instance,
+    databases_to_delete,
+    database_dialect,
+    proto_descriptor_file,
+):
+    creator_role_parent = _helpers.unique_id("role_parent", separator="_")
+    creator_role_orphan = _helpers.unique_id("role_orphan", separator="_")
+
+    temp_db_id = _helpers.unique_id("list_role_db", separator="_")
+    temp_db = shared_instance.database(temp_db_id, database_dialect=database_dialect)
+
+    create_op = temp_db.create()
+    databases_to_delete.append(temp_db)
+    create_op.result(DBAPI_OPERATION_TIMEOUT)  # raises on failure / timeout.
+
+    # Create two roles to list below.
+ ddl_statements = _helpers.DDL_STATEMENTS + [ + f"CREATE ROLE {creator_role_parent}", + f"CREATE ROLE {creator_role_orphan}", + ] + operation = temp_db.update_ddl( + ddl_statements, proto_descriptors=proto_descriptor_file + ) + operation.result(DBAPI_OPERATION_TIMEOUT) # raises on failure / timeout. + + # List database roles. + roles_list = [] + for role in temp_db.list_database_roles(): + roles_list.append(role.name.split("/")[-1]) + assert creator_role_parent in roles_list + assert creator_role_orphan in roles_list + + +def test_db_batch_insert_then_db_snapshot_read(shared_database): + _helpers.retry_has_all_dll(shared_database.reload)() + sd = _sample_data + + with shared_database.batch() as batch: + batch.delete(sd.TABLE, sd.ALL) + batch.insert(sd.TABLE, sd.COLUMNS, sd.ROW_DATA) + + with shared_database.snapshot(read_timestamp=batch.committed) as snapshot: + from_snap = list(snapshot.read(sd.TABLE, sd.COLUMNS, sd.ALL)) + + sd._check_rows_data(from_snap) + + +def test_db_run_in_transaction_then_snapshot_execute_sql(shared_database): + _helpers.retry_has_all_dll(shared_database.reload)() + sd = _sample_data + + with shared_database.batch() as batch: + batch.delete(sd.TABLE, sd.ALL) + + def _unit_of_work(transaction, test): + # TODO: Remove query and execute a read instead when the Emulator has been fixed + # and returns pre-commit tokens for streaming read results. + rows = list(transaction.execute_sql(sd.SQL)) + # rows = list(transaction.read(test.TABLE, test.COLUMNS, sd.ALL)) + assert rows == [] + + transaction.insert_or_update(test.TABLE, test.COLUMNS, test.ROW_DATA) + + shared_database.run_in_transaction(_unit_of_work, test=sd) + + with shared_database.snapshot() as after: + rows = list(after.execute_sql(sd.SQL)) + + sd._check_rows_data(rows) + + +def test_db_run_in_transaction_twice(shared_database): + _helpers.retry_has_all_dll(shared_database.reload)() + sd = _sample_data + + with shared_database.batch() as batch: + batch.delete(sd.TABLE, sd.ALL) + + def _unit_of_work(transaction, test): + transaction.insert_or_update(test.TABLE, test.COLUMNS, test.ROW_DATA) + + shared_database.run_in_transaction(_unit_of_work, test=sd) + shared_database.run_in_transaction(_unit_of_work, test=sd) + + with shared_database.snapshot() as after: + rows = list(after.execute_sql(sd.SQL)) + sd._check_rows_data(rows) + + +def test_db_run_in_transaction_twice_4181(shared_database): + # See https://github.com/googleapis/google-cloud-python/issues/4181 + _helpers.retry_has_all_dll(shared_database.reload)() + sd = _sample_data + + with shared_database.batch() as batch: + batch.delete(sd.COUNTERS_TABLE, sd.ALL) + + def _unit_of_work(transaction, name): + transaction.insert(sd.COUNTERS_TABLE, sd.COUNTERS_COLUMNS, [[name, 0]]) + + shared_database.run_in_transaction(_unit_of_work, name="id_1") + + with pytest.raises(exceptions.AlreadyExists): + shared_database.run_in_transaction(_unit_of_work, name="id_1") + + shared_database.run_in_transaction(_unit_of_work, name="id_2") + + with shared_database.snapshot() as after: + rows = list(after.read(sd.COUNTERS_TABLE, sd.COUNTERS_COLUMNS, sd.ALL)) + + assert len(rows) == 2 + + +def test_insertion_in_referencing_table_fkadc(not_emulator, shared_database): + with shared_database.batch() as batch: + batch.insert( + table="Customers", + columns=FKADC_CUSTOMERS_COLUMNS, + values=[ + (1, "Marc"), + (2, "Catalina"), + ], + ) + + with shared_database.batch() as batch: + batch.insert( + table="ShoppingCarts", + columns=FKADC_SHOPPING_CARTS_COLUMNS, + values=[ + (1, 1, "Marc"), + 
], + ) + + with shared_database.snapshot() as snapshot: + rows = list( + snapshot.read( + "ShoppingCarts", ("CartId", "CustomerId", "CustomerName"), ALL_KEYSET + ) + ) + + assert len(rows) == 1 + + +def test_insertion_in_referencing_table_error_fkadc(not_emulator, shared_database): + with pytest.raises(exceptions.FailedPrecondition): + with shared_database.batch() as batch: + batch.insert( + table="ShoppingCarts", + columns=FKADC_SHOPPING_CARTS_COLUMNS, + values=[ + (4, 4, "Naina"), + ], + ) + + +def test_insertion_then_deletion_in_referenced_table_fkadc( + not_emulator, shared_database +): + with shared_database.batch() as batch: + batch.insert( + table="Customers", + columns=FKADC_CUSTOMERS_COLUMNS, + values=[ + (3, "Sara"), + ], + ) + + with shared_database.batch() as batch: + batch.insert( + table="ShoppingCarts", + columns=FKADC_SHOPPING_CARTS_COLUMNS, + values=[ + (3, 3, "Sara"), + ], + ) + + with shared_database.snapshot() as snapshot: + rows = list(snapshot.read("ShoppingCarts", ["CartId"], ALL_KEYSET)) + + assert [3] in rows + + with shared_database.batch() as batch: + batch.delete(table="Customers", keyset=spanner_v1.KeySet(keys=[[3]])) + + with shared_database.snapshot() as snapshot: + rows = list(snapshot.read("ShoppingCarts", ["CartId"], ALL_KEYSET)) + + assert [3] not in rows + + +def test_insert_then_delete_referenced_key_error_fkadc(not_emulator, shared_database): + with pytest.raises(exceptions.FailedPrecondition): + with shared_database.batch() as batch: + batch.insert( + table="Customers", + columns=FKADC_CUSTOMERS_COLUMNS, + values=[ + (3, "Sara"), + ], + ) + batch.delete(table="Customers", keyset=spanner_v1.KeySet(keys=[[3]])) + + +def test_insert_referencing_key_then_delete_referenced_key_error_fkadc( + not_emulator, shared_database +): + with shared_database.batch() as batch: + batch.insert( + table="Customers", + columns=FKADC_CUSTOMERS_COLUMNS, + values=[ + (4, "Huda"), + ], + ) + + with pytest.raises(exceptions.FailedPrecondition): + with shared_database.batch() as batch: + batch.insert( + table="ShoppingCarts", + columns=FKADC_SHOPPING_CARTS_COLUMNS, + values=[ + (4, 4, "Huda"), + ], + ) + batch.delete(table="Customers", keyset=spanner_v1.KeySet(keys=[[4]])) + + +def test_information_schema_referential_constraints_fkadc( + not_emulator, shared_database +): + with shared_database.snapshot() as snapshot: + rows = list( + snapshot.execute_sql( + "SELECT DELETE_RULE " + "FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS " + "WHERE CONSTRAINT_NAME = 'FKShoppingCartsCustomerId'" + ) + ) + + assert any("CASCADE" in stmt for stmt in rows) + + +def test_update_database_success( + not_emulator, shared_database, shared_instance, database_operation_timeout +): + old_protection = shared_database.enable_drop_protection + new_protection = True + shared_database.enable_drop_protection = new_protection + operation = shared_database.update(["enable_drop_protection"]) + + # We want to make sure the operation completes. + operation.result(database_operation_timeout) # raises on failure / timeout. + + # Create a new database instance and reload it. 
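+    # A fresh handle starts out with default attribute values; only reload()
+    # pulls in the server-side state, which the next two asserts verify.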
+ database_alt = shared_instance.database(shared_database.name.split("/")[-1]) + assert database_alt.enable_drop_protection != new_protection + + database_alt.reload() + assert database_alt.enable_drop_protection == new_protection + + with pytest.raises(exceptions.FailedPrecondition): + database_alt.drop() + + with pytest.raises(exceptions.FailedPrecondition): + shared_instance.delete() + + # Make sure to put the database back the way it was for the + # other test cases. + shared_database.enable_drop_protection = old_protection + shared_database.update(["enable_drop_protection"]) + + +def test_update_database_invalid(not_emulator, shared_database): + shared_database.enable_drop_protection = True + + # Empty `fields` is not supported. + with pytest.raises(exceptions.InvalidArgument): + shared_database.update([]) + + +def test_snapshot_read_w_directed_read_options( + shared_database, not_postgres, not_emulator +): + _helpers.retry_has_all_dll(shared_database.reload)() + table = "users_history" + columns = ["id", "commit_ts", "name", "email", "deleted"] + user_id = 1234 + name = "phred" + email = "phred@example.com" + row_data = [[user_id, spanner_v1.COMMIT_TIMESTAMP, name, email, False]] + sd = _sample_data + + with shared_database.batch() as batch: + batch.delete(table, sd.ALL) + batch.insert(table, columns, row_data) + + with shared_database.snapshot() as snapshot: + rows = list( + snapshot.read( + table, columns, sd.ALL, directed_read_options=DIRECTED_READ_OPTIONS + ) + ) + + assert len(rows) == 1 + + +def test_execute_sql_w_directed_read_options( + shared_database, not_postgres, not_emulator +): + _helpers.retry_has_all_dll(shared_database.reload)() + sd = _sample_data + + with shared_database.batch() as batch: + batch.delete(sd.TABLE, sd.ALL) + + def _unit_of_work(transaction, test): + transaction.insert_or_update(test.TABLE, test.COLUMNS, test.ROW_DATA) + + shared_database.run_in_transaction(_unit_of_work, test=sd) + + with shared_database.snapshot() as snapshot: + rows = list( + snapshot.execute_sql(sd.SQL, directed_read_options=DIRECTED_READ_OPTIONS) + ) + sd._check_rows_data(rows) + + +def test_readwrite_transaction_w_directed_read_options_w_error( + shared_database, not_emulator, not_postgres +): + _helpers.retry_has_all_dll(shared_database.reload)() + sd = _sample_data + + def _transaction_read(transaction): + list( + transaction.read( + sd.TABLE, + sd.COLUMNS, + sd.ALL, + directed_read_options=DIRECTED_READ_OPTIONS, + ) + ) + + with pytest.raises(exceptions.InvalidArgument): + shared_database.run_in_transaction(_transaction_read) + + +def test_db_batch_insert_w_max_commit_delay(shared_database): + _helpers.retry_has_all_dll(shared_database.reload)() + sd = _sample_data + + with shared_database.batch( + max_commit_delay=datetime.timedelta(milliseconds=100) + ) as batch: + batch.delete(sd.TABLE, sd.ALL) + batch.insert(sd.TABLE, sd.COLUMNS, sd.ROW_DATA) + + with shared_database.snapshot(read_timestamp=batch.committed) as snapshot: + from_snap = list(snapshot.read(sd.TABLE, sd.COLUMNS, sd.ALL)) + + sd._check_rows_data(from_snap) + + +def test_db_run_in_transaction_w_max_commit_delay(shared_database): + _helpers.retry_has_all_dll(shared_database.reload)() + sd = _sample_data + + with shared_database.batch() as batch: + batch.delete(sd.TABLE, sd.ALL) + + def _unit_of_work(transaction, test): + # TODO: Remove query and execute a read instead when the Emulator has been fixed + # and returns pre-commit tokens for streaming read results. 
+ rows = list(transaction.execute_sql(sd.SQL)) + # rows = list(transaction.read(test.TABLE, test.COLUMNS, sd.ALL)) + assert rows == [] + + transaction.insert_or_update(test.TABLE, test.COLUMNS, test.ROW_DATA) + + shared_database.run_in_transaction( + _unit_of_work, test=sd, max_commit_delay=datetime.timedelta(milliseconds=100) + ) + + with shared_database.snapshot() as after: + rows = list(after.execute_sql(sd.SQL)) + + sd._check_rows_data(rows) + + +def test_create_table_with_proto_columns( + not_postgres, + shared_instance, + databases_to_delete, + proto_descriptor_file, +): + proto_cols_db_id = _helpers.unique_id("proto-columns") + extra_ddl = [ + "CREATE PROTO BUNDLE (examples.spanner.music.SingerInfo, examples.spanner.music.Genre,)" + ] + + proto_cols_database = shared_instance.database( + proto_cols_db_id, + ddl_statements=extra_ddl + _helpers.PROTO_COLUMNS_DDL_STATEMENTS, + proto_descriptors=proto_descriptor_file, + ) + operation = proto_cols_database.create() + operation.result(DBAPI_OPERATION_TIMEOUT) # raises on failure / timeout. + + databases_to_delete.append(proto_cols_database) + + proto_cols_database.reload() + assert proto_cols_database.proto_descriptors is not None + assert any("PROTO BUNDLE" in stmt for stmt in proto_cols_database.ddl_statements) diff --git a/tests/system/test_dbapi.py b/tests/system/test_dbapi.py new file mode 100644 index 0000000000..4cc718e275 --- /dev/null +++ b/tests/system/test_dbapi.py @@ -0,0 +1,1661 @@ +# Copyright 2021 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import base64 +import datetime +from collections import defaultdict + +import pytest +import time +import decimal + +from google.cloud import spanner_v1 +from google.cloud._helpers import UTC + +from google.cloud.spanner_dbapi.connection import Connection, connect +from google.cloud.spanner_dbapi.exceptions import ( + ProgrammingError, + OperationalError, + RetryAborted, +) +from google.cloud.spanner_dbapi.parsed_statement import AutocommitDmlMode +from google.cloud.spanner_v1 import JsonObject +from google.cloud.spanner_v1 import gapic_version as package_version +from google.api_core.datetime_helpers import DatetimeWithNanoseconds + +from google.cloud.spanner_v1.database_sessions_manager import TransactionType +from . 
import _helpers +from tests._helpers import is_multiplexed_enabled + +DATABASE_NAME = "dbapi-txn" +SPANNER_RPC_PREFIX = "/google.spanner.v1.Spanner/" +EXECUTE_BATCH_DML_METHOD = SPANNER_RPC_PREFIX + "ExecuteBatchDml" +COMMIT_METHOD = SPANNER_RPC_PREFIX + "Commit" +EXECUTE_SQL_METHOD = SPANNER_RPC_PREFIX + "ExecuteSql" +EXECUTE_STREAMING_SQL_METHOD = SPANNER_RPC_PREFIX + "ExecuteStreamingSql" + +DDL = """CREATE TABLE contacts ( + contact_id INT64, + first_name STRING(1024), + last_name STRING(1024), + email STRING(1024) + ) + PRIMARY KEY (contact_id); + CREATE VIEW contacts_emails + SQL SECURITY INVOKER + AS + SELECT c.email + FROM contacts AS c; + + CREATE TABLE all_types ( + id int64, + col_bool bool, + col_bytes bytes(max), + col_date date, + col_float32 float32, + col_float64 float64, + col_int64 int64, + col_json json, + col_numeric numeric, + col_string string(max), + coL_timestamp timestamp, + ) primary key (col_int64); + """ + +DDL_STATEMENTS = [stmt.strip() for stmt in DDL.split(";") if stmt.strip()] + + +@pytest.fixture(scope="session") +def raw_database(shared_instance, database_operation_timeout, not_postgres): + database_id = _helpers.unique_id("dbapi-txn") + pool = spanner_v1.BurstyPool(labels={"testcase": "database_api"}) + database = shared_instance.database( + database_id, + ddl_statements=DDL_STATEMENTS, + pool=pool, + enable_interceptors_in_tests=True, + ) + op = database.create() + op.result(database_operation_timeout) # raises on failure / timeout. + + yield database + + database.drop() + + +class TestDbApi: + @staticmethod + def clear_table(transaction): + transaction.execute_update("DELETE FROM contacts WHERE true") + + @pytest.fixture(scope="function") + def dbapi_database(self, raw_database): + # Resetting the count so that each test gives correct count of the api + # methods called during that test + raw_database._method_count_interceptor._counts = defaultdict(int) + raw_database.run_in_transaction(self.clear_table) + + yield raw_database + + raw_database.run_in_transaction(self.clear_table) + + @pytest.fixture(autouse=True) + def init_connection(self, request, shared_instance, dbapi_database): + if "noautofixt" not in request.keywords: + self._conn = Connection(shared_instance, dbapi_database) + self._cursor = self._conn.cursor() + yield + if "noautofixt" not in request.keywords: + self._cursor.close() + self._conn.close() + + def _execute_common_statements(self, cursor): + # execute several DML statements within one transaction + cursor.execute( + """ + INSERT INTO contacts (contact_id, first_name, last_name, email) + VALUES (1, 'first-name', 'last-name', 'test.email@domen.ru') + """ + ) + cursor.execute( + """ + UPDATE contacts + SET first_name = 'updated-first-name' + WHERE first_name = 'first-name' + """ + ) + cursor.execute( + """ + UPDATE contacts + SET email = 'test.email_updated@domen.ru' + WHERE email = 'test.email@domen.ru' + """ + ) + return ( + 1, + "updated-first-name", + "last-name", + "test.email_updated@domen.ru", + ) + + @pytest.mark.parametrize("client_side", [True, False]) + def test_commit(self, client_side): + """Test committing a transaction with several statements.""" + updated_row = self._execute_common_statements(self._cursor) + if client_side: + self._cursor.execute("""COMMIT""") + else: + self._conn.commit() + + # read the resulting data from the database + self._cursor.execute("SELECT * FROM contacts") + got_rows = self._cursor.fetchall() + self._conn.commit() + + assert got_rows == [updated_row] + + @pytest.mark.skipif( + 
+        _helpers.USE_EMULATOR,
+        reason="Emulator does not support multiple parallel transactions.",
+    )
+    def test_commit_exception(self):
+        """Test that if an exception raised by the commit method is caught,
+        subsequent operations on the same Cursor and Connection objects work
+        properly."""
+
+        if is_multiplexed_enabled(transaction_type=TransactionType.READ_WRITE):
+            pytest.skip(
+                "Multiplexed sessions can't be deleted, and this test relies on session deletion."
+            )
+
+        self._execute_common_statements(self._cursor)
+        # Delete the session to make the commit fail.
+        self._conn._session.delete()
+        try:
+            self._conn.commit()
+        except Exception:
+            pass
+
+        # Test that the connection and cursor are in a proper state post-commit
+        # and that a new transaction is started.
+        updated_row = self._execute_common_statements(self._cursor)
+        self._cursor.execute("SELECT * FROM contacts")
+        got_rows = self._cursor.fetchall()
+        self._conn.commit()
+
+        assert got_rows == [updated_row]
+
+    @pytest.mark.skipif(
+        _helpers.USE_EMULATOR,
+        reason="Emulator does not support multiple parallel transactions.",
+    )
+    def test_rollback_exception(self):
+        """Test that if an exception raised by the rollback method is caught,
+        subsequent operations on the same Cursor and Connection objects work
+        properly."""
+        self._execute_common_statements(self._cursor)
+        # Delete the session to make the rollback fail.
+        self._conn._session.delete()
+        try:
+            self._conn.rollback()
+        except Exception:
+            pass
+
+        # Test that the connection and cursor are in a proper state after the
+        # exception in rollback and that a new transaction is started.
+        updated_row = self._execute_common_statements(self._cursor)
+        self._cursor.execute("SELECT * FROM contacts")
+        got_rows = self._cursor.fetchall()
+        self._conn.commit()
+
+        assert got_rows == [updated_row]
+
+    def test_cursor_execute_exception(self):
+        """Test that if an exception in the Cursor's execute method is caught
+        while the Connection is not in autocommit mode, subsequent operations
+        on the same Cursor and Connection objects work properly."""
+        updated_row = self._execute_common_statements(self._cursor)
+        try:
+            self._cursor.execute("SELECT * FROM unknown_table")
+        except Exception:
+            pass
+        self._cursor.execute("SELECT * FROM contacts")
+        got_rows = self._cursor.fetchall()
+        self._conn.commit()
+        assert got_rows == [updated_row]
+
+        # Test that the connection and cursor are in a proper state post-commit
+        # and that a new transaction is started.
+        self._cursor.execute("SELECT * FROM contacts")
+        got_rows = self._cursor.fetchall()
+        self._conn.commit()
+        assert got_rows == [updated_row]
+
+    def test_cursor_execute_exception_autocommit(self):
+        """Test that if an exception in the Cursor's execute method is caught
+        while the Connection is in autocommit mode, subsequent operations on
+        the same Cursor and Connection objects work properly."""
+        self._conn.autocommit = True
+        updated_row = self._execute_common_statements(self._cursor)
+        try:
+            self._cursor.execute("SELECT * FROM unknown_table")
+        except Exception:
+            pass
+        self._cursor.execute("SELECT * FROM contacts")
+        got_rows = self._cursor.fetchall()
+        assert got_rows == [updated_row]
+
+    def test_cursor_execute_exception_begin_client_side(self):
+        """Test that if an exception in the Cursor's execute method is caught
+        after beginning a transaction with a client side statement, subsequent
+        operations on the same Cursor and Connection objects work properly."""
+        self._conn.autocommit = True
+        self._cursor.execute("begin transaction")
+        updated_row = self._execute_common_statements(self._cursor)
+        try:
+            self._cursor.execute("SELECT * FROM unknown_table")
+        except Exception:
+            pass
+        self._cursor.execute("SELECT * FROM contacts")
+        got_rows = self._cursor.fetchall()
+        self._conn.commit()
+        assert got_rows == [updated_row]
+
+        # Test that the connection and cursor are in a proper state post-commit.
+        self._conn.autocommit = False
+        self._cursor.execute("SELECT * FROM contacts")
+        got_rows = self._cursor.fetchall()
+        self._conn.commit()
+        assert got_rows == [updated_row]
+
+    @pytest.mark.noautofixt
+    def test_begin_client_side(self, shared_instance, dbapi_database):
+        """Test beginning a transaction using a client side statement, where
+        the connection is in autocommit mode."""
+
+        conn1 = Connection(shared_instance, dbapi_database)
+        conn1.autocommit = True
+        cursor1 = conn1.cursor()
+        cursor1.execute("begin transaction")
+        updated_row = self._execute_common_statements(cursor1)
+
+        assert conn1._transaction_begin_marked is True
+        conn1.commit()
+        assert conn1._transaction_begin_marked is False
+        cursor1.close()
+        conn1.close()
+
+        # As conn1 was committed, a new connection should see its results.
+        conn3 = Connection(shared_instance, dbapi_database)
+        cursor3 = conn3.cursor()
+        cursor3.execute("SELECT * FROM contacts")
+        got_rows = cursor3.fetchall()
+        conn3.commit()
+        cursor3.close()
+        conn3.close()
+        assert got_rows == [updated_row]
+
+    def test_noop_sql_statements(self, dbapi_database):
+        """Test that beginning and then committing a transaction is a no-op."""
+        dbapi_database._method_count_interceptor.reset()
+        self._cursor.execute("begin transaction")
+        self._cursor.execute("commit transaction")
+        assert dbapi_database._method_count_interceptor._counts == {}
+        self._cursor.execute("SELECT * FROM contacts")
+        self._conn.commit()
+        assert self._cursor.fetchall() == []
+
+        # Test that beginning and then rolling back a transaction is a no-op.
+        dbapi_database._method_count_interceptor.reset()
+        self._cursor.execute("begin transaction")
+        self._cursor.execute("rollback transaction")
+        assert dbapi_database._method_count_interceptor._counts == {}
+        self._cursor.execute("SELECT * FROM contacts")
+        assert self._cursor.fetchall() == []
+        self._conn.commit()
+
+        dbapi_database._method_count_interceptor.reset()
+        self._cursor.execute("start batch dml")
+        self._cursor.execute("run batch")
+        assert dbapi_database._method_count_interceptor._counts == {}
+
+    def test_read_and_commit_timestamps(self):
+        """Test that COMMIT_TIMESTAMP is not available after a read statement
+        and READ_TIMESTAMP is not available after a write statement in
+        autocommit mode."""
+        self._conn.autocommit = True
+        self._cursor.execute("SELECT * FROM contacts")
+        self._cursor.execute(
+            """
+            INSERT INTO contacts (contact_id, first_name, last_name, email)
+            VALUES (1, 'first-name', 'last-name', 'test.email@domen.ru')
+            """
+        )
+
+        self._cursor.execute("SHOW VARIABLE COMMIT_TIMESTAMP")
+        got_rows = self._cursor.fetchall()
+        assert len(got_rows) == 1
+
+        self._cursor.execute("SHOW VARIABLE READ_TIMESTAMP")
+        got_rows = self._cursor.fetchall()
+        assert len(got_rows) == 0
+
+        self._cursor.execute("SELECT * FROM contacts")
+
+        self._cursor.execute("SHOW VARIABLE COMMIT_TIMESTAMP")
+        got_rows = self._cursor.fetchall()
+        assert len(got_rows) == 0
+
+        self._cursor.execute("SHOW VARIABLE READ_TIMESTAMP")
+        got_rows = self._cursor.fetchall()
+        assert len(got_rows) == 1
+
+    def test_commit_timestamp_client_side_transaction(self):
+        """Test executing the SHOW_COMMIT_TIMESTAMP client side statement in a
+        transaction."""
+
+        self._cursor.execute(
+            """
INSERT INTO contacts (contact_id, first_name, last_name, email) + VALUES (1, 'first-name', 'last-name', 'test.email@domen.ru') + """ + ) + self._cursor.execute("SHOW VARIABLE COMMIT_TIMESTAMP") + got_rows = self._cursor.fetchall() + # As the connection is not committed we will get 0 rows + assert len(got_rows) == 0 + assert len(self._cursor.description) == 1 + + self._cursor.execute( + """ + INSERT INTO contacts (contact_id, first_name, last_name, email) + VALUES (2, 'first-name', 'last-name', 'test.email@domen.ru') + """ + ) + self._conn.commit() + self._cursor.execute("SHOW VARIABLE COMMIT_TIMESTAMP") + + got_rows = self._cursor.fetchall() + assert len(got_rows) == 1 + assert len(got_rows[0]) == 1 + assert len(self._cursor.description) == 1 + assert self._cursor.description[0].name == "SHOW_COMMIT_TIMESTAMP" + assert isinstance(got_rows[0][0], DatetimeWithNanoseconds) + + def test_commit_timestamp_client_side_autocommit(self): + """Test executing SHOW_COMMIT_TIMESTAMP client side statement in a + transaction when connection is in autocommit mode.""" + + self._conn.autocommit = True + self._cursor.execute( + """ + INSERT INTO contacts (contact_id, first_name, last_name, email) + VALUES (2, 'first-name', 'last-name', 'test.email@domen.ru') + """ + ) + self._cursor.execute("SHOW VARIABLE COMMIT_TIMESTAMP") + + got_rows = self._cursor.fetchall() + assert len(got_rows) == 1 + assert len(got_rows[0]) == 1 + assert len(self._cursor.description) == 1 + assert self._cursor.description[0].name == "SHOW_COMMIT_TIMESTAMP" + assert isinstance(got_rows[0][0], DatetimeWithNanoseconds) + + def test_read_timestamp_client_side(self): + """Test executing SHOW_READ_TIMESTAMP client side statement in a + transaction.""" + + self._conn.read_only = True + self._cursor.execute("SELECT * FROM contacts") + assert self._cursor.fetchall() == [] + + self._cursor.execute("SHOW VARIABLE READ_TIMESTAMP") + read_timestamp_query_result_1 = self._cursor.fetchall() + + self._cursor.execute("SELECT * FROM contacts") + assert self._cursor.fetchall() == [] + + self._cursor.execute("SHOW VARIABLE READ_TIMESTAMP") + read_timestamp_query_result_2 = self._cursor.fetchall() + + self._conn.commit() + + self._cursor.execute("SHOW VARIABLE READ_TIMESTAMP") + read_timestamp_query_result_3 = self._cursor.fetchall() + assert len(self._cursor.description) == 1 + assert self._cursor.description[0].name == "SHOW_READ_TIMESTAMP" + + assert ( + read_timestamp_query_result_1 + == read_timestamp_query_result_2 + == read_timestamp_query_result_3 + ) + assert len(read_timestamp_query_result_1) == 1 + assert len(read_timestamp_query_result_1[0]) == 1 + assert isinstance(read_timestamp_query_result_1[0][0], DatetimeWithNanoseconds) + + self._cursor.execute("SELECT * FROM contacts") + self._cursor.execute("SHOW VARIABLE READ_TIMESTAMP") + read_timestamp_query_result_4 = self._cursor.fetchall() + self._conn.commit() + assert read_timestamp_query_result_1 != read_timestamp_query_result_4 + + def test_read_timestamp_client_side_autocommit(self): + """Test executing SHOW_READ_TIMESTAMP client side statement in a + transaction when connection is in autocommit mode.""" + + self._conn.autocommit = True + + self._cursor.execute( + """ + INSERT INTO contacts (contact_id, first_name, last_name, email) + VALUES (2, 'first-name', 'last-name', 'test.email@domen.ru') + """ + ) + self._conn.read_only = True + self._cursor.execute("SELECT * FROM contacts") + assert self._cursor.fetchall() == [ + (2, "first-name", "last-name", "test.email@domen.ru") + ] + 
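+        # In autocommit mode each read-only statement runs in its own
+        # single-use snapshot, so a fresh read timestamp should be available
+        # after every query.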
+        self._cursor.execute("SHOW VARIABLE READ_TIMESTAMP")
+        read_timestamp_query_result_1 = self._cursor.fetchall()
+
+        assert len(read_timestamp_query_result_1) == 1
+        assert len(read_timestamp_query_result_1[0]) == 1
+        assert len(self._cursor.description) == 1
+        assert self._cursor.description[0].name == "SHOW_READ_TIMESTAMP"
+        assert isinstance(read_timestamp_query_result_1[0][0], DatetimeWithNanoseconds)
+
+        self._conn.read_only = False
+        self._insert_row(3)
+
+        self._conn.read_only = True
+        self._cursor.execute("SELECT * FROM contacts")
+        self._cursor.execute("SHOW VARIABLE READ_TIMESTAMP")
+        read_timestamp_query_result_2 = self._cursor.fetchall()
+        assert read_timestamp_query_result_1 != read_timestamp_query_result_2
+
+    @pytest.mark.parametrize("auto_commit", [False, True])
+    def test_batch_dml(self, auto_commit, dbapi_database):
+        """Test batch dml."""
+
+        method_count_interceptor = dbapi_database._method_count_interceptor
+        method_count_interceptor.reset()
+        if auto_commit:
+            self._conn.autocommit = True
+        self._insert_row(1)
+
+        self._cursor.execute("start batch dml")
+        self._insert_row(2)
+        self._insert_row(3)
+        self._cursor.execute("run batch")
+
+        self._insert_row(4)
+
+        # Test that starting another DML batch in the same transaction works.
+        self._cursor.execute("start batch dml")
+        self._insert_row(5)
+        self._insert_row(6)
+        self._cursor.execute("run batch")
+
+        if not auto_commit:
+            self._conn.commit()
+
+        self._cursor.execute("SELECT * FROM contacts")
+        # Row order is not guaranteed, so compare sorted results.
+        assert sorted(self._cursor.fetchall()) == sorted(
+            [
+                (1, "first-name-1", "last-name-1", "test.email@domen.ru"),
+                (2, "first-name-2", "last-name-2", "test.email@domen.ru"),
+                (3, "first-name-3", "last-name-3", "test.email@domen.ru"),
+                (4, "first-name-4", "last-name-4", "test.email@domen.ru"),
+                (5, "first-name-5", "last-name-5", "test.email@domen.ru"),
+                (6, "first-name-6", "last-name-6", "test.email@domen.ru"),
+            ]
+        )
+
+        # Test that starting another DML batch on the same connection
+        # post-commit works.
+        self._cursor.execute("start batch dml")
+        self._insert_row(7)
+        self._insert_row(8)
+        self._cursor.execute("run batch")
+
+        self._insert_row(9)
+
+        if not auto_commit:
+            self._conn.commit()
+
+        self._cursor.execute("SELECT * FROM contacts")
+        assert len(self._cursor.fetchall()) == 9
+        # Test that the ExecuteBatchDml RPC is called.
+        assert method_count_interceptor._counts[EXECUTE_BATCH_DML_METHOD] == 3
+
+    def test_abort_batch_dml(self):
+        """Test aborting a batch of DML statements."""
+
+        self._cursor.execute("start batch dml")
+        self._insert_row(1)
+        self._insert_row(2)
+        self._cursor.execute("abort batch")
+
+        self._insert_row(3)
+        self._conn.commit()
+
+        self._cursor.execute("SELECT * FROM contacts")
+        got_rows = self._cursor.fetchall()
+        assert len(got_rows) == 1
+        assert got_rows == [(3, "first-name-3", "last-name-3", "test.email@domen.ru")]
+
+    def test_batch_dml_invalid_statements(self):
+        """Test a DML batch containing invalid statements."""
+
+        # Test that the first statement in the batch is invalid.
+        self._cursor.execute("start batch dml")
+        self._cursor.execute(
+            """
+            INSERT INTO unknown_table (contact_id, first_name, last_name, email)
+            VALUES (2, 'first-name', 'last-name', 'test.email@domen.ru')
+            """
+        )
+        self._insert_row(1)
+        self._insert_row(2)
+        with pytest.raises(OperationalError):
+            self._cursor.execute("run batch")
+
+        # Test that the middle statement in the batch is invalid.
+        self._cursor.execute("start batch dml")
+        self._insert_row(1)
+        self._cursor.execute(
+            """
+            INSERT INTO unknown_table (contact_id, first_name, last_name, email)
+            VALUES (2, 'first-name', 'last-name', 'test.email@domen.ru')
+            """
+        )
+        self._insert_row(2)
+        with pytest.raises(OperationalError):
+            self._cursor.execute("run batch")
+
+        # Test that the last statement in the batch is invalid.
+        self._cursor.execute("start batch dml")
+        self._insert_row(1)
+        self._insert_row(2)
+        self._cursor.execute(
+            """
+            INSERT INTO unknown_table (contact_id, first_name, last_name, email)
+            VALUES (2, 'first-name', 'last-name', 'test.email@domen.ru')
+            """
+        )
+        with pytest.raises(OperationalError):
+            self._cursor.execute("run batch")
+
+    def test_partitioned_query(self):
+        """Test that a partitioned query works in read-only mode."""
+        self._cursor.execute("start batch dml")
+        for i in range(1, 11):
+            self._insert_row(i)
+        self._cursor.execute("run batch")
+        self._conn.commit()
+
+        self._conn.read_only = True
+        self._cursor.execute("PARTITION SELECT * FROM contacts")
+        partition_id_rows = self._cursor.fetchall()
+        assert len(partition_id_rows) > 0
+
+        rows = []
+        for partition_id_row in partition_id_rows:
+            self._cursor.execute("RUN PARTITION " + partition_id_row[0])
+            rows = rows + self._cursor.fetchall()
+        assert len(rows) == 10
+        self._conn.commit()
+
+    def test_partitioned_query_in_rw_transaction(self):
+        """Test that a partitioned query throws an exception when the
+        connection is in neither read-only nor autocommit mode."""
+        self._cursor.execute("start batch dml")
+        for i in range(1, 11):
+            self._insert_row(i)
+        self._cursor.execute("run batch")
+        self._conn.commit()
+
+        with pytest.raises(ProgrammingError):
+            self._cursor.execute("PARTITION SELECT * FROM contacts")
+
+    def test_partitioned_query_with_dml_query(self):
+        """Test that a partitioned query throws an exception when the SQL
+        statement is a DML statement."""
+        self._cursor.execute("start batch dml")
+        for i in range(1, 11):
+            self._insert_row(i)
+        self._cursor.execute("run batch")
+        self._conn.commit()
+
+        self._conn.read_only = True
+        with pytest.raises(ProgrammingError):
+            self._cursor.execute(
+                """
+                PARTITION INSERT INTO contacts (contact_id, first_name, last_name, email)
+                VALUES (1111, 'first-name', 'last-name', 'test.email@domen.ru')
+                """
+            )
+
+    def test_partitioned_query_in_autocommit_mode(self):
+        """Test that a partitioned query works when the connection is not in
+        read-only mode but is in autocommit mode."""
+        self._cursor.execute("start batch dml")
+        for i in range(1, 11):
+            self._insert_row(i)
+        self._cursor.execute("run batch")
+        self._conn.commit()
+
+        self._conn.autocommit = True
+        self._cursor.execute("PARTITION SELECT * FROM contacts")
+        partition_id_rows = self._cursor.fetchall()
+        assert len(partition_id_rows) > 0
+
+        rows = []
+        for partition_id_row in partition_id_rows:
+            self._cursor.execute("RUN PARTITION " + partition_id_row[0])
+            rows = rows + self._cursor.fetchall()
+        assert len(rows) == 10
+        self._conn.commit()
+
+    def test_partitioned_query_with_client_transaction_started(self):
+        """Test that a partitioned query throws an exception when the
+        connection is in autocommit mode but a transaction has been started
+        using a client side statement."""
+        self._cursor.execute("start batch dml")
+        for i in range(1, 11):
+            self._insert_row(i)
+        self._cursor.execute("run batch")
+        self._conn.commit()
+
+        self._conn.autocommit = True
+        self._cursor.execute("begin transaction")
+        with pytest.raises(ProgrammingError):
+            self._cursor.execute("PARTITION SELECT * FROM contacts")
+
+    def test_run_partitioned_query(self):
+        """Test that RUN PARTITIONED QUERY works in read-only mode."""
+        self._cursor.execute("start batch dml")
+        for i in range(1, 11):
+            self._insert_row(i)
+        self._cursor.execute("run batch")
+        self._conn.commit()
+
+        self._conn.read_only = True
+        self._cursor.execute("RUN PARTITIONED QUERY SELECT * FROM contacts")
+        assert self._cursor.description is not None
+        assert self._cursor.rowcount == -1
+        rows = self._cursor.fetchall()
+        assert len(rows) == 10
+        self._conn.commit()
+
+    def test_partitioned_dml_query(self):
+        """Test that a partitioned DML query works in autocommit mode."""
+        self._cursor.execute("start batch dml")
+        for i in range(1, 11):
+            self._insert_row(i)
+        self._cursor.execute("run batch")
+        self._conn.commit()
+
+        self._conn.autocommit = True
+        self._cursor.execute("set autocommit_dml_mode = PARTITIONED_NON_ATOMIC")
+        self._cursor.execute("DELETE FROM contacts WHERE contact_id > 3")
+        assert self._cursor.rowcount == 7
+
+        self._cursor.execute("set autocommit_dml_mode = TRANSACTIONAL")
+        assert self._conn.autocommit_dml_mode == AutocommitDmlMode.TRANSACTIONAL
+
+        self._conn.autocommit = False
+        # Test that changing autocommit_dml_mode is not allowed when the
+        # connection is not in autocommit mode.
+        with pytest.raises(ProgrammingError):
+            self._cursor.execute("set autocommit_dml_mode = PARTITIONED_NON_ATOMIC")
+
+    def _insert_row(self, i):
+        self._cursor.execute(
+            f"""
+            INSERT INTO contacts (contact_id, first_name, last_name, email)
+            VALUES ({i}, 'first-name-{i}', 'last-name-{i}', 'test.email@domen.ru')
+            """
+        )
+
+    def test_commit_abort_retry(self, dbapi_database):
+        """Test that when the commit fails with an Abort exception, the retry
+        succeeds for a transaction containing insert and query statements as
+        well as batch DML statements. This exercises execute, executemany,
+        fetchone, fetchmany, and fetchall."""
+
+        method_count_interceptor = dbapi_database._method_count_interceptor
+        method_count_interceptor.reset()
+        # called 2 times
+        self._insert_row(1)
+        # called 2 times
+        self._cursor.execute("SELECT * FROM contacts")
+        self._cursor.fetchall()
+        self._cursor.execute("start batch dml")
+        self._insert_row(2)
+        self._insert_row(3)
+        # called 2 times for the batch dml rpc
+        self._cursor.execute("run batch")
+        row_data = [
+            (4, "first-name4", "last-name4", "test.email4@example.com"),
+            (5, "first-name5", "last-name5", "test.email5@example.com"),
+        ]
+        # called 2 times for the batch dml rpc
+        self._cursor.executemany(
+            """
+            INSERT INTO contacts (contact_id, first_name, last_name, email)
+            VALUES (%s, %s, %s, %s)
+            """,
+            row_data,
+        )
+        # called 2 times; as each call makes 3 ExecuteStreamingSql calls,
+        # 6 calls in total
+        self._cursor.executemany(
+            """SELECT * FROM contacts WHERE contact_id = %s""",
+            ((1,), (2,), (3,)),
+        )
+        self._cursor.fetchone()
+        self._cursor.fetchmany(2)
+        dbapi_database._method_abort_interceptor.set_method_to_abort(
+            COMMIT_METHOD, self._conn
+        )
+        # called (at least) 2 times
+        self._conn.commit()
+        dbapi_database._method_abort_interceptor.reset()
+        # Verify the number of calls.
+        # We don't know the exact number of calls, as Spanner could also
+        # abort the transaction.
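+        # The '>=' lower bounds below assume each statement ran once before
+        # the aborted commit and at least once more during the retry; extra
+        # server-side aborts only increase the counts.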
+        assert method_count_interceptor._counts[COMMIT_METHOD] >= 2
+        assert method_count_interceptor._counts[EXECUTE_BATCH_DML_METHOD] >= 4
+        assert method_count_interceptor._counts[EXECUTE_STREAMING_SQL_METHOD] >= 10
+
+        self._cursor.execute("SELECT * FROM contacts")
+        got_rows = self._cursor.fetchall()
+        assert len(got_rows) == 5
+
+    @pytest.mark.skipif(
+        _helpers.USE_EMULATOR,
+        reason="Emulator does not support concurrent transactions.",
+    )
+    def test_retry_aborted_exception(self, shared_instance, dbapi_database):
+        """Test that the retry fails with a RetryAborted error when rows are
+        updated during the retry."""
+
+        conn1 = Connection(shared_instance, dbapi_database)
+        cursor1 = conn1.cursor()
+        cursor1.execute(
+            """
+            INSERT INTO contacts (contact_id, first_name, last_name, email)
+            VALUES (1, 'first-name', 'last-name', 'test.email@domen.ru')
+            """
+        )
+        conn1.commit()
+        cursor1.execute("SELECT * FROM contacts")
+        cursor1.fetchall()
+
+        conn2 = Connection(shared_instance, dbapi_database)
+        cursor2 = conn2.cursor()
+        cursor2.execute(
+            """
+            UPDATE contacts
+            SET email = 'test.email_updated@domen.ru'
+            WHERE contact_id = 1
+            """
+        )
+        conn2.commit()
+
+        dbapi_database._method_abort_interceptor.set_method_to_abort(
+            COMMIT_METHOD, conn1
+        )
+        with pytest.raises(RetryAborted):
+            conn1.commit()
+        dbapi_database._method_abort_interceptor.reset()
+
+    def test_execute_sql_abort_retry_multiple_times(self, dbapi_database):
+        """Test that when ExecuteStreamingSql fails twice with an Abort
+        exception, the retry succeeds on the third attempt."""
+
+        method_count_interceptor = dbapi_database._method_count_interceptor
+        method_count_interceptor.reset()
+        self._cursor.execute("start batch dml")
+        self._insert_row(1)
+        self._insert_row(2)
+        self._cursor.execute("run batch")
+        # Abort the method twice before letting it succeed.
+        dbapi_database._method_abort_interceptor.set_method_to_abort(
+            EXECUTE_STREAMING_SQL_METHOD, self._conn, 2
+        )
+        self._cursor.execute("SELECT * FROM contacts")
+        self._cursor.fetchmany(2)
+        dbapi_database._method_abort_interceptor.reset()
+        self._conn.commit()
+        # Check that all RPCs except Commit are called at least 3 times.
+        # We don't know the exact number of attempts, as the transaction could
+        # also be aborted by Spanner (and not only the test interceptor).
+        assert method_count_interceptor._counts[COMMIT_METHOD] >= 1
+        assert method_count_interceptor._counts[EXECUTE_BATCH_DML_METHOD] >= 3
+        assert method_count_interceptor._counts[EXECUTE_STREAMING_SQL_METHOD] >= 3
+
+        self._cursor.execute("SELECT * FROM contacts")
+        got_rows = self._cursor.fetchall()
+        assert len(got_rows) == 2
+
+    def test_execute_batch_dml_abort_retry(self, dbapi_database):
+        """Test that when an ExecuteBatchDml call fails with an Abort
+        exception, the retry succeeds for a transaction containing insert and
+        query statements as well as batch DML statements."""
+
+        method_count_interceptor = dbapi_database._method_count_interceptor
+        method_count_interceptor.reset()
+        # called 3 times
+        self._insert_row(1)
+        # called 3 times
+        self._cursor.execute("SELECT * FROM contacts")
+        self._cursor.fetchall()
+        self._cursor.execute("start batch dml")
+        self._insert_row(2)
+        self._insert_row(3)
+        dbapi_database._method_abort_interceptor.set_method_to_abort(
+            EXECUTE_BATCH_DML_METHOD, self._conn, 2
+        )
+        # called 3 times
+        self._cursor.execute("run batch")
+        dbapi_database._method_abort_interceptor.reset()
+        self._conn.commit()
+        assert method_count_interceptor._counts[COMMIT_METHOD] >= 1
+        assert method_count_interceptor._counts[EXECUTE_BATCH_DML_METHOD] >= 3
+        assert method_count_interceptor._counts[EXECUTE_STREAMING_SQL_METHOD] >= 6
+
+        self._cursor.execute("SELECT * FROM contacts")
+        got_rows = self._cursor.fetchall()
+        assert len(got_rows) == 3
+
+    def test_multiple_aborts_in_transaction(self, dbapi_database):
+        """Test that when multiple Abort exceptions occur on different
+        statements in a transaction, the retry succeeds."""
+
+        method_count_interceptor = dbapi_database._method_count_interceptor
+        method_count_interceptor.reset()
+        # called at least 3 times
+        self._insert_row(1)
+        dbapi_database._method_abort_interceptor.set_method_to_abort(
+            EXECUTE_STREAMING_SQL_METHOD, self._conn
+        )
+        # called at least 3 times
+        self._cursor.execute("SELECT * FROM contacts")
+        dbapi_database._method_abort_interceptor.reset()
+        self._cursor.fetchall()
+        # called at least 2 times
+        self._insert_row(2)
+        # called at least 2 times
+        self._cursor.execute("SELECT * FROM contacts")
+        self._cursor.fetchone()
+        dbapi_database._method_abort_interceptor.set_method_to_abort(
+            COMMIT_METHOD, self._conn
+        )
+        # called at least 2 times
+        self._conn.commit()
+        dbapi_database._method_abort_interceptor.reset()
+        assert method_count_interceptor._counts[COMMIT_METHOD] >= 2
+        assert method_count_interceptor._counts[EXECUTE_STREAMING_SQL_METHOD] >= 10
+
+        self._cursor.execute("SELECT * FROM contacts")
+        got_rows = self._cursor.fetchall()
+        assert len(got_rows) == 2
+
+    def test_consecutive_aborted_transactions(self, dbapi_database):
+        """Test that two consecutive transactions with Abort exceptions on the
+        same connection work."""
+
+        method_count_interceptor = dbapi_database._method_count_interceptor
+        method_count_interceptor.reset()
+        self._insert_row(1)
+        self._insert_row(2)
+        self._cursor.execute("SELECT * FROM contacts")
+        self._cursor.fetchall()
+        dbapi_database._method_abort_interceptor.set_method_to_abort(
+            COMMIT_METHOD, self._conn
+        )
+        self._conn.commit()
+        dbapi_database._method_abort_interceptor.reset()
+        assert method_count_interceptor._counts[COMMIT_METHOD] >= 2
+        assert method_count_interceptor._counts[EXECUTE_STREAMING_SQL_METHOD] >= 6
+
+        method_count_interceptor = dbapi_database._method_count_interceptor
+        method_count_interceptor.reset()
+        self._insert_row(3)
self._insert_row(4) + self._cursor.execute("SELECT * FROM contacts") + self._cursor.fetchall() + dbapi_database._method_abort_interceptor.set_method_to_abort( + COMMIT_METHOD, self._conn + ) + self._conn.commit() + dbapi_database._method_abort_interceptor.reset() + assert method_count_interceptor._counts[COMMIT_METHOD] >= 2 + assert method_count_interceptor._counts[EXECUTE_STREAMING_SQL_METHOD] >= 6 + + self._cursor.execute("SELECT * FROM contacts") + got_rows = self._cursor.fetchall() + assert len(got_rows) == 4 + + def test_abort_retry_multiple_cursors(self, dbapi_database): + """Test that retry works when multiple cursors are involved in the transaction.""" + + self._insert_row(1) + self._insert_row(2) + self._insert_row(3) + self._insert_row(4) + self._conn.commit() + + cur1 = self._conn.cursor() + cur1.execute("SELECT * FROM contacts WHERE contact_id IN (1, 2)") + cur2 = self._conn.cursor() + cur2.execute("SELECT * FROM contacts WHERE contact_id IN (3, 4)") + row1 = cur1.fetchone() + row2 = cur2.fetchone() + row3 = cur1.fetchone() + row4 = cur2.fetchone() + dbapi_database._method_abort_interceptor.set_method_to_abort( + COMMIT_METHOD, self._conn + ) + self._conn.commit() + dbapi_database._method_abort_interceptor.reset() + + assert set([row1, row3]) == set( + [ + (1, "first-name-1", "last-name-1", "test.email@domen.ru"), + (2, "first-name-2", "last-name-2", "test.email@domen.ru"), + ] + ) + assert set([row2, row4]) == set( + [ + (3, "first-name-3", "last-name-3", "test.email@domen.ru"), + (4, "first-name-4", "last-name-4", "test.email@domen.ru"), + ] + ) + + def test_begin_success_post_commit(self): + """Test that beginning a new transaction after committing an existing transaction + is possible on a connection in autocommit mode.""" + want_row = (2, "first-name", "last-name", "test.email@domen.ru") + self._conn.autocommit = True + self._cursor.execute("begin transaction") + self._cursor.execute( + """ + INSERT INTO contacts (contact_id, first_name, last_name, email) + VALUES (2, 'first-name', 'last-name', 'test.email@domen.ru') + """ + ) + self._conn.commit() + + self._cursor.execute("begin transaction") + self._cursor.execute("SELECT * FROM contacts") + got_rows = self._cursor.fetchall() + self._conn.commit() + assert got_rows == [want_row] + + def test_begin_error_before_commit(self): + """Test that beginning a new transaction before committing an existing transaction is not possible on a connection in autocommit mode.""" + self._conn.autocommit = True + self._cursor.execute("begin transaction") + self._cursor.execute( + """ + INSERT INTO contacts (contact_id, first_name, last_name, email) + VALUES (2, 'first-name', 'last-name', 'test.email@domen.ru') + """ + ) + + with pytest.raises(OperationalError): + self._cursor.execute("begin transaction") + + @pytest.mark.parametrize("client_side", [False, True]) + def test_rollback(self, client_side): + """Test rolling back a transaction with several statements.""" + want_row = (2, "first-name", "last-name", "test.email@domen.ru") + + self._cursor.execute( + """ + INSERT INTO contacts (contact_id, first_name, last_name, email) + VALUES (2, 'first-name', 'last-name', 'test.email@domen.ru') + """ + ) + self._conn.commit() + + # execute several DMLs with one transaction + self._cursor.execute( + """ + UPDATE contacts + SET first_name = 'updated-first-name' + WHERE first_name = 'first-name' + """ + ) + self._cursor.execute( + """ + UPDATE contacts + SET email = 'test.email_updated@domen.ru' + WHERE email = 
'test.email@domen.ru' + """ + ) + + if client_side: + self._cursor.execute("ROLLBACK") + else: + self._conn.rollback() + + # read the resulting data from the database + self._cursor.execute("SELECT * FROM contacts") + got_rows = self._cursor.fetchall() + self._conn.commit() + + assert got_rows == [want_row] + + def test_autocommit_mode_change(self): + """Test auto committing a transaction on `autocommit` mode change.""" + want_row = ( + 2, + "updated-first-name", + "last-name", + "test.email@domen.ru", + ) + + self._cursor.execute( + """ + INSERT INTO contacts (contact_id, first_name, last_name, email) + VALUES (2, 'first-name', 'last-name', 'test.email@domen.ru') + """ + ) + self._cursor.execute( + """ + UPDATE contacts + SET first_name = 'updated-first-name' + WHERE first_name = 'first-name' + """ + ) + self._conn.autocommit = True + + # read the resulting data from the database + self._cursor.execute("SELECT * FROM contacts") + got_rows = self._cursor.fetchall() + + assert got_rows == [want_row] + + @pytest.mark.noautofixt + def test_rollback_on_connection_closing(self, shared_instance, dbapi_database): + """ + When a connection is closed, all pending transactions + must be rolled back. + """ + want_row = (1, "first-name", "last-name", "test.email@domen.ru") + # connect to the test database + conn = Connection(shared_instance, dbapi_database) + cursor = conn.cursor() + + cursor.execute( + """ + INSERT INTO contacts (contact_id, first_name, last_name, email) + VALUES (1, 'first-name', 'last-name', 'test.email@domen.ru') + """ + ) + conn.commit() + + cursor.execute( + """ + UPDATE contacts + SET first_name = 'updated-first-name' + WHERE first_name = 'first-name' + """ + ) + conn.close() + + # connect again, as the previous connection is a no-op after closing + conn = Connection(shared_instance, dbapi_database) + cursor = conn.cursor() + + # read the resulting data from the database + cursor.execute("SELECT * FROM contacts") + got_rows = cursor.fetchall() + conn.commit() + + assert got_rows == [want_row] + + cursor.close() + conn.close() + + def test_execute_many(self): + row_data = [ + (1, "first-name", "last-name", "test.email@example.com"), + (2, "first-name2", "last-name2", "test.email2@example.com"), + ] + self._cursor.executemany( + """ + INSERT INTO contacts (contact_id, first_name, last_name, email) + VALUES (%s, %s, %s, %s) + """, + row_data, + ) + self._conn.commit() + + self._cursor.executemany( + """SELECT * FROM contacts WHERE contact_id = %s""", + ((1,), (2,)), + ) + res = self._cursor.fetchall() + self._conn.commit() + + assert len(res) == len(row_data) + for found, expected in zip(res, row_data): + assert found[0] == expected[0] + + # checking that execute() and executemany() + # results are not mixed together + self._cursor.execute( + """ + SELECT * FROM contacts WHERE contact_id = 1 + """, + ) + res = self._cursor.fetchone() + self._conn.commit() + + assert res[0] == 1 + + @pytest.mark.noautofixt + def test_DDL_autocommit(self, shared_instance, dbapi_database): + """Check that DDLs in autocommit mode are immediately executed.""" + + try: + conn = Connection(shared_instance, dbapi_database) + conn.autocommit = True + + cur = conn.cursor() + cur.execute( + """ + CREATE TABLE Singers ( + SingerId INT64 NOT NULL, + Name STRING(1024), + ) PRIMARY KEY (SingerId) + """ + ) + conn.close() + + # if previous DDL wasn't committed, the next DROP TABLE + # statement will fail with a ProgrammingError + conn = Connection(shared_instance, dbapi_database) + 
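For reference, the connection lifecycle these tests probe follows the standard DB API shape; a minimal sketch, using placeholder instance and database IDs:

.. code-block:: python

    from google.cloud.spanner_dbapi import connect

    # Placeholder IDs; substitute a real instance and database.
    conn = connect("my-instance", "my-database")
    cursor = conn.cursor()
    try:
        cursor.execute(
            "INSERT INTO contacts (contact_id, first_name, last_name, email)"
            " VALUES (1, 'a', 'b', 'a.b@example.com')"
        )
        conn.commit()  # flush the buffered transaction
    except Exception:
        conn.rollback()  # discard the buffered statements instead
    finally:
        conn.close()  # closing rolls back anything still pending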
cur = conn.cursor() + + cur.execute("DROP TABLE Singers") + conn.commit() + finally: + # Delete table + table = dbapi_database.table("Singers") + if table.exists(): + op = dbapi_database.update_ddl(["DROP TABLE Singers"]) + op.result() + + def test_ddl_execute_autocommit_true(self, dbapi_database): + """Check that a DDL statement in autocommit mode executes successfully + via the execute method.""" + + self._conn.autocommit = True + self._cursor.execute( + """ + CREATE TABLE DdlExecuteAutocommit ( + SingerId INT64 NOT NULL, + Name STRING(1024), + ) PRIMARY KEY (SingerId) + """ + ) + table = dbapi_database.table("DdlExecuteAutocommit") + assert table.exists() is True + + def test_ddl_executemany_autocommit_true(self, dbapi_database): + """Check that a DDL statement in autocommit mode raises an exception + via the executemany method.""" + + self._conn.autocommit = True + with pytest.raises(ProgrammingError): + self._cursor.executemany( + """ + CREATE TABLE DdlExecuteManyAutocommit ( + SingerId INT64 NOT NULL, + Name STRING(1024), + ) PRIMARY KEY (SingerId) + """, + [], + ) + table = dbapi_database.table("DdlExecuteManyAutocommit") + assert table.exists() is False + + def test_ddl_executemany_autocommit_false(self, dbapi_database): + """Check that a DDL statement in non-autocommit mode raises an exception + via the executemany method.""" + with pytest.raises(ProgrammingError): + self._cursor.executemany( + """ + CREATE TABLE DdlExecuteManyAutocommit ( + SingerId INT64 NOT NULL, + Name STRING(1024), + ) PRIMARY KEY (SingerId) + """, + [], + ) + table = dbapi_database.table("DdlExecuteManyAutocommit") + assert table.exists() is False + + def test_ddl_execute(self, dbapi_database): + """Check that a DDL statement followed by a non-DDL execute statement in + non-autocommit mode results in successful DDL execution.""" + + want_row = ( + 1, + "first-name", + ) + self._cursor.execute( + """ + CREATE TABLE DdlExecute ( + SingerId INT64 NOT NULL, + Name STRING(1024), + ) PRIMARY KEY (SingerId) + """ + ) + table = dbapi_database.table("DdlExecute") + assert table.exists() is False + + self._cursor.execute( + """ + INSERT INTO DdlExecute (SingerId, Name) + VALUES (1, "first-name") + """ + ) + assert table.exists() is True + self._conn.commit() + + # read the resulting data from the database + self._cursor.execute("SELECT * FROM DdlExecute") + got_rows = self._cursor.fetchall() + + assert got_rows == [want_row] + + def test_ddl_executemany(self, dbapi_database): + """Check that a DDL statement followed by a non-DDL executemany statement in + non-autocommit mode results in successful DDL execution.""" + + want_row = ( + 1, + "first-name", + ) + self._cursor.execute( + """ + CREATE TABLE DdlExecuteMany ( + SingerId INT64 NOT NULL, + Name STRING(1024), + ) PRIMARY KEY (SingerId) + """ + ) + table = dbapi_database.table("DdlExecuteMany") + assert table.exists() is False + + self._cursor.executemany( + """ + INSERT INTO DdlExecuteMany (SingerId, Name) + VALUES (%s, %s) + """, + [want_row], + ) + assert table.exists() is True + self._conn.commit() + + # read the resulting data from the database + self._cursor.execute("SELECT * FROM DdlExecuteMany") + got_rows = self._cursor.fetchall() + + assert got_rows == [want_row] + + @pytest.mark.skipif(_helpers.USE_EMULATOR, reason="Emulator does not support json.") + def test_autocommit_with_json_data(self, dbapi_database): + """ + Check that DDLs in autocommit mode are immediately + executed for JSON fields. 
+ """ + try: + self._conn.autocommit = True + self._cursor.execute( + """ + CREATE TABLE JsonDetails ( + DataId INT64 NOT NULL, + Details JSON, + ) PRIMARY KEY (DataId) + """ + ) + + # Insert data to table + self._cursor.execute( + sql="INSERT INTO JsonDetails (DataId, Details) VALUES (%s, %s)", + args=(123, JsonObject({"name": "Jakob", "age": "26"})), + ) + + # Read back the data. + self._cursor.execute("""select * from JsonDetails;""") + got_rows = self._cursor.fetchall() + + # Assert the response + assert len(got_rows) == 1 + assert got_rows[0][0] == 123 + assert got_rows[0][1] == {"age": "26", "name": "Jakob"} + + # Drop the table + self._cursor.execute("DROP TABLE JsonDetails") + self._conn.commit() + finally: + # Delete table + table = dbapi_database.table("JsonDetails") + if table.exists(): + op = dbapi_database.update_ddl(["DROP TABLE JsonDetails"]) + op.result() + + @pytest.mark.skipif(_helpers.USE_EMULATOR, reason="Emulator does not support json.") + def test_json_array(self, dbapi_database): + try: + # Create table + self._conn.autocommit = True + + self._cursor.execute( + """ + CREATE TABLE JsonDetails ( + DataId INT64 NOT NULL, + Details JSON, + ) PRIMARY KEY (DataId) + """ + ) + self._cursor.execute( + "INSERT INTO JsonDetails (DataId, Details) VALUES (%s, %s)", + [1, JsonObject([1, 2, 3])], + ) + + self._cursor.execute("SELECT * FROM JsonDetails WHERE DataId = 1") + row = self._cursor.fetchone() + assert isinstance(row[1], JsonObject) + assert row[1].serialize() == "[1,2,3]" + + self._cursor.execute("DROP TABLE JsonDetails") + finally: + # Delete table + table = dbapi_database.table("JsonDetails") + if table.exists(): + op = dbapi_database.update_ddl(["DROP TABLE JsonDetails"]) + op.result() + + @pytest.mark.noautofixt + def test_DDL_commit(self, shared_instance, dbapi_database): + """Check that DDLs in commit mode are executed on calling `commit()`.""" + try: + conn = Connection(shared_instance, dbapi_database) + cur = conn.cursor() + + cur.execute( + """ + CREATE TABLE Singers ( + SingerId INT64 NOT NULL, + Name STRING(1024), + ) PRIMARY KEY (SingerId) + """ + ) + conn.commit() + conn.close() + + # if previous DDL wasn't committed, the next DROP TABLE + # statement will fail with a ProgrammingError + conn = Connection(shared_instance, dbapi_database) + cur = conn.cursor() + + cur.execute("DROP TABLE Singers") + conn.commit() + finally: + # Delete table + table = dbapi_database.table("Singers") + if table.exists(): + op = dbapi_database.update_ddl(["DROP TABLE Singers"]) + op.result() + + def test_ping(self): + """Check connection validation method.""" + self._conn.validate() + + @pytest.mark.noautofixt + def test_user_agent(self, shared_instance, dbapi_database): + """Check that DB API uses an appropriate user agent.""" + conn = connect(shared_instance.name, dbapi_database.name) + assert ( + conn.instance._client._client_info.user_agent + == "gl-dbapi/" + package_version.__version__ + ) + assert ( + conn.instance._client._client_info.client_library_version + == package_version.__version__ + ) + + def test_read_only(self): + """ + Check that connection set to `read_only=True` uses + ReadOnly transactions. + """ + + self._conn.read_only = True + self._cursor.execute("SELECT * FROM contacts") + assert self._cursor.fetchall() == [] + self._conn.commit() + + def test_read_only_dml(self): + """ + Check that connection set to `read_only=True` leads to exception when + executing dml statements. 
+ """ + + self._conn.read_only = True + with pytest.raises(ProgrammingError): + self._cursor.execute( + """ + UPDATE contacts + SET first_name = 'updated-first-name' + WHERE first_name = 'first-name' + """ + ) + + def test_staleness(self): + """Check the DB API `staleness` option.""" + + before_insert = datetime.datetime.utcnow().replace(tzinfo=UTC) + time.sleep(0.25) + + self._cursor.execute( + """ + INSERT INTO contacts (contact_id, first_name, last_name, email) + VALUES (1, 'first-name', 'last-name', 'test.email@example.com') + """ + ) + self._conn.commit() + + self._conn.read_only = True + self._conn.staleness = {"read_timestamp": before_insert} + self._cursor.execute("SELECT * FROM contacts") + self._conn.commit() + assert len(self._cursor.fetchall()) == 0 + + self._conn.staleness = None + self._cursor.execute("SELECT * FROM contacts") + self._conn.commit() + assert len(self._cursor.fetchall()) == 1 + + @pytest.mark.parametrize("autocommit", [False, True]) + def test_rowcount(self, dbapi_database, autocommit): + try: + self._conn.autocommit = autocommit + + self._cursor.execute( + """ + CREATE TABLE Singers ( + SingerId INT64 NOT NULL, + Name STRING(1024), + ) PRIMARY KEY (SingerId) + """ + ) + self._conn.commit() + + # executemany sets rowcount to the total modified rows + rows = [(i, f"Singer {i}") for i in range(100)] + self._cursor.executemany( + "INSERT INTO Singers (SingerId, Name) VALUES (%s, %s)", rows[:98] + ) + assert self._cursor.rowcount == 98 + + # execute with INSERT + self._cursor.execute( + "INSERT INTO Singers (SingerId, Name) VALUES (%s, %s), (%s, %s)", + [x for row in rows[98:] for x in row], + ) + assert self._cursor.rowcount == 2 + + # execute with UPDATE + self._cursor.execute("UPDATE Singers SET Name = 'Cher' WHERE SingerId < 25") + assert self._cursor.rowcount == 25 + + # execute with SELECT + self._cursor.execute("SELECT Name FROM Singers WHERE SingerId < 75") + assert len(self._cursor.fetchall()) == 75 + # rowcount is not available for SELECT + assert self._cursor.rowcount == -1 + + # execute with DELETE + self._cursor.execute("DELETE FROM Singers") + assert self._cursor.rowcount == 100 + + # execute with UPDATE matching 0 rows + self._cursor.execute("UPDATE Singers SET Name = 'Cher' WHERE SingerId < 25") + assert self._cursor.rowcount == 0 + + self._conn.commit() + self._cursor.execute("DROP TABLE Singers") + self._conn.commit() + finally: + # Delete table + table = dbapi_database.table("Singers") + if table.exists(): + op = dbapi_database.update_ddl(["DROP TABLE Singers"]) + op.result() + + @pytest.mark.parametrize("autocommit", [False, True]) + @pytest.mark.skipif( + _helpers.USE_EMULATOR, reason="Emulator does not support DML Returning." + ) + def test_dml_returning_insert(self, autocommit): + self._conn.autocommit = autocommit + self._cursor.execute( + """ + INSERT INTO contacts (contact_id, first_name, last_name, email) + VALUES (1, 'first-name', 'last-name', 'test.email@example.com') + THEN RETURN contact_id, first_name + """ + ) + assert self._cursor.fetchone() == (1, "first-name") + assert self._cursor.rowcount == 1 + self._conn.commit() + + @pytest.mark.parametrize("autocommit", [False, True]) + @pytest.mark.skipif( + _helpers.USE_EMULATOR, reason="Emulator does not support DML Returning." 
+ ) + def test_dml_returning_update(self, autocommit): + self._conn.autocommit = autocommit + self._cursor.execute( + """ + INSERT INTO contacts (contact_id, first_name, last_name, email) + VALUES (1, 'first-name', 'last-name', 'test.email@example.com') + """ + ) + assert self._cursor.rowcount == 1 + self._cursor.execute( + """ + UPDATE contacts SET first_name = 'new-name' WHERE contact_id = 1 + THEN RETURN contact_id, first_name + """ + ) + assert self._cursor.fetchone() == (1, "new-name") + assert self._cursor.rowcount == 1 + self._conn.commit() + + @pytest.mark.parametrize("autocommit", [False, True]) + @pytest.mark.skipif( + _helpers.USE_EMULATOR, reason="Emulator does not support DML Returning." + ) + def test_dml_returning_delete(self, autocommit): + self._conn.autocommit = autocommit + self._cursor.execute( + """ + INSERT INTO contacts (contact_id, first_name, last_name, email) + VALUES (1, 'first-name', 'last-name', 'test.email@example.com') + """ + ) + assert self._cursor.rowcount == 1 + self._cursor.execute( + """ + DELETE FROM contacts WHERE contact_id = 1 + THEN RETURN contact_id, first_name + """ + ) + assert self._cursor.fetchone() == (1, "first-name") + assert self._cursor.rowcount == 1 + self._conn.commit() + + @pytest.mark.parametrize("include_views", [True, False]) + def test_list_tables(self, include_views): + tables = self._cursor.list_tables(include_views=include_views) + table_names = set(table[0] for table in tables) + + assert "contacts" in table_names + + if include_views: + assert "contacts_emails" in table_names + else: # if not include_views: + assert "contacts_emails" not in table_names + + def test_invalid_statement_error(self): + with pytest.raises(ProgrammingError): + self._cursor.execute("-- comment only") + + def test_insert_all_types(self): + """Test inserting all supported data types""" + + self._conn.autocommit = True + self._cursor.execute( + """ + INSERT INTO all_types (id, col_bool, col_bytes, col_date, col_float32, col_float64, + col_int64, col_json, col_numeric, col_string, col_timestamp) + VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s) + """, + ( + 1, + True, + base64.b64encode(b"test-bytes"), + datetime.date(2024, 12, 3), + 3.14, + 3.14, + 123, + JsonObject({"key": "value"}), + decimal.Decimal("3.14"), + "test-string", + datetime.datetime(2024, 12, 3, 17, 30, 14), + ), + ) + assert self._cursor.rowcount == 1 diff --git a/tests/system/test_instance_api.py b/tests/system/test_instance_api.py new file mode 100644 index 0000000000..fe962d2ccb --- /dev/null +++ b/tests/system/test_instance_api.py @@ -0,0 +1,141 @@ +# Copyright 2021 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import pytest + +from test_utils import retry + +from . 
import _helpers + + +@pytest.fixture(scope="function") +def instances_to_delete(): + to_delete = [] + + yield to_delete + + for instance in to_delete: + _helpers.scrub_instance_ignore_not_found(instance) + + +def test_list_instances( + no_create_instance, + spanner_client, + existing_instances, + shared_instance, +): + instances = list(spanner_client.list_instances()) + + for instance in instances: + assert instance in existing_instances or instance is shared_instance + + +def test_reload_instance(spanner_client, shared_instance_id, shared_instance): + # Use same arguments as shared_instance_id so we can use 'reload()' + # on a fresh instance. + instance = spanner_client.instance(shared_instance_id) + + # Unset metadata before reloading. + instance.display_name = None + + def _expected_display_name(instance): + return instance.display_name == shared_instance.display_name + + retry_until = retry.RetryInstanceState(_expected_display_name) + + retry_until(instance.reload)() + + assert instance.display_name == shared_instance.display_name + + +def test_create_instance( + if_create_instance, + spanner_client, + instance_config, + instances_to_delete, + instance_operation_timeout, +): + alt_instance_id = _helpers.unique_id("new") + instance = spanner_client.instance(alt_instance_id, instance_config.name) + operation = instance.create() + # Make sure this instance gets deleted after the test case. + instances_to_delete.append(instance) + + # We want to make sure the operation completes. + operation.result(instance_operation_timeout) # raises on failure / timeout. + + # Create a new instance object and make sure it is the same. + instance_alt = spanner_client.instance(alt_instance_id, instance_config.name) + instance_alt.reload() + + assert instance == instance_alt + assert instance.display_name == instance_alt.display_name + + +def test_create_instance_with_processing_units( + if_create_instance, + spanner_client, + instance_config, + instances_to_delete, + instance_operation_timeout, +): + alt_instance_id = _helpers.unique_id("wpn") + processing_units = 5000 + instance = spanner_client.instance( + instance_id=alt_instance_id, + configuration_name=instance_config.name, + processing_units=processing_units, + ) + operation = instance.create() + # Make sure this instance gets deleted after the test case. + instances_to_delete.append(instance) + + # We want to make sure the operation completes. + operation.result(instance_operation_timeout) # raises on failure / timeout. + + # Create a new instance object and make sure it is the same. + instance_alt = spanner_client.instance(alt_instance_id, instance_config.name) + instance_alt.reload() + + assert instance == instance_alt + assert instance.display_name == instance_alt.display_name + assert instance.processing_units == instance_alt.processing_units + + +def test_update_instance( + not_emulator, + spanner_client, + shared_instance, + shared_instance_id, + instance_operation_timeout, +): + old_display_name = shared_instance.display_name + new_display_name = "Foo Bar Baz" + shared_instance.display_name = new_display_name + operation = shared_instance.update() + + # We want to make sure the operation completes. + operation.result(instance_operation_timeout) # raises on failure / timeout. + + # Create a new instance object and reload it. 
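The create-with-processing-units flow above mirrors ordinary client usage; a minimal sketch, assuming a placeholder project and a regional instance configuration:

.. code-block:: python

    from google.cloud import spanner

    client = spanner.Client(project="my-project")  # placeholder project ID
    instance = client.instance(
        instance_id="my-instance",  # placeholder instance ID
        configuration_name="projects/my-project/instanceConfigs/regional-us-central1",
        processing_units=500,  # alternative to node_count; 1000 units == 1 node
    )
    operation = instance.create()  # returns a long-running operation
    operation.result(300)  # block (up to 300s) until the instance is ready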
+ instance_alt = spanner_client.instance(shared_instance_id, None) + assert instance_alt.display_name != new_display_name + + instance_alt.reload() + assert instance_alt.display_name == new_display_name + + # Make sure to put the instance back the way it was for the + # other test cases. + shared_instance.display_name = old_display_name + shared_instance.update() diff --git a/tests/system/test_observability_options.py b/tests/system/test_observability_options.py new file mode 100644 index 0000000000..8ebcffcb7f --- /dev/null +++ b/tests/system/test_observability_options.py @@ -0,0 +1,552 @@ +# Copyright 2024 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import pytest +from mock import PropertyMock, patch + +from google.cloud.spanner_v1.session import Session +from google.cloud.spanner_v1.database_sessions_manager import TransactionType +from . import _helpers +from google.cloud.spanner_v1 import Client +from google.api_core.exceptions import Aborted +from google.auth.credentials import AnonymousCredentials +from google.rpc import code_pb2 + +from .._helpers import is_multiplexed_enabled + +HAS_OTEL_INSTALLED = False + +try: + from opentelemetry.sdk.trace.export import SimpleSpanProcessor + from opentelemetry.sdk.trace.export.in_memory_span_exporter import ( + InMemorySpanExporter, + ) + from opentelemetry.sdk.trace import TracerProvider + from opentelemetry.sdk.trace.sampling import ALWAYS_ON + from opentelemetry import trace + + HAS_OTEL_INSTALLED = True +except ImportError: + pass + + +@pytest.mark.skipif( + not HAS_OTEL_INSTALLED, reason="OpenTelemetry is necessary to test traces." +) +@pytest.mark.skipif( + not _helpers.USE_EMULATOR, reason="Emulator is necessary to test traces." 
+) +def test_observability_options_propagation(): + PROJECT = _helpers.EMULATOR_PROJECT + CONFIGURATION_NAME = "config-name" + INSTANCE_ID = _helpers.INSTANCE_ID + DISPLAY_NAME = "display-name" + DATABASE_ID = _helpers.unique_id("temp_db") + NODE_COUNT = 5 + LABELS = {"test": "true"} + + def test_propagation(enable_extended_tracing): + global_tracer_provider = TracerProvider(sampler=ALWAYS_ON) + trace.set_tracer_provider(global_tracer_provider) + global_trace_exporter = InMemorySpanExporter() + global_tracer_provider.add_span_processor( + SimpleSpanProcessor(global_trace_exporter) + ) + + inject_tracer_provider = TracerProvider(sampler=ALWAYS_ON) + inject_trace_exporter = InMemorySpanExporter() + inject_tracer_provider.add_span_processor( + SimpleSpanProcessor(inject_trace_exporter) + ) + observability_options = dict( + tracer_provider=inject_tracer_provider, + enable_extended_tracing=enable_extended_tracing, + ) + client = Client( + project=PROJECT, + observability_options=observability_options, + credentials=_make_credentials(), + ) + + instance = client.instance( + INSTANCE_ID, + CONFIGURATION_NAME, + display_name=DISPLAY_NAME, + node_count=NODE_COUNT, + labels=LABELS, + ) + + try: + instance.create() + except Exception: + pass + + db = instance.database(DATABASE_ID) + try: + db.create() + except Exception: + pass + + assert db.observability_options == observability_options + with db.snapshot() as snapshot: + res = snapshot.execute_sql("SELECT 1") + for val in res: + _ = val + + from_global_spans = global_trace_exporter.get_finished_spans() + target_spans = inject_trace_exporter.get_finished_spans() + from_inject_spans = sorted(target_spans, key=lambda v1: v1.start_time) + assert ( + len(from_global_spans) == 0 + ) # "Expecting no spans from the global trace exporter" + assert ( + len(from_inject_spans) >= 2 + ) # "Expecting at least 2 spans from the injected trace exporter" + gotNames = [span.name for span in from_inject_spans] + + # Check if multiplexed sessions are enabled + multiplexed_enabled = is_multiplexed_enabled(TransactionType.READ_ONLY) + + # Determine expected session span name based on multiplexed sessions + expected_session_span_name = ( + "CloudSpanner.CreateMultiplexedSession" + if multiplexed_enabled + else "CloudSpanner.CreateSession" + ) + + wantNames = [ + expected_session_span_name, + "CloudSpanner.Snapshot.execute_sql", + ] + assert gotNames == wantNames + + # Check for conformance of enable_extended_tracing + lastSpan = from_inject_spans[len(from_inject_spans) - 1] + wantAnnotatedSQL = "SELECT 1" + if not enable_extended_tracing: + wantAnnotatedSQL = None + assert ( + lastSpan.attributes.get("db.statement", None) == wantAnnotatedSQL + ) # "Mismatch in annotated sql" + + try: + db.delete() + instance.delete() + except Exception: + pass + + # Test the respective options for enable_extended_tracing + test_propagation(True) + test_propagation(False) + + +def create_db_trace_exporter(): + from opentelemetry.sdk.trace.export import SimpleSpanProcessor + from opentelemetry.sdk.trace.export.in_memory_span_exporter import ( + InMemorySpanExporter, + ) + from opentelemetry.sdk.trace import TracerProvider + from opentelemetry.sdk.trace.sampling import ALWAYS_ON + + PROJECT = _helpers.EMULATOR_PROJECT + CONFIGURATION_NAME = "config-name" + INSTANCE_ID = _helpers.INSTANCE_ID + DISPLAY_NAME = "display-name" + DATABASE_ID = _helpers.unique_id("temp_db") + NODE_COUNT = 5 + LABELS = {"test": "true"} + + tracer_provider = TracerProvider(sampler=ALWAYS_ON) + trace_exporter = 
InMemorySpanExporter() + tracer_provider.add_span_processor(SimpleSpanProcessor(trace_exporter)) + observability_options = dict( + tracer_provider=tracer_provider, + enable_extended_tracing=True, + ) + + client = Client( + project=PROJECT, + observability_options=observability_options, + credentials=AnonymousCredentials(), + ) + + instance = client.instance( + INSTANCE_ID, + CONFIGURATION_NAME, + display_name=DISPLAY_NAME, + node_count=NODE_COUNT, + labels=LABELS, + ) + + try: + instance.create() + except Exception: + pass + + db = instance.database(DATABASE_ID) + try: + db.create() + except Exception: + pass + + return db, trace_exporter + + +@pytest.mark.skipif( + not _helpers.USE_EMULATOR, + reason="Emulator needed to run this test", +) +@pytest.mark.skipif( + not HAS_OTEL_INSTALLED, + reason="Tracing requires OpenTelemetry", +) +@patch.object(Session, "session_id", new_callable=PropertyMock) +def test_transaction_abort_then_retry_spans(mock_session_id): + from opentelemetry.trace.status import StatusCode + + mock_session_id.return_value = session_id = "session-id" + multiplexed = is_multiplexed_enabled(TransactionType.READ_WRITE) + + db, trace_exporter = create_db_trace_exporter() + + counters = dict(aborted=0) + + def select_in_txn(txn): + results = txn.execute_sql("SELECT 1") + for row in results: + _ = row + + if counters["aborted"] == 0: + counters["aborted"] = 1 + raise Aborted( + "Thrown from ClientInterceptor for testing", + errors=[_helpers.FauxCall(code_pb2.ABORTED)], + ) + + db.run_in_transaction(select_in_txn) + + got_statuses, got_events = finished_spans_statuses(trace_exporter) + + # Check for the series of events + if multiplexed: + # With multiplexed sessions, there are no pool-related events + want_events = [ + ("Creating Session", {}), + ("Using session", {"id": session_id, "multiplexed": multiplexed}), + ("Returning session", {"id": session_id, "multiplexed": multiplexed}), + ( + "Transaction was aborted in user operation, retrying", + {"delay_seconds": "EPHEMERAL", "cause": "EPHEMERAL", "attempt": 1}, + ), + ("Starting Commit", {}), + ("Commit Done", {}), + ] + else: + # With regular sessions, include pool-related events + want_events = [ + ("Acquiring session", {"kind": "BurstyPool"}), + ("Waiting for a session to become available", {"kind": "BurstyPool"}), + ("No sessions available in pool. Creating session", {"kind": "BurstyPool"}), + ("Creating Session", {}), + ("Using session", {"id": session_id, "multiplexed": multiplexed}), + ("Returning session", {"id": session_id, "multiplexed": multiplexed}), + ( + "Transaction was aborted in user operation, retrying", + {"delay_seconds": "EPHEMERAL", "cause": "EPHEMERAL", "attempt": 1}, + ), + ("Starting Commit", {}), + ("Commit Done", {}), + ] + assert got_events == want_events + + # Check for the statuses. 
+ codes = StatusCode + if multiplexed: + # With multiplexed sessions, the session span name is different + want_statuses = [ + ("CloudSpanner.Database.run_in_transaction", codes.OK, None), + ("CloudSpanner.CreateMultiplexedSession", codes.OK, None), + ("CloudSpanner.Session.run_in_transaction", codes.OK, None), + ("CloudSpanner.Transaction.execute_sql", codes.OK, None), + ("CloudSpanner.Transaction.execute_sql", codes.OK, None), + ("CloudSpanner.Transaction.commit", codes.OK, None), + ] + else: + # With regular sessions + want_statuses = [ + ("CloudSpanner.Database.run_in_transaction", codes.OK, None), + ("CloudSpanner.CreateSession", codes.OK, None), + ("CloudSpanner.Session.run_in_transaction", codes.OK, None), + ("CloudSpanner.Transaction.execute_sql", codes.OK, None), + ("CloudSpanner.Transaction.execute_sql", codes.OK, None), + ("CloudSpanner.Transaction.commit", codes.OK, None), + ] + assert got_statuses == want_statuses + + +def finished_spans_statuses(trace_exporter): + span_list = trace_exporter.get_finished_spans() + # Sort the spans by their start time in the hierarchy. + span_list = sorted(span_list, key=lambda span: span.start_time) + + got_events = [] + got_statuses = [] + + # Some event attributes are noisy/highly ephemeral + # and can't be directly compared against. + imprecise_event_attributes = ["exception.stacktrace", "delay_seconds", "cause"] + for span in span_list: + got_statuses.append( + (span.name, span.status.status_code, span.status.description) + ) + + for event in span.events: + evt_attributes = event.attributes.copy() + for attr_name in imprecise_event_attributes: + if attr_name in evt_attributes: + evt_attributes[attr_name] = "EPHEMERAL" + + got_events.append((event.name, evt_attributes)) + + return got_statuses, got_events + + +@pytest.mark.skipif( + not _helpers.USE_EMULATOR, + reason="Emulator needed to run this test", +) +@pytest.mark.skipif( + not HAS_OTEL_INSTALLED, + reason="Tracing requires OpenTelemetry", +) +def test_transaction_update_implicit_begin_nested_inside_commit(): + # Tests to ensure that transaction.commit() without an already-begun transaction + # has transaction.begin() inlined and nested under the commit span. 
+ from google.auth.credentials import AnonymousCredentials + from opentelemetry.sdk.trace.export import SimpleSpanProcessor + from opentelemetry.sdk.trace.export.in_memory_span_exporter import ( + InMemorySpanExporter, + ) + from opentelemetry.sdk.trace import TracerProvider + from opentelemetry.sdk.trace.sampling import ALWAYS_ON + + PROJECT = _helpers.EMULATOR_PROJECT + CONFIGURATION_NAME = "config-name" + INSTANCE_ID = _helpers.INSTANCE_ID + DISPLAY_NAME = "display-name" + DATABASE_ID = _helpers.unique_id("temp_db") + NODE_COUNT = 5 + LABELS = {"test": "true"} + + def tx_update(txn): + txn.insert( + "Singers", + columns=["SingerId", "FirstName"], + values=[["1", "Bryan"], ["2", "Slash"]], + ) + + tracer_provider = TracerProvider(sampler=ALWAYS_ON) + trace_exporter = InMemorySpanExporter() + tracer_provider.add_span_processor(SimpleSpanProcessor(trace_exporter)) + observability_options = dict( + tracer_provider=tracer_provider, + enable_extended_tracing=True, + ) + + client = Client( + project=PROJECT, + observability_options=observability_options, + credentials=AnonymousCredentials(), + ) + + instance = client.instance( + INSTANCE_ID, + CONFIGURATION_NAME, + display_name=DISPLAY_NAME, + node_count=NODE_COUNT, + labels=LABELS, + ) + + try: + instance.create() + except Exception: + pass + + db = instance.database(DATABASE_ID) + try: + db._ddl_statements = [ + """CREATE TABLE Singers ( + SingerId INT64 NOT NULL, + FirstName STRING(1024), + LastName STRING(1024), + SingerInfo BYTES(MAX), + FullName STRING(2048) AS ( + ARRAY_TO_STRING([FirstName, LastName], " ") + ) STORED + ) PRIMARY KEY (SingerId)""", + """CREATE TABLE Albums ( + SingerId INT64 NOT NULL, + AlbumId INT64 NOT NULL, + AlbumTitle STRING(MAX), + MarketingBudget INT64, + ) PRIMARY KEY (SingerId, AlbumId), + INTERLEAVE IN PARENT Singers ON DELETE CASCADE""", + ] + db.create() + except Exception: + pass + + try: + db.run_in_transaction(tx_update) + except Exception: + pass + + span_list = trace_exporter.get_finished_spans() + # Sort the spans by their start time in the hierarchy. 
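All of these checks share the same OpenTelemetry test recipe: install an in-memory exporter, run the operation, then inspect the finished spans. A standalone sketch of that recipe, assuming only the OpenTelemetry SDK is installed:

.. code-block:: python

    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import SimpleSpanProcessor
    from opentelemetry.sdk.trace.export.in_memory_span_exporter import (
        InMemorySpanExporter,
    )

    provider = TracerProvider()
    exporter = InMemorySpanExporter()
    provider.add_span_processor(SimpleSpanProcessor(exporter))

    tracer = provider.get_tracer("example")
    with tracer.start_as_current_span("parent"):
        with tracer.start_as_current_span("child"):
            pass

    # Sorting by start time yields the parent first.
    spans = sorted(exporter.get_finished_spans(), key=lambda s: s.start_time)
    assert [s.name for s in spans] == ["parent", "child"]
    # A child span records its parent's span_id; the tests here use the same
    # relation to assert that Transaction.begin() nests under commit.
    assert spans[1].parent.span_id == spans[0].context.span_id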
+ span_list = sorted(span_list, key=lambda span: span.start_time) + got_span_names = [span.name for span in span_list] + + # Check if multiplexed sessions are enabled for read-write transactions + multiplexed_enabled = is_multiplexed_enabled(TransactionType.READ_WRITE) + + # Determine expected session span name based on multiplexed sessions + expected_session_span_name = ( + "CloudSpanner.CreateMultiplexedSession" + if multiplexed_enabled + else "CloudSpanner.CreateSession" + ) + + want_span_names = [ + "CloudSpanner.Database.run_in_transaction", + expected_session_span_name, + "CloudSpanner.Session.run_in_transaction", + "CloudSpanner.Transaction.commit", + "CloudSpanner.Transaction.begin", + ] + + assert got_span_names == want_span_names + + # Our objective is to ensure that .begin() is a child of .commit() + span_tx_begin = span_list[-1] + span_tx_commit = span_list[-2] + assert span_tx_begin.parent.span_id == span_tx_commit.context.span_id + + +@pytest.mark.skipif( + not _helpers.USE_EMULATOR, + reason="Emulator needed to run this test", +) +@pytest.mark.skipif( + not HAS_OTEL_INSTALLED, + reason="Tracing requires OpenTelemetry", +) +def test_database_partitioned_error(): + from opentelemetry.trace.status import StatusCode + + db, trace_exporter = create_db_trace_exporter() + + try: + db.execute_partitioned_dml("UPDATE NonExistent SET name = 'foo' WHERE id > 1") + except Exception: + pass + + got_statuses, got_events = finished_spans_statuses(trace_exporter) + multiplexed_enabled = is_multiplexed_enabled(TransactionType.PARTITIONED) + + if multiplexed_enabled: + expected_event_names = [ + "Creating Session", + "Using session", + "Starting BeginTransaction", + "Returning session", + "exception", + "exception", + ] + assert len(got_events) == len(expected_event_names) + for i, expected_name in enumerate(expected_event_names): + assert got_events[i][0] == expected_name + + assert got_events[1][1]["multiplexed"] is True + + assert got_events[3][1]["multiplexed"] is True + + for i in [4, 5]: + assert ( + got_events[i][1]["exception.type"] + == "google.api_core.exceptions.InvalidArgument" + ) + assert ( + "Table not found: NonExistent" in got_events[i][1]["exception.message"] + ) + else: + expected_event_names = [ + "Acquiring session", + "Waiting for a session to become available", + "No sessions available in pool. 
Creating session", + "Creating Session", + "Using session", + "Starting BeginTransaction", + "Returning session", + "exception", + "exception", + ] + + assert len(got_events) == len(expected_event_names) + for i, expected_name in enumerate(expected_event_names): + assert got_events[i][0] == expected_name + + assert got_events[0][1]["kind"] == "BurstyPool" + assert got_events[1][1]["kind"] == "BurstyPool" + assert got_events[2][1]["kind"] == "BurstyPool" + + assert got_events[4][1]["multiplexed"] is False + + assert got_events[6][1]["multiplexed"] is False + + for i in [7, 8]: + assert ( + got_events[i][1]["exception.type"] + == "google.api_core.exceptions.InvalidArgument" + ) + assert ( + "Table not found: NonExistent" in got_events[i][1]["exception.message"] + ) + + codes = StatusCode + + expected_session_span_name = ( + "CloudSpanner.CreateMultiplexedSession" + if multiplexed_enabled + else "CloudSpanner.CreateSession" + ) + want_statuses = [ + ( + "CloudSpanner.Database.execute_partitioned_pdml", + codes.ERROR, + "InvalidArgument: 400 Table not found: NonExistent [at 1:8]\nUPDATE NonExistent SET name = 'foo' WHERE id > 1\n ^", + ), + (expected_session_span_name, codes.OK, None), + ( + "CloudSpanner.ExecuteStreamingSql", + codes.ERROR, + "InvalidArgument: 400 Table not found: NonExistent [at 1:8]\nUPDATE NonExistent SET name = 'foo' WHERE id > 1\n ^", + ), + ] + assert got_statuses == want_statuses + + +def _make_credentials(): + from google.auth.credentials import AnonymousCredentials + + return AnonymousCredentials() diff --git a/tests/system/test_session_api.py b/tests/system/test_session_api.py new file mode 100644 index 0000000000..04d8ad799a --- /dev/null +++ b/tests/system/test_session_api.py @@ -0,0 +1,3452 @@ +# Copyright 2021 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import base64 +import collections +import datetime +import decimal + +import math +import struct +import threading +import time +import pytest + +import grpc +from google.rpc import code_pb2 +from google.api_core import datetime_helpers +from google.api_core import exceptions +from google.cloud import spanner_v1 +from google.cloud.spanner_admin_database_v1 import DatabaseDialect +from google.cloud._helpers import UTC + +from google.cloud.spanner_v1._helpers import AtomicCounter +from google.cloud.spanner_v1.data_types import JsonObject +from google.cloud.spanner_v1.database_sessions_manager import TransactionType +from .testdata import singer_pb2 +from tests import _helpers as ot_helpers +from . import _helpers +from . 
import _sample_data +from google.cloud.spanner_v1.request_id_header import ( + REQ_RAND_PROCESS_ID, + parse_request_id, + build_request_id, +) +from tests._helpers import is_multiplexed_enabled + +SOME_DATE = datetime.date(2011, 1, 17) +SOME_TIME = datetime.datetime(1989, 1, 17, 17, 59, 12, 345612) +NANO_TIME = datetime_helpers.DatetimeWithNanoseconds(1995, 8, 31, nanosecond=987654321) +POS_INF = float("+inf") +NEG_INF = float("-inf") +(OTHER_NAN,) = struct.unpack("--Session.run_in_transaction----------| + # |---------DMLTransaction-------| + # + # |>----Transaction.commit---| + + # CreateSession should have a trace of its own, with no children + # nor being a child of any other span. + session_span = span_list[0] + test_span = span_list[2] + # assert session_span.context.trace_id != test_span.context.trace_id + for span in span_list[1:]: + if span.parent: + assert span.parent.span_id != session_span.context.span_id + + def assert_parent_and_children(parent_span, children): + for span in children: + assert span.context.trace_id == parent_span.context.trace_id + assert span.parent.span_id == parent_span.context.span_id + + # [CreateSession --> Batch] should have their own trace. + session_run_in_txn_span = span_list[3] + children_of_test_span = [session_run_in_txn_span] + assert_parent_and_children(test_span, children_of_test_span) + + dml_txn_span = span_list[4] + batch_commit_txn_span = span_list[5] + children_of_session_run_in_txn_span = [dml_txn_span, batch_commit_txn_span] + assert_parent_and_children( + session_run_in_txn_span, children_of_session_run_in_txn_span + ) + + +def test_execute_partitioned_dml( + not_postgres_emulator, sessions_database, database_dialect +): + # [START spanner_test_dml_partioned_dml_update] + sd = _sample_data + param_types = spanner_v1.param_types + + delete_statement = f"DELETE FROM {sd.TABLE} WHERE true" + + def _setup_table(txn): + txn.execute_update(delete_statement) + for insert_statement in _generate_insert_statements(): + txn.execute_update(insert_statement) + + committed = sessions_database.run_in_transaction(_setup_table) + + with sessions_database.snapshot(read_timestamp=committed) as snapshot: + before_pdml = list(snapshot.read(sd.TABLE, sd.COLUMNS, sd.ALL)) + + sd._check_rows_data(before_pdml) + + keys = ( + ["p1", "p2"] + if database_dialect == DatabaseDialect.POSTGRESQL + else ["email", "target"] + ) + placeholders = ( + ["$1", "$2"] + if database_dialect == DatabaseDialect.POSTGRESQL + else [f"@{key}" for key in keys] + ) + nonesuch = "nonesuch@example.com" + target = "phred@example.com" + update_statement = ( + f"UPDATE contacts SET email = {placeholders[0]} WHERE email = {placeholders[1]}" + ) + + row_count = sessions_database.execute_partitioned_dml( + update_statement, + params={keys[0]: nonesuch, keys[1]: target}, + param_types={keys[0]: param_types.STRING, keys[1]: param_types.STRING}, + request_options=spanner_v1.RequestOptions( + priority=spanner_v1.RequestOptions.Priority.PRIORITY_MEDIUM + ), + ) + assert row_count == 1 + + row = sd.ROW_DATA[0] + updated = [row[:3] + (nonesuch,)] + list(sd.ROW_DATA[1:]) + + with sessions_database.snapshot(read_timestamp=committed) as snapshot: + after_update = list(snapshot.read(sd.TABLE, sd.COLUMNS, sd.ALL)) + sd._check_rows_data(after_update, updated) + + row_count = sessions_database.execute_partitioned_dml(delete_statement) + assert row_count == len(sd.ROW_DATA) + + with sessions_database.snapshot(read_timestamp=committed) as snapshot: + after_delete = list(snapshot.read(sd.TABLE, sd.COLUMNS, 
sd.ALL)) + + sd._check_rows_data(after_delete, []) + # [END spanner_test_dml_partioned_dml_update] + + +def _transaction_concurrency_helper( + sessions_database, unit_of_work, pkey, database_dialect=None +): + initial_value = 123 + num_threads = 3 # conforms to equivalent Java systest. + + with sessions_database.batch() as batch: + batch.insert_or_update( + COUNTERS_TABLE, COUNTERS_COLUMNS, [[pkey, initial_value]] + ) + + # We don't want to run the threads' transactions in the current + # session, which would fail. + txn_sessions = [] + + for _ in range(num_threads): + txn_sessions.append(sessions_database) + + args = ( + (unit_of_work, pkey, database_dialect) + if database_dialect + else (unit_of_work, pkey) + ) + + threads = [ + threading.Thread(target=txn_session.run_in_transaction, args=args) + for txn_session in txn_sessions + ] + + for thread in threads: + thread.start() + + for thread in threads: + thread.join() + + with sessions_database.snapshot() as snapshot: + keyset = spanner_v1.KeySet(keys=[(pkey,)]) + rows = list(snapshot.read(COUNTERS_TABLE, COUNTERS_COLUMNS, keyset)) + assert len(rows) == 1 + _, value = rows[0] + multiplexed_enabled = is_multiplexed_enabled(TransactionType.READ_WRITE) + if multiplexed_enabled: + # Allow for partial success due to transaction aborts + assert initial_value < value <= initial_value + num_threads + else: + assert value == initial_value + num_threads + + +def _read_w_concurrent_update(transaction, pkey): + keyset = spanner_v1.KeySet(keys=[(pkey,)]) + rows = list(transaction.read(COUNTERS_TABLE, COUNTERS_COLUMNS, keyset)) + assert len(rows) == 1 + pkey, value = rows[0] + transaction.update(COUNTERS_TABLE, COUNTERS_COLUMNS, [[pkey, value + 1]]) + + +def test_transaction_read_w_concurrent_updates( + sessions_database, + # TODO: Re-enable when the Emulator returns pre-commit tokens for streaming reads. 
+ not_emulator, +): + pkey = "read_w_concurrent_updates" + _transaction_concurrency_helper(sessions_database, _read_w_concurrent_update, pkey) + + +def _query_w_concurrent_update(transaction, pkey, database_dialect): + param_types = spanner_v1.param_types + key = "p1" if database_dialect == DatabaseDialect.POSTGRESQL else "name" + placeholder = "$1" if database_dialect == DatabaseDialect.POSTGRESQL else f"@{key}" + sql = f"SELECT * FROM {COUNTERS_TABLE} WHERE name = {placeholder}" + rows = list( + transaction.execute_sql( + sql, params={key: pkey}, param_types={key: param_types.STRING} + ) + ) + assert len(rows) == 1 + pkey, value = rows[0] + transaction.update(COUNTERS_TABLE, COUNTERS_COLUMNS, [[pkey, value + 1]]) + + +def test_transaction_query_w_concurrent_updates(sessions_database, database_dialect): + pkey = "query_w_concurrent_updates" + _transaction_concurrency_helper( + sessions_database, _query_w_concurrent_update, pkey, database_dialect + ) + + +def test_transaction_read_w_abort(not_emulator, sessions_database): + sd = _sample_data + trigger = _ReadAbortTrigger() + + with sessions_database.batch() as batch: + batch.delete(COUNTERS_TABLE, sd.ALL) + batch.insert( + COUNTERS_TABLE, COUNTERS_COLUMNS, [[trigger.KEY1, 0], [trigger.KEY2, 0]] + ) + + provoker = threading.Thread(target=trigger.provoke_abort, args=(sessions_database,)) + handler = threading.Thread(target=trigger.handle_abort, args=(sessions_database,)) + + provoker.start() + trigger.provoker_started.wait() + + handler.start() + trigger.handler_done.wait() + + provoker.join() + handler.join() + with sessions_database.snapshot() as snapshot: + rows = list(snapshot.read(COUNTERS_TABLE, COUNTERS_COLUMNS, sd.ALL)) + sd._check_row_data(rows, expected=[[trigger.KEY1, 1], [trigger.KEY2, 1]]) + + +def _row_data(max_index): + for index in range(max_index): + yield ( + index, + f"First{index:09}", + f"Last{max_index - index:09}", + f"test-{index:09}@example.com", + ) + + +def _set_up_table(database, row_count): + sd = _sample_data + + def _unit_of_work(transaction): + transaction.delete(sd.TABLE, sd.ALL) + transaction.insert(sd.TABLE, sd.COLUMNS, _row_data(row_count)) + + committed = database.run_in_transaction(_unit_of_work) + + return committed + + +def _set_up_proto_table(database): + sd = _sample_data + + def _unit_of_work(transaction): + transaction.delete(sd.SINGERS_PROTO_TABLE, sd.ALL) + transaction.insert( + sd.SINGERS_PROTO_TABLE, sd.SINGERS_PROTO_COLUMNS, sd.SINGERS_PROTO_ROW_DATA + ) + + committed = database.run_in_transaction(_unit_of_work) + + return committed + + +def test_read_with_single_keys_index(sessions_database): + # [START spanner_test_single_key_index_read] + sd = _sample_data + row_count = 10 + columns = sd.COLUMNS[1], sd.COLUMNS[2] + _set_up_table(sessions_database, row_count) + + expected = [[row[1], row[2]] for row in _row_data(row_count)] + row = 5 + keyset = [[expected[row][0], expected[row][1]]] + with sessions_database.snapshot() as snapshot: + results_iter = snapshot.read( + sd.TABLE, columns, spanner_v1.KeySet(keys=keyset), index="name" + ) + rows = list(results_iter) + assert rows == [expected[row]] + + # [END spanner_test_single_key_index_read] + + +def test_empty_read_with_single_keys_index(sessions_database): + sd = _sample_data + row_count = 10 + columns = sd.COLUMNS[1], sd.COLUMNS[2] + _set_up_table(sessions_database, row_count) + keyset = [["Non", "Existent"]] + + with sessions_database.snapshot() as snapshot: + results_iter = snapshot.read( + sd.TABLE, columns, spanner_v1.KeySet(keys=keyset), 
index="name" + ) + rows = list(results_iter) + assert rows == [] + + +def test_read_with_multiple_keys_index(sessions_database): + sd = _sample_data + row_count = 10 + columns = sd.COLUMNS[1], sd.COLUMNS[2] + _set_up_table(sessions_database, row_count) + expected = [[row[1], row[2]] for row in _row_data(row_count)] + + with sessions_database.snapshot() as snapshot: + rows = list( + snapshot.read( + sd.TABLE, + columns, + spanner_v1.KeySet(keys=expected), + index="name", + ) + ) + assert rows == expected + + +def test_snapshot_read_w_various_staleness(sessions_database): + sd = _sample_data + row_count = 400 + committed = _set_up_table(sessions_database, row_count) + all_data_rows = list(_row_data(row_count)) + + before_reads = datetime.datetime.utcnow().replace(tzinfo=UTC) + + # Test w/ read timestamp + with sessions_database.snapshot(read_timestamp=committed) as read_tx: + rows = list(read_tx.read(sd.TABLE, sd.COLUMNS, sd.ALL)) + sd._check_row_data(rows, all_data_rows) + + # Test w/ min read timestamp + with sessions_database.snapshot(min_read_timestamp=committed) as min_read_ts: + rows = list(min_read_ts.read(sd.TABLE, sd.COLUMNS, sd.ALL)) + sd._check_row_data(rows, all_data_rows) + + staleness = datetime.datetime.utcnow().replace(tzinfo=UTC) - before_reads + + # Test w/ max staleness + with sessions_database.snapshot(max_staleness=staleness) as max_staleness: + rows = list(max_staleness.read(sd.TABLE, sd.COLUMNS, sd.ALL)) + sd._check_row_data(rows, all_data_rows) + + # Test w/ exact staleness + with sessions_database.snapshot(exact_staleness=staleness) as exact_staleness: + rows = list(exact_staleness.read(sd.TABLE, sd.COLUMNS, sd.ALL)) + sd._check_row_data(rows, all_data_rows) + + # Test w/ strong + with sessions_database.snapshot() as strong: + rows = list(strong.read(sd.TABLE, sd.COLUMNS, sd.ALL)) + sd._check_row_data(rows, all_data_rows) + + +def test_multiuse_snapshot_read_isolation_strong(sessions_database): + sd = _sample_data + row_count = 40 + _set_up_table(sessions_database, row_count) + all_data_rows = list(_row_data(row_count)) + with sessions_database.snapshot(multi_use=True) as strong: + before = list(strong.read(sd.TABLE, sd.COLUMNS, sd.ALL)) + sd._check_row_data(before, all_data_rows) + + with sessions_database.batch() as batch: + batch.delete(sd.TABLE, sd.ALL) + + after = list(strong.read(sd.TABLE, sd.COLUMNS, sd.ALL)) + sd._check_row_data(after, all_data_rows) + + +def test_multiuse_snapshot_read_isolation_read_timestamp(sessions_database): + sd = _sample_data + row_count = 40 + committed = _set_up_table(sessions_database, row_count) + all_data_rows = list(_row_data(row_count)) + + with sessions_database.snapshot( + read_timestamp=committed, multi_use=True + ) as read_ts: + before = list(read_ts.read(sd.TABLE, sd.COLUMNS, sd.ALL)) + sd._check_row_data(before, all_data_rows) + + with sessions_database.batch() as batch: + batch.delete(sd.TABLE, sd.ALL) + + after = list(read_ts.read(sd.TABLE, sd.COLUMNS, sd.ALL)) + sd._check_row_data(after, all_data_rows) + + +def test_multiuse_snapshot_read_isolation_exact_staleness(sessions_database): + sd = _sample_data + row_count = 40 + + _set_up_table(sessions_database, row_count) + all_data_rows = list(_row_data(row_count)) + + time.sleep(1) + delta = datetime.timedelta(microseconds=1000) + + with sessions_database.snapshot(exact_staleness=delta, multi_use=True) as exact: + before = list(exact.read(sd.TABLE, sd.COLUMNS, sd.ALL)) + sd._check_row_data(before, all_data_rows) + + with sessions_database.batch() as batch: + 
batch.delete(sd.TABLE, sd.ALL) + + after = list(exact.read(sd.TABLE, sd.COLUMNS, sd.ALL)) + sd._check_row_data(after, all_data_rows) + + +def test_read_w_index( + shared_instance, + database_operation_timeout, + databases_to_delete, + database_dialect, + proto_descriptor_file, +): + # Indexed reads cannot return non-indexed columns + sd = _sample_data + row_count = 2000 + my_columns = sd.COLUMNS[0], sd.COLUMNS[2] + + # Create an alternate database w/ index. + extra_ddl = ["CREATE INDEX contacts_by_last_name ON contacts(last_name)"] + pool = spanner_v1.BurstyPool(labels={"testcase": "read_w_index"}) + + if database_dialect == DatabaseDialect.POSTGRESQL: + temp_db = shared_instance.database( + _helpers.unique_id("test_read", separator="_"), + pool=pool, + database_dialect=database_dialect, + ) + operation = temp_db.create() + operation.result(database_operation_timeout) + + operation = temp_db.update_ddl( + ddl_statements=_helpers.DDL_STATEMENTS + extra_ddl, + ) + operation.result(database_operation_timeout) + + else: + temp_db = shared_instance.database( + _helpers.unique_id("test_read", separator="_"), + ddl_statements=_helpers.DDL_STATEMENTS + + extra_ddl + + _helpers.PROTO_COLUMNS_DDL_STATEMENTS, + pool=pool, + database_dialect=database_dialect, + proto_descriptors=proto_descriptor_file, + ) + operation = temp_db.create() + operation.result(database_operation_timeout) # raises on failure / timeout. + + databases_to_delete.append(temp_db) + committed = _set_up_table(temp_db, row_count) + + with temp_db.snapshot(read_timestamp=committed) as snapshot: + rows = list( + snapshot.read(sd.TABLE, my_columns, sd.ALL, index="contacts_by_last_name") + ) + + expected = list(reversed([(row[0], row[2]) for row in _row_data(row_count)])) + sd._check_rows_data(rows, expected) + + # Test indexes on proto column types + if database_dialect == DatabaseDialect.GOOGLE_STANDARD_SQL: + # Indexed reads cannot return non-indexed columns + my_columns = ( + sd.SINGERS_PROTO_COLUMNS[0], + sd.SINGERS_PROTO_COLUMNS[1], + sd.SINGERS_PROTO_COLUMNS[4], + ) + committed = _set_up_proto_table(temp_db) + with temp_db.snapshot(read_timestamp=committed) as snapshot: + rows = list( + snapshot.read( + sd.SINGERS_PROTO_TABLE, + my_columns, + spanner_v1.KeySet(keys=[[singer_pb2.Genre.ROCK]]), + index="SingerByGenre", + ) + ) + row = sd.SINGERS_PROTO_ROW_DATA[0] + expected = list([(row[0], row[1], row[4])]) + sd._check_rows_data(rows, expected) + + +def test_read_w_single_key(sessions_database): + # [START spanner_test_single_key_read] + sd = _sample_data + row_count = 40 + committed = _set_up_table(sessions_database, row_count) + + with sessions_database.snapshot(read_timestamp=committed) as snapshot: + rows = list(snapshot.read(sd.TABLE, sd.COLUMNS, spanner_v1.KeySet(keys=[(0,)]))) + + all_data_rows = list(_row_data(row_count)) + expected = [all_data_rows[0]] + sd._check_row_data(rows, expected) + # [END spanner_test_single_key_read] + + +def test_empty_read(sessions_database): + # [START spanner_test_empty_read] + sd = _sample_data + row_count = 40 + _set_up_table(sessions_database, row_count) + with sessions_database.snapshot() as snapshot: + rows = list( + snapshot.read(sd.TABLE, sd.COLUMNS, spanner_v1.KeySet(keys=[(40,)])) + ) + sd._check_row_data(rows, []) + # [END spanner_test_empty_read] + + +def test_read_w_multiple_keys(sessions_database): + sd = _sample_data + row_count = 40 + indices = [0, 5, 17] + committed = _set_up_table(sessions_database, row_count) + + with sessions_database.snapshot(read_timestamp=committed) as 
snapshot: + rows = list( + snapshot.read( + sd.TABLE, + sd.COLUMNS, + spanner_v1.KeySet(keys=[(index,) for index in indices]), + ) + ) + + all_data_rows = list(_row_data(row_count)) + expected = [row for row in all_data_rows if row[0] in indices] + sd._check_row_data(rows, expected) + + +def test_read_w_limit(sessions_database): + sd = _sample_data + row_count = 3000 + limit = 100 + committed = _set_up_table(sessions_database, row_count) + + with sessions_database.snapshot(read_timestamp=committed) as snapshot: + rows = list(snapshot.read(sd.TABLE, sd.COLUMNS, sd.ALL, limit=limit)) + + all_data_rows = list(_row_data(row_count)) + expected = all_data_rows[:limit] + sd._check_row_data(rows, expected) + + +def test_read_w_ranges(sessions_database): + sd = _sample_data + row_count = 3000 + start = 1000 + end = 2000 + committed = _set_up_table(sessions_database, row_count) + with sessions_database.snapshot( + read_timestamp=committed, + multi_use=True, + ) as snapshot: + all_data_rows = list(_row_data(row_count)) + + single_key = spanner_v1.KeyRange(start_closed=[start], end_open=[start + 1]) + keyset = spanner_v1.KeySet(ranges=(single_key,)) + rows = list(snapshot.read(sd.TABLE, sd.COLUMNS, keyset)) + expected = all_data_rows[start : start + 1] + sd._check_rows_data(rows, expected) + + closed_closed = spanner_v1.KeyRange(start_closed=[start], end_closed=[end]) + keyset = spanner_v1.KeySet(ranges=(closed_closed,)) + rows = list(snapshot.read(sd.TABLE, sd.COLUMNS, keyset)) + expected = all_data_rows[start : end + 1] + sd._check_row_data(rows, expected) + + closed_open = spanner_v1.KeyRange(start_closed=[start], end_open=[end]) + keyset = spanner_v1.KeySet(ranges=(closed_open,)) + rows = list(snapshot.read(sd.TABLE, sd.COLUMNS, keyset)) + expected = all_data_rows[start:end] + sd._check_row_data(rows, expected) + + open_open = spanner_v1.KeyRange(start_open=[start], end_open=[end]) + keyset = spanner_v1.KeySet(ranges=(open_open,)) + rows = list(snapshot.read(sd.TABLE, sd.COLUMNS, keyset)) + expected = all_data_rows[start + 1 : end] + sd._check_row_data(rows, expected) + + open_closed = spanner_v1.KeyRange(start_open=[start], end_closed=[end]) + keyset = spanner_v1.KeySet(ranges=(open_closed,)) + rows = list(snapshot.read(sd.TABLE, sd.COLUMNS, keyset)) + expected = all_data_rows[start + 1 : end + 1] + sd._check_row_data(rows, expected) + + +def test_read_partial_range_until_end(sessions_database): + sd = _sample_data + row_count = 3000 + start = 1000 + committed = _set_up_table(sessions_database, row_count) + with sessions_database.snapshot( + read_timestamp=committed, + multi_use=True, + ) as snapshot: + all_data_rows = list(_row_data(row_count)) + + expected_map = { + ("start_closed", "end_closed"): all_data_rows[start:], + ("start_closed", "end_open"): [], + ("start_open", "end_closed"): all_data_rows[start + 1 :], + ("start_open", "end_open"): [], + } + + for start_arg in ("start_closed", "start_open"): + for end_arg in ("end_closed", "end_open"): + range_kwargs = {start_arg: [start], end_arg: []} + keyset = spanner_v1.KeySet( + ranges=(spanner_v1.KeyRange(**range_kwargs),) + ) + + rows = list(snapshot.read(sd.TABLE, sd.COLUMNS, keyset)) + expected = expected_map[(start_arg, end_arg)] + sd._check_row_data(rows, expected) + + +def test_read_partial_range_from_beginning(sessions_database): + sd = _sample_data + row_count = 3000 + end = 2000 + committed = _set_up_table(sessions_database, row_count) + + all_data_rows = list(_row_data(row_count)) + + expected_map = { + ("start_closed", 
"end_closed"): all_data_rows[: end + 1], + ("start_closed", "end_open"): all_data_rows[:end], + ("start_open", "end_closed"): [], + ("start_open", "end_open"): [], + } + + for start_arg in ("start_closed", "start_open"): + for end_arg in ("end_closed", "end_open"): + range_kwargs = {start_arg: [], end_arg: [end]} + keyset = spanner_v1.KeySet(ranges=(spanner_v1.KeyRange(**range_kwargs),)) + + with sessions_database.snapshot( + read_timestamp=committed, + multi_use=True, + ) as snapshot: + rows = list(snapshot.read(sd.TABLE, sd.COLUMNS, keyset)) + expected = expected_map[(start_arg, end_arg)] + sd._check_row_data(rows, expected) + + +def test_read_with_range_keys_index_single_key(sessions_database): + sd = _sample_data + row_count = 10 + columns = sd.COLUMNS[1], sd.COLUMNS[2] + data = [[row[1], row[2]] for row in _row_data(row_count)] + _set_up_table(sessions_database, row_count) + start = 3 + krange = spanner_v1.KeyRange(start_closed=data[start], end_open=data[start + 1]) + keyset = spanner_v1.KeySet(ranges=(krange,)) + + with sessions_database.snapshot() as snapshot: + rows = list(snapshot.read(sd.TABLE, columns, keyset, index="name")) + assert rows == data[start : start + 1] + + +def test_read_with_range_keys_index_closed_closed(sessions_database): + sd = _sample_data + row_count = 10 + columns = sd.COLUMNS[1], sd.COLUMNS[2] + data = [[row[1], row[2]] for row in _row_data(row_count)] + _set_up_table(sessions_database, row_count) + start, end = 3, 7 + krange = spanner_v1.KeyRange(start_closed=data[start], end_closed=data[end]) + keyset = spanner_v1.KeySet(ranges=(krange,)) + + with sessions_database.snapshot() as snapshot: + rows = list(snapshot.read(sd.TABLE, columns, keyset, index="name")) + assert rows == data[start : end + 1] + + +def test_read_with_range_keys_index_closed_open(sessions_database): + sd = _sample_data + row_count = 10 + columns = sd.COLUMNS[1], sd.COLUMNS[2] + data = [[row[1], row[2]] for row in _row_data(row_count)] + _set_up_table(sessions_database, row_count) + start, end = 3, 7 + krange = spanner_v1.KeyRange(start_closed=data[start], end_open=data[end]) + keyset = spanner_v1.KeySet(ranges=(krange,)) + + with sessions_database.snapshot() as snapshot: + rows = list(snapshot.read(sd.TABLE, columns, keyset, index="name")) + assert rows == data[start:end] + + +def test_read_with_range_keys_index_open_closed(sessions_database): + sd = _sample_data + row_count = 10 + columns = sd.COLUMNS[1], sd.COLUMNS[2] + data = [[row[1], row[2]] for row in _row_data(row_count)] + _set_up_table(sessions_database, row_count) + start, end = 3, 7 + krange = spanner_v1.KeyRange(start_open=data[start], end_closed=data[end]) + keyset = spanner_v1.KeySet(ranges=(krange,)) + + with sessions_database.snapshot() as snapshot: + rows = list(snapshot.read(sd.TABLE, columns, keyset, index="name")) + assert rows == data[start + 1 : end + 1] + + +def test_read_with_range_keys_index_open_open(sessions_database): + sd = _sample_data + row_count = 10 + columns = sd.COLUMNS[1], sd.COLUMNS[2] + data = [[row[1], row[2]] for row in _row_data(row_count)] + _set_up_table(sessions_database, row_count) + start, end = 3, 7 + krange = spanner_v1.KeyRange(start_open=data[start], end_open=data[end]) + keyset = spanner_v1.KeySet(ranges=(krange,)) + + with sessions_database.snapshot() as snapshot: + rows = list(snapshot.read(sd.TABLE, columns, keyset, index="name")) + assert rows == data[start + 1 : end] + + +def test_read_with_range_keys_index_limit_closed_closed(sessions_database): + sd = _sample_data + row_count = 10 
+ columns = sd.COLUMNS[1], sd.COLUMNS[2] + data = [[row[1], row[2]] for row in _row_data(row_count)] + _set_up_table(sessions_database, row_count) + start, end, limit = 3, 7, 2 + krange = spanner_v1.KeyRange(start_closed=data[start], end_closed=data[end]) + keyset = spanner_v1.KeySet(ranges=(krange,)) + with sessions_database.snapshot() as snapshot: + rows = list(snapshot.read(sd.TABLE, columns, keyset, index="name", limit=limit)) + expected = data[start : end + 1] + assert rows == expected[:limit] + + +def test_read_with_range_keys_index_limit_closed_open(sessions_database): + sd = _sample_data + row_count = 10 + columns = sd.COLUMNS[1], sd.COLUMNS[2] + data = [[row[1], row[2]] for row in _row_data(row_count)] + _set_up_table(sessions_database, row_count) + start, end, limit = 3, 7, 2 + krange = spanner_v1.KeyRange(start_closed=data[start], end_open=data[end]) + keyset = spanner_v1.KeySet(ranges=(krange,)) + with sessions_database.snapshot() as snapshot: + rows = list(snapshot.read(sd.TABLE, columns, keyset, index="name", limit=limit)) + expected = data[start:end] + assert rows == expected[:limit] + + +def test_read_with_range_keys_index_limit_open_closed(sessions_database): + sd = _sample_data + row_count = 10 + columns = sd.COLUMNS[1], sd.COLUMNS[2] + data = [[row[1], row[2]] for row in _row_data(row_count)] + _set_up_table(sessions_database, row_count) + start, end, limit = 3, 7, 2 + krange = spanner_v1.KeyRange(start_open=data[start], end_closed=data[end]) + keyset = spanner_v1.KeySet(ranges=(krange,)) + with sessions_database.snapshot() as snapshot: + rows = list(snapshot.read(sd.TABLE, columns, keyset, index="name", limit=limit)) + expected = data[start + 1 : end + 1] + assert rows == expected[:limit] + + +def test_read_with_range_keys_index_limit_open_open(sessions_database): + sd = _sample_data + row_count = 10 + columns = sd.COLUMNS[1], sd.COLUMNS[2] + data = [[row[1], row[2]] for row in _row_data(row_count)] + _set_up_table(sessions_database, row_count) + start, end, limit = 3, 7, 2 + krange = spanner_v1.KeyRange(start_open=data[start], end_open=data[end]) + keyset = spanner_v1.KeySet(ranges=(krange,)) + with sessions_database.snapshot() as snapshot: + rows = list(snapshot.read(sd.TABLE, columns, keyset, index="name", limit=limit)) + expected = data[start + 1 : end] + assert rows == expected[:limit] + + +def test_read_with_range_keys_and_index_closed_closed(sessions_database): + sd = _sample_data + row_count = 10 + columns = sd.COLUMNS[1], sd.COLUMNS[2] + + _set_up_table(sessions_database, row_count) + data = [[row[1], row[2]] for row in _row_data(row_count)] + keyrow, start, end = 1, 3, 7 + closed_closed = spanner_v1.KeyRange(start_closed=data[start], end_closed=data[end]) + keys = [data[keyrow]] + keyset = spanner_v1.KeySet(keys=keys, ranges=(closed_closed,)) + with sessions_database.snapshot() as snapshot: + rows = list(snapshot.read(sd.TABLE, columns, keyset, index="name")) + expected = [data[keyrow]] + data[start : end + 1] + assert rows == expected + + +def test_read_with_range_keys_and_index_closed_open(sessions_database): + sd = _sample_data + row_count = 10 + columns = sd.COLUMNS[1], sd.COLUMNS[2] + _set_up_table(sessions_database, row_count) + data = [[row[1], row[2]] for row in _row_data(row_count)] + keyrow, start, end = 1, 3, 7 + closed_open = spanner_v1.KeyRange(start_closed=data[start], end_open=data[end]) + keys = [data[keyrow]] + keyset = spanner_v1.KeySet(keys=keys, ranges=(closed_open,)) + with sessions_database.snapshot() as snapshot: + rows = 
list(snapshot.read(sd.TABLE, columns, keyset, index="name")) + expected = [data[keyrow]] + data[start:end] + assert rows == expected + + +def test_read_with_range_keys_and_index_open_closed(sessions_database): + sd = _sample_data + row_count = 10 + columns = sd.COLUMNS[1], sd.COLUMNS[2] + _set_up_table(sessions_database, row_count) + data = [[row[1], row[2]] for row in _row_data(row_count)] + keyrow, start, end = 1, 3, 7 + open_closed = spanner_v1.KeyRange(start_open=data[start], end_closed=data[end]) + keys = [data[keyrow]] + keyset = spanner_v1.KeySet(keys=keys, ranges=(open_closed,)) + with sessions_database.snapshot() as snapshot: + rows = list(snapshot.read(sd.TABLE, columns, keyset, index="name")) + expected = [data[keyrow]] + data[start + 1 : end + 1] + assert rows == expected + + +def test_read_with_range_keys_and_index_open_open(sessions_database): + sd = _sample_data + row_count = 10 + columns = sd.COLUMNS[1], sd.COLUMNS[2] + _set_up_table(sessions_database, row_count) + data = [[row[1], row[2]] for row in _row_data(row_count)] + keyrow, start, end = 1, 3, 7 + open_open = spanner_v1.KeyRange(start_open=data[start], end_open=data[end]) + keys = [data[keyrow]] + keyset = spanner_v1.KeySet(keys=keys, ranges=(open_open,)) + with sessions_database.snapshot() as snapshot: + rows = list(snapshot.read(sd.TABLE, columns, keyset, index="name")) + expected = [data[keyrow]] + data[start + 1 : end] + assert rows == expected + + +def test_partition_read_w_index(sessions_database, not_emulator): + sd = _sample_data + row_count = 10 + columns = sd.COLUMNS[1], sd.COLUMNS[2] + committed = _set_up_table(sessions_database, row_count) + + expected = [[row[1], row[2]] for row in _row_data(row_count)] + union = [] + + batch_txn = sessions_database.batch_snapshot(read_timestamp=committed) + batches = batch_txn.generate_read_batches( + sd.TABLE, + columns, + spanner_v1.KeySet(all_=True), + index="name", + data_boost_enabled=True, + ) + for batch in batches: + p_results_iter = batch_txn.process(batch) + union.extend(list(p_results_iter)) + + assert union == expected + batch_txn.close() + + +def test_execute_sql_w_manual_consume(sessions_database): + sd = _sample_data + row_count = 3000 + committed = _set_up_table(sessions_database, row_count) + + for lazy_decode in [False, True]: + with sessions_database.snapshot(read_timestamp=committed) as snapshot: + streamed = snapshot.execute_sql(sd.SQL, lazy_decode=lazy_decode) + + keyset = spanner_v1.KeySet(all_=True) + + with sessions_database.snapshot(read_timestamp=committed) as snapshot: + rows = list( + snapshot.read(sd.TABLE, sd.COLUMNS, keyset, lazy_decode=lazy_decode) + ) + + assert list(streamed) == rows + assert streamed._current_row == [] + assert streamed._pending_chunk is None + + +def test_execute_sql_w_to_dict_list(sessions_database): + sd = _sample_data + row_count = 40 + _set_up_table(sessions_database, row_count) + + with sessions_database.snapshot() as snapshot: + rows = snapshot.execute_sql(sd.SQL).to_dict_list() + all_data_rows = list(_row_data(row_count)) + row_data = [list(row.values()) for row in rows] + sd._check_row_data(row_data, all_data_rows) + assert all(set(row.keys()) == set(sd.COLUMNS) for row in rows) + + +def _check_sql_results( + database, + sql, + params, + param_types=None, + expected=None, + order=True, + recurse_into_lists=True, + column_info=None, +): + if order and "ORDER" not in sql: + sql += " ORDER BY pkey" + + for lazy_decode in [False, True]: + with database.snapshot() as snapshot: + iterator = snapshot.execute_sql( + 
sql, + params=params, + param_types=param_types, + column_info=column_info, + lazy_decode=lazy_decode, + ) + rows = list(iterator) + if lazy_decode: + for index, row in enumerate(rows): + rows[index] = iterator.decode_row(row) + + _sample_data._check_rows_data( + rows, expected=expected, recurse_into_lists=recurse_into_lists + ) + + +def test_multiuse_snapshot_execute_sql_isolation_strong(sessions_database): + sd = _sample_data + row_count = 40 + _set_up_table(sessions_database, row_count) + all_data_rows = list(_row_data(row_count)) + + with sessions_database.snapshot(multi_use=True) as strong: + before = list(strong.execute_sql(sd.SQL)) + sd._check_row_data(before, all_data_rows) + + with sessions_database.batch() as batch: + batch.delete(sd.TABLE, sd.ALL) + + after = list(strong.execute_sql(sd.SQL)) + sd._check_row_data(after, all_data_rows) + + +def test_execute_sql_returning_array_of_struct(sessions_database, not_postgres): + sql = ( + "SELECT ARRAY(SELECT AS STRUCT C1, C2 " + "FROM (SELECT 'a' AS C1, 1 AS C2 " + "UNION ALL SELECT 'b' AS C1, 2 AS C2) " + "ORDER BY C1 ASC)" + ) + _check_sql_results( + sessions_database, + sql=sql, + params=None, + param_types=None, + expected=[[[["a", 1], ["b", 2]]]], + ) + + +def test_execute_sql_returning_empty_array_of_struct(sessions_database, not_postgres): + sql = ( + "SELECT ARRAY(SELECT AS STRUCT C1, C2 " + "FROM (SELECT 2 AS C1) X " + "JOIN (SELECT 1 AS C2) Y " + "ON X.C1 = Y.C2 " + "ORDER BY C1 ASC)" + ) + sessions_database.snapshot(multi_use=True) + + _check_sql_results( + sessions_database, sql=sql, params=None, param_types=None, expected=[[[]]] + ) + + +def test_invalid_type(sessions_database): + sd = _sample_data + table = "counters" + columns = ("name", "value") + + valid_input = (("", 0),) + with sessions_database.batch() as batch: + batch.delete(table, sd.ALL) + batch.insert(table, columns, valid_input) + + invalid_input = ((0, ""),) + with pytest.raises(exceptions.FailedPrecondition): + with sessions_database.batch() as batch: + batch.delete(table, sd.ALL) + batch.insert(table, columns, invalid_input) + + +def test_execute_sql_select_1(sessions_database): + sessions_database.snapshot(multi_use=True) + + # Hello, world query + _check_sql_results( + sessions_database, + sql="SELECT 1", + params=None, + param_types=None, + expected=[(1,)], + order=False, + ) + + +def _bind_test_helper( + database, + database_dialect, + param_type, + single_value, + array_value, + expected_array_value=None, + recurse_into_lists=True, + column_info=None, + expected_single_value=None, +): + database.snapshot(multi_use=True) + + key = "p1" if database_dialect == DatabaseDialect.POSTGRESQL else "v" + placeholder = "$1" if database_dialect == DatabaseDialect.POSTGRESQL else f"@{key}" + + if expected_single_value is None: + expected_single_value = single_value + + # Bind a non-null + _check_sql_results( + database, + sql=f"SELECT {placeholder} as column", + params={key: single_value}, + param_types={key: param_type}, + expected=[(expected_single_value,)], + order=False, + recurse_into_lists=recurse_into_lists, + column_info=column_info, + ) + + # Bind a null + _check_sql_results( + database, + sql=f"SELECT {placeholder} as column", + params={key: None}, + param_types={key: param_type}, + expected=[(None,)], + order=False, + recurse_into_lists=recurse_into_lists, + column_info=column_info, + ) + + # Bind an array of + array_element_type = param_type + array_type = spanner_v1.Type( + code=spanner_v1.TypeCode.ARRAY, array_element_type=array_element_type + ) + + if 
expected_array_value is None: + expected_array_value = array_value + + _check_sql_results( + database, + sql=f"SELECT {placeholder} as column", + params={key: array_value}, + param_types={key: array_type}, + expected=[(expected_array_value,)], + order=False, + recurse_into_lists=recurse_into_lists, + column_info=column_info, + ) + + # Bind an empty array of + _check_sql_results( + database, + sql=f"SELECT {placeholder} as column", + params={key: []}, + param_types={key: array_type}, + expected=[([],)], + order=False, + recurse_into_lists=recurse_into_lists, + column_info=column_info, + ) + + # Bind a null array of + _check_sql_results( + database, + sql=f"SELECT {placeholder} as column", + params={key: None}, + param_types={key: array_type}, + expected=[(None,)], + order=False, + recurse_into_lists=recurse_into_lists, + column_info=column_info, + ) + + +def test_execute_sql_w_string_bindings(sessions_database, database_dialect): + _bind_test_helper( + sessions_database, + database_dialect, + spanner_v1.param_types.STRING, + "Phred", + ["Phred", "Bharney"], + ) + + +def test_execute_sql_w_bool_bindings(sessions_database, database_dialect): + _bind_test_helper( + sessions_database, + database_dialect, + spanner_v1.param_types.BOOL, + True, + [True, False, True], + ) + + +def test_execute_sql_w_int64_bindings(sessions_database, database_dialect): + _bind_test_helper( + sessions_database, + database_dialect, + spanner_v1.param_types.INT64, + 42, + [123, 456, 789], + ) + + +def test_execute_sql_w_float64_bindings(sessions_database, database_dialect): + _bind_test_helper( + sessions_database, + database_dialect, + spanner_v1.param_types.FLOAT64, + 42.3, + [12.3, 456.0, 7.89], + ) + + +def test_execute_sql_w_float_bindings_transfinite(sessions_database, database_dialect): + key = "p1" if database_dialect == DatabaseDialect.POSTGRESQL else "neg_inf" + placeholder = "$1" if database_dialect == DatabaseDialect.POSTGRESQL else f"@{key}" + + # Find -inf + _check_sql_results( + sessions_database, + sql=f"SELECT {placeholder}", + params={key: NEG_INF}, + param_types={key: spanner_v1.param_types.FLOAT64}, + expected=[(NEG_INF,)], + order=False, + ) + + key = "p1" if database_dialect == DatabaseDialect.POSTGRESQL else "pos_inf" + placeholder = "$1" if database_dialect == DatabaseDialect.POSTGRESQL else f"@{key}" + # Find +inf + _check_sql_results( + sessions_database, + sql=f"SELECT {placeholder}", + params={key: POS_INF}, + param_types={key: spanner_v1.param_types.FLOAT64}, + expected=[(POS_INF,)], + order=False, + ) + + +def test_execute_sql_w_float32_bindings(sessions_database, database_dialect): + pytest.skip("float32 is not yet supported in production.") + _bind_test_helper( + sessions_database, + database_dialect, + spanner_v1.param_types.FLOAT32, + 42.3, + [12.3, 456.0, 7.89], + ) + + +def test_execute_sql_w_float32_bindings_transfinite( + sessions_database, database_dialect +): + pytest.skip("float32 is not yet supported in production.") + key = "p1" if database_dialect == DatabaseDialect.POSTGRESQL else "neg_inf" + placeholder = "$1" if database_dialect == DatabaseDialect.POSTGRESQL else f"@{key}" + + # Find -inf + _check_sql_results( + sessions_database, + sql=f"SELECT {placeholder}", + params={key: NEG_INF}, + param_types={key: spanner_v1.param_types.FLOAT32}, + expected=[(NEG_INF,)], + order=False, + ) + + key = "p1" if database_dialect == DatabaseDialect.POSTGRESQL else "pos_inf" + placeholder = "$1" if database_dialect == DatabaseDialect.POSTGRESQL else f"@{key}" + # Find +inf + 
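+    # (FLOAT32 mirror of the FLOAT64 transfinite check above; the whole test is skipped until FLOAT32 is supported.)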
_check_sql_results( + sessions_database, + sql=f"SELECT {placeholder}", + params={key: POS_INF}, + param_types={key: spanner_v1.param_types.FLOAT32}, + expected=[(POS_INF,)], + order=False, + ) + + +def test_execute_sql_w_bytes_bindings(sessions_database, database_dialect): + _bind_test_helper( + sessions_database, + database_dialect, + spanner_v1.param_types.BYTES, + b"DEADBEEF", + [b"FACEDACE", b"DEADBEEF"], + ) + + +def test_execute_sql_w_timestamp_bindings(sessions_database, database_dialect): + timestamp_1 = datetime_helpers.DatetimeWithNanoseconds( + 1989, 1, 17, 17, 59, 12, nanosecond=345612789 + ) + + timestamp_2 = datetime_helpers.DatetimeWithNanoseconds( + 1989, 1, 17, 17, 59, 13, nanosecond=456127893 + ) + + timestamps = [timestamp_1, timestamp_2] + + # In round-trip, timestamps acquire a timezone value. + expected_timestamps = [timestamp.replace(tzinfo=UTC) for timestamp in timestamps] + + _bind_test_helper( + sessions_database, + database_dialect, + spanner_v1.param_types.TIMESTAMP, + timestamp_1, + timestamps, + expected_timestamps, + recurse_into_lists=False, + ) + + +def test_execute_sql_w_date_bindings(sessions_database, not_postgres, database_dialect): + dates = [SOME_DATE, SOME_DATE + datetime.timedelta(days=1)] + _bind_test_helper( + sessions_database, + database_dialect, + spanner_v1.param_types.DATE, + SOME_DATE, + dates, + ) + + +def test_execute_sql_w_numeric_bindings( + not_emulator, sessions_database, database_dialect +): + if database_dialect == DatabaseDialect.POSTGRESQL: + _bind_test_helper( + sessions_database, + database_dialect, + spanner_v1.param_types.PG_NUMERIC, + NUMERIC_1, + [NUMERIC_1, NUMERIC_2], + ) + else: + _bind_test_helper( + sessions_database, + database_dialect, + spanner_v1.param_types.NUMERIC, + NUMERIC_1, + [NUMERIC_1, NUMERIC_2], + ) + + +def test_execute_sql_w_json_bindings( + not_emulator, not_postgres, sessions_database, database_dialect +): + _bind_test_helper( + sessions_database, + database_dialect, + spanner_v1.param_types.JSON, + JSON_1, + [JSON_1, JSON_2], + ) + + +def test_execute_sql_w_jsonb_bindings( + not_google_standard_sql, sessions_database, database_dialect +): + _bind_test_helper( + sessions_database, + database_dialect, + spanner_v1.param_types.PG_JSONB, + JSON_1, + [JSON_1, JSON_2], + ) + + +def test_execute_sql_w_oid_bindings( + not_google_standard_sql, sessions_database, database_dialect +): + _bind_test_helper( + sessions_database, + database_dialect, + spanner_v1.param_types.PG_OID, + 123, + [123, 456], + ) + + +def test_execute_sql_w_query_param_struct(sessions_database, not_postgres): + name = "Phred" + count = 123 + size = 23.456 + height = 188.0 + weight = 97.6 + param_types = spanner_v1.param_types + + record_type = param_types.Struct( + [ + param_types.StructField("name", param_types.STRING), + param_types.StructField("count", param_types.INT64), + param_types.StructField("size", param_types.FLOAT64), + param_types.StructField( + "nested", + param_types.Struct( + [ + param_types.StructField("height", param_types.FLOAT64), + param_types.StructField("weight", param_types.FLOAT64), + ] + ), + ), + ] + ) + + # Query with null struct, explicit type + _check_sql_results( + sessions_database, + sql="SELECT @r.name, @r.count, @r.size, @r.nested.weight", + params={"r": None}, + param_types={"r": record_type}, + expected=[(None, None, None, None)], + order=False, + ) + + # Query with non-null struct, explicit type, NULL values + _check_sql_results( + sessions_database, + sql="SELECT @r.name, @r.count, @r.size, 
@r.nested.weight", + params={"r": (None, None, None, None)}, + param_types={"r": record_type}, + expected=[(None, None, None, None)], + order=False, + ) + + # Query with non-null struct, explicit type, nested NULL values + _check_sql_results( + sessions_database, + sql="SELECT @r.nested.weight", + params={"r": (None, None, None, (None, None))}, + param_types={"r": record_type}, + expected=[(None,)], + order=False, + ) + + # Query with non-null struct, explicit type + _check_sql_results( + sessions_database, + sql="SELECT @r.name, @r.count, @r.size, @r.nested.weight", + params={"r": (name, count, size, (height, weight))}, + param_types={"r": record_type}, + expected=[(name, count, size, weight)], + order=False, + ) + + # Query with empty struct, explicitly empty type + empty_type = param_types.Struct([]) + _check_sql_results( + sessions_database, + sql="SELECT @r IS NULL", + params={"r": ()}, + param_types={"r": empty_type}, + expected=[(False,)], + order=False, + ) + + # Query with null struct, explicitly empty type + _check_sql_results( + sessions_database, + sql="SELECT @r IS NULL", + params={"r": None}, + param_types={"r": empty_type}, + expected=[(True,)], + order=False, + ) + + # Query with equality check for struct value + struct_equality_query = ( + "SELECT " '@struct_param=STRUCT(1,"bob")' + ) + struct_type = param_types.Struct( + [ + param_types.StructField("threadf", param_types.INT64), + param_types.StructField("userf", param_types.STRING), + ] + ) + _check_sql_results( + sessions_database, + sql=struct_equality_query, + params={"struct_param": (1, "bob")}, + param_types={"struct_param": struct_type}, + expected=[(True,)], + order=False, + ) + + # Query with nullness test for struct + _check_sql_results( + sessions_database, + sql="SELECT @struct_param IS NULL", + params={"struct_param": None}, + param_types={"struct_param": struct_type}, + expected=[(True,)], + order=False, + ) + + # Query with null array-of-struct + array_elem_type = param_types.Struct( + [param_types.StructField("threadid", param_types.INT64)] + ) + array_type = param_types.Array(array_elem_type) + _check_sql_results( + sessions_database, + sql="SELECT a.threadid FROM UNNEST(@struct_arr_param) a", + params={"struct_arr_param": None}, + param_types={"struct_arr_param": array_type}, + expected=[], + order=False, + ) + + # Query with non-null array-of-struct + _check_sql_results( + sessions_database, + sql="SELECT a.threadid FROM UNNEST(@struct_arr_param) a", + params={"struct_arr_param": [(123,), (456,)]}, + param_types={"struct_arr_param": array_type}, + expected=[(123,), (456,)], + order=False, + ) + + # Query with null array-of-struct field + struct_type_with_array_field = param_types.Struct( + [ + param_types.StructField("intf", param_types.INT64), + param_types.StructField("arraysf", array_type), + ] + ) + _check_sql_results( + sessions_database, + sql="SELECT a.threadid FROM UNNEST(@struct_param.arraysf) a", + params={"struct_param": (123, None)}, + param_types={"struct_param": struct_type_with_array_field}, + expected=[], + order=False, + ) + + # Query with non-null array-of-struct field + _check_sql_results( + sessions_database, + sql="SELECT a.threadid FROM UNNEST(@struct_param.arraysf) a", + params={"struct_param": (123, ((456,), (789,)))}, + param_types={"struct_param": struct_type_with_array_field}, + expected=[(456,), (789,)], + order=False, + ) + + # Query with anonymous / repeated-name fields + anon_repeated_array_elem_type = param_types.Struct( + [ + param_types.StructField("", 
param_types.INT64), + param_types.StructField("", param_types.STRING), + ] + ) + anon_repeated_array_type = param_types.Array(anon_repeated_array_elem_type) + _check_sql_results( + sessions_database, + sql="SELECT CAST(t as STRUCT).* " + "FROM UNNEST(@struct_param) t", + params={"struct_param": [(123, "abcdef")]}, + param_types={"struct_param": anon_repeated_array_type}, + expected=[(123, "abcdef")], + order=False, + ) + + # Query and return a struct parameter + value_type = param_types.Struct( + [ + param_types.StructField("message", param_types.STRING), + param_types.StructField("repeat", param_types.INT64), + ] + ) + value_query = ( + "SELECT ARRAY(SELECT AS STRUCT message, repeat " + "FROM (SELECT @value.message AS message, " + "@value.repeat AS repeat)) AS value" + ) + _check_sql_results( + sessions_database, + sql=value_query, + params={"value": ("hello", 1)}, + param_types={"value": value_type}, + expected=[([["hello", 1]],)], + order=False, + ) + + +def test_execute_sql_w_proto_message_bindings( + not_postgres, sessions_database, database_dialect +): + singer_info = _sample_data.SINGER_INFO_1 + singer_info_bytes = base64.b64encode(singer_info.SerializeToString()) + + _bind_test_helper( + sessions_database, + database_dialect, + spanner_v1.param_types.ProtoMessage(singer_info), + singer_info, + [singer_info, None], + column_info={"column": singer_pb2.SingerInfo()}, + ) + + # Tests compatibility between proto message and bytes column types + _bind_test_helper( + sessions_database, + database_dialect, + spanner_v1.param_types.ProtoMessage(singer_info), + singer_info_bytes, + [singer_info_bytes, None], + expected_single_value=singer_info, + expected_array_value=[singer_info, None], + column_info={"column": singer_pb2.SingerInfo()}, + ) + + # Tests compatibility between proto message and bytes column types + _bind_test_helper( + sessions_database, + database_dialect, + spanner_v1.param_types.BYTES, + singer_info, + [singer_info, None], + expected_single_value=singer_info_bytes, + expected_array_value=[singer_info_bytes, None], + ) + + +def test_execute_sql_w_proto_enum_bindings( + not_emulator, not_postgres, sessions_database, database_dialect +): + singer_genre = _sample_data.SINGER_GENRE_1 + + _bind_test_helper( + sessions_database, + database_dialect, + spanner_v1.param_types.ProtoEnum(singer_pb2.Genre), + singer_genre, + [singer_genre, None], + ) + + # Tests compatibility between proto enum and int64 column types + _bind_test_helper( + sessions_database, + database_dialect, + spanner_v1.param_types.ProtoEnum(singer_pb2.Genre), + 3, + [3, None], + expected_single_value="ROCK", + expected_array_value=["ROCK", None], + column_info={"column": singer_pb2.Genre}, + ) + + # Tests compatibility between proto enum and int64 column types + _bind_test_helper( + sessions_database, + database_dialect, + spanner_v1.param_types.INT64, + singer_genre, + [singer_genre, None], + ) + + +def test_execute_sql_returning_transfinite_floats(sessions_database, not_postgres): + with sessions_database.snapshot(multi_use=True) as snapshot: + # Query returning -inf, +inf, NaN as column values + rows = list( + snapshot.execute_sql( + "SELECT " + 'CAST("-inf" AS FLOAT64), ' + 'CAST("+inf" AS FLOAT64), ' + 'CAST("NaN" AS FLOAT64)' + ) + ) + assert len(rows) == 1 + assert rows[0][0] == float("-inf") + assert rows[0][1] == float("+inf") + # NaNs cannot be compared by equality. 
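+        # math.isnan() is the reliable way to detect the NaN cell.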
+ assert math.isnan(rows[0][2]) + + # Query returning array of -inf, +inf, NaN as one column + rows = list( + snapshot.execute_sql( + "SELECT" + ' [CAST("-inf" AS FLOAT64),' + ' CAST("+inf" AS FLOAT64),' + ' CAST("NaN" AS FLOAT64)]' + ) + ) + assert len(rows) == 1 + + float_array = rows[0][0] + assert float_array[0] == float("-inf") + assert float_array[1] == float("+inf") + + # NaNs cannot be searched for by equality. + assert math.isnan(float_array[2]) + + +def test_partition_query(sessions_database, not_emulator): + row_count = 40 + sql = f"SELECT * FROM {_sample_data.TABLE}" + committed = _set_up_table(sessions_database, row_count) + + # Partitioned query does not support ORDER BY + all_data_rows = set(_row_data(row_count)) + union = set() + batch_txn = sessions_database.batch_snapshot(read_timestamp=committed) + for batch in batch_txn.generate_query_batches(sql, data_boost_enabled=True): + p_results_iter = batch_txn.process(batch) + # Lists aren't hashable so the results need to be converted + rows = [tuple(result) for result in p_results_iter] + union.update(set(rows)) + + assert union == all_data_rows + batch_txn.close() + + +def test_run_partition_query(sessions_database, not_emulator): + row_count = 40 + sql = f"SELECT * FROM {_sample_data.TABLE}" + committed = _set_up_table(sessions_database, row_count) + + # Partitioned query does not support ORDER BY + all_data_rows = set(_row_data(row_count)) + union = set() + batch_txn = sessions_database.batch_snapshot(read_timestamp=committed) + p_results_iter = batch_txn.run_partitioned_query(sql, data_boost_enabled=True) + # Lists aren't hashable so the results need to be converted + rows = [tuple(result) for result in p_results_iter] + union.update(set(rows)) + + assert union == all_data_rows + batch_txn.close() + + +def test_mutation_groups_insert_or_update_then_query(not_emulator, sessions_database): + sd = _sample_data + num_groups = 3 + num_mutations_per_group = len(sd.BATCH_WRITE_ROW_DATA) // num_groups + + with sessions_database.batch() as batch: + batch.delete(sd.TABLE, sd.ALL) + + with sessions_database.mutation_groups() as groups: + for i in range(num_groups): + group = groups.group() + for j in range(num_mutations_per_group): + group.insert_or_update( + sd.TABLE, + sd.COLUMNS, + [sd.BATCH_WRITE_ROW_DATA[i * num_mutations_per_group + j]], + ) + # Response indexes received + seen = collections.Counter() + for response in groups.batch_write(): + _check_batch_status(response.status.code) + assert response.commit_timestamp is not None + assert len(response.indexes) > 0 + seen.update(response.indexes) + # All indexes must be in the range [0, num_groups-1] and seen exactly once + assert len(seen) == num_groups + assert all((0 <= idx < num_groups and ct == 1) for (idx, ct) in seen.items()) + + # Verify the writes by reading from the database + with sessions_database.snapshot() as snapshot: + rows = list(snapshot.execute_sql(sd.SQL)) + + sd._check_rows_data(rows, sd.BATCH_WRITE_ROW_DATA) + + +def _check_batch_status(status_code, expected=code_pb2.OK): + if status_code != expected: + _status_code_to_grpc_status_code = { + member.value[0]: member for member in grpc.StatusCode + } + grpc_status_code = _status_code_to_grpc_status_code[status_code] + call = _helpers.FauxCall(status_code) + raise exceptions.from_grpc_status( + grpc_status_code, "batch_update failed", errors=[call] + ) + + +def get_param_info(param_names, database_dialect): + keys = [f"p{i + 1}" for i in range(len(param_names))] + if database_dialect ==
DatabaseDialect.POSTGRESQL: + placeholders = [f"${i + 1}" for i in range(len(param_names))] + else: + placeholders = [f"@p{i + 1}" for i in range(len(param_names))] + return keys, placeholders + + +def test_interval(sessions_database, database_dialect, not_emulator): + from google.cloud.spanner_v1 import Interval + + def setup_table(): + if database_dialect == DatabaseDialect.POSTGRESQL: + sessions_database.update_ddl( + [ + """ + CREATE TABLE IntervalTable ( + key text primary key, + create_time timestamptz, + expiry_time timestamptz, + expiry_within_month bool GENERATED ALWAYS AS (expiry_time - create_time < INTERVAL '30' DAY) STORED, + interval_array_len bigint GENERATED ALWAYS AS (ARRAY_LENGTH(ARRAY[INTERVAL '1-2 3 4:5:6'], 1)) STORED + ) + """ + ] + ).result() + else: + sessions_database.update_ddl( + [ + """ + CREATE TABLE IntervalTable ( + key STRING(MAX), + create_time TIMESTAMP, + expiry_time TIMESTAMP, + expiry_within_month bool AS (expiry_time - create_time < INTERVAL 30 DAY), + interval_array_len INT64 AS (ARRAY_LENGTH(ARRAY[INTERVAL '1-2 3 4:5:6' YEAR TO SECOND])) + ) PRIMARY KEY (key) + """ + ] + ).result() + + def insert_test1(transaction): + keys, placeholders = get_param_info( + ["key", "create_time", "expiry_time"], database_dialect + ) + transaction.execute_update( + f""" + INSERT INTO IntervalTable (key, create_time, expiry_time) + VALUES ({placeholders[0]}, {placeholders[1]}, {placeholders[2]}) + """, + params={ + keys[0]: "test1", + keys[1]: datetime.datetime(2004, 11, 30, 4, 53, 54, tzinfo=UTC), + keys[2]: datetime.datetime(2004, 12, 15, 4, 53, 54, tzinfo=UTC), + }, + param_types={ + keys[0]: spanner_v1.param_types.STRING, + keys[1]: spanner_v1.param_types.TIMESTAMP, + keys[2]: spanner_v1.param_types.TIMESTAMP, + }, + ) + + def insert_test2(transaction): + keys, placeholders = get_param_info( + ["key", "create_time", "expiry_time"], database_dialect + ) + transaction.execute_update( + f""" + INSERT INTO IntervalTable (key, create_time, expiry_time) + VALUES ({placeholders[0]}, {placeholders[1]}, {placeholders[2]}) + """, + params={ + keys[0]: "test2", + keys[1]: datetime.datetime(2004, 8, 30, 4, 53, 54, tzinfo=UTC), + keys[2]: datetime.datetime(2004, 12, 15, 4, 53, 54, tzinfo=UTC), + }, + param_types={ + keys[0]: spanner_v1.param_types.STRING, + keys[1]: spanner_v1.param_types.TIMESTAMP, + keys[2]: spanner_v1.param_types.TIMESTAMP, + }, + ) + + def test_computed_columns(transaction): + keys, placeholders = get_param_info(["key"], database_dialect) + results = list( + transaction.execute_sql( + f""" + SELECT expiry_within_month, interval_array_len + FROM IntervalTable + WHERE key = {placeholders[0]}""", + params={keys[0]: "test1"}, + param_types={keys[0]: spanner_v1.param_types.STRING}, + ) + ) + assert len(results) == 1 + row = results[0] + assert row[0] is True # expiry_within_month + assert row[1] == 1 # interval_array_len + + def test_interval_arithmetic(transaction): + results = list( + transaction.execute_sql( + "SELECT INTERVAL '1' DAY + INTERVAL '1' MONTH AS Col1" + ) + ) + assert len(results) == 1 + row = results[0] + interval = row[0] + assert interval.months == 1 + assert interval.days == 1 + assert interval.nanos == 0 + + def test_interval_timestamp_comparison(transaction): + timestamp = "2004-11-30T10:23:54+0530" + keys, placeholders = get_param_info(["interval"], database_dialect) + if database_dialect == DatabaseDialect.POSTGRESQL: + query = f"SELECT COUNT(*) FROM IntervalTable WHERE create_time < TIMESTAMPTZ '%s' - {placeholders[0]}" + else: + query = 
f"SELECT COUNT(*) FROM IntervalTable WHERE create_time < TIMESTAMP('%s') - {placeholders[0]}" + + results = list( + transaction.execute_sql( + query % timestamp, + params={keys[0]: Interval(days=30)}, + param_types={keys[0]: spanner_v1.param_types.INTERVAL}, + ) + ) + assert len(results) == 1 + assert results[0][0] == 1 + + def test_interval_array_param(transaction): + intervals = [ + Interval(months=14, days=3, nanos=14706000000000), + Interval(), + Interval(months=-14, days=-3, nanos=-14706000000000), + None, + ] + keys, placeholders = get_param_info(["intervals"], database_dialect) + array_type = spanner_v1.Type( + code=spanner_v1.TypeCode.ARRAY, + array_element_type=spanner_v1.param_types.INTERVAL, + ) + results = list( + transaction.execute_sql( + f"SELECT {placeholders[0]}", + params={keys[0]: intervals}, + param_types={keys[0]: array_type}, + ) + ) + assert len(results) == 1 + row = results[0] + intervals = row[0] + assert len(intervals) == 4 + + assert intervals[0].months == 14 + assert intervals[0].days == 3 + assert intervals[0].nanos == 14706000000000 + + assert intervals[1].months == 0 + assert intervals[1].days == 0 + assert intervals[1].nanos == 0 + + assert intervals[2].months == -14 + assert intervals[2].days == -3 + assert intervals[2].nanos == -14706000000000 + + assert intervals[3] is None + + def test_interval_array_cast(transaction): + results = list( + transaction.execute_sql( + """ + SELECT ARRAY[ + CAST('P1Y2M3DT4H5M6.789123S' AS INTERVAL), + null, + CAST('P-1Y-2M-3DT-4H-5M-6.789123S' AS INTERVAL) + ] AS Col1 + """ + ) + ) + assert len(results) == 1 + row = results[0] + intervals = row[0] + assert len(intervals) == 3 + + assert intervals[0].months == 14 # 1 year + 2 months + assert intervals[0].days == 3 + assert intervals[0].nanos == 14706789123000 # 4h5m6.789123s in nanos + + assert intervals[1] is None + + assert intervals[2].months == -14 + assert intervals[2].days == -3 + assert intervals[2].nanos == -14706789123000 + + setup_table() + sessions_database.run_in_transaction(insert_test1) + sessions_database.run_in_transaction(test_computed_columns) + sessions_database.run_in_transaction(test_interval_arithmetic) + sessions_database.run_in_transaction(insert_test2) + sessions_database.run_in_transaction(test_interval_timestamp_comparison) + sessions_database.run_in_transaction(test_interval_array_param) + sessions_database.run_in_transaction(test_interval_array_cast) + + +def test_session_id_and_multiplexed_flag_behavior(sessions_database, ot_exporter): + sd = _sample_data + + with sessions_database.batch() as batch: + batch.delete(sd.TABLE, sd.ALL) + batch.insert(sd.TABLE, sd.COLUMNS, sd.ROW_DATA) + + multiplexed_enabled = is_multiplexed_enabled(TransactionType.READ_ONLY) + + snapshot1_session_id = None + snapshot2_session_id = None + snapshot1_is_multiplexed = None + snapshot2_is_multiplexed = None + + snapshot1 = sessions_database.snapshot() + snapshot2 = sessions_database.snapshot() + + try: + with snapshot1 as snap1, snapshot2 as snap2: + rows1 = list(snap1.read(sd.TABLE, sd.COLUMNS, sd.ALL)) + rows2 = list(snap2.read(sd.TABLE, sd.COLUMNS, sd.ALL)) + + snapshot1_session_id = snap1._session.name + snapshot1_is_multiplexed = snap1._session.is_multiplexed + + snapshot2_session_id = snap2._session.name + snapshot2_is_multiplexed = snap2._session.is_multiplexed + except Exception: + with sessions_database.snapshot() as snap1: + rows1 = list(snap1.read(sd.TABLE, sd.COLUMNS, sd.ALL)) + snapshot1_session_id = snap1._session.name + snapshot1_is_multiplexed = 
snap1._session.is_multiplexed + + with sessions_database.snapshot() as snap2: + rows2 = list(snap2.read(sd.TABLE, sd.COLUMNS, sd.ALL)) + snapshot2_session_id = snap2._session.name + snapshot2_is_multiplexed = snap2._session.is_multiplexed + + sd._check_rows_data(rows1) + sd._check_rows_data(rows2) + assert rows1 == rows2 + + assert snapshot1_session_id is not None + assert snapshot2_session_id is not None + assert snapshot1_is_multiplexed is not None + assert snapshot2_is_multiplexed is not None + + if multiplexed_enabled: + assert snapshot1_session_id == snapshot2_session_id + assert snapshot1_is_multiplexed is True + assert snapshot2_is_multiplexed is True + else: + assert snapshot1_is_multiplexed is False + assert snapshot2_is_multiplexed is False + + if ot_exporter is not None: + span_list = ot_exporter.get_finished_spans() + + session_spans = [] + read_spans = [] + + for span in span_list: + if ( + "CreateSession" in span.name + or "CreateMultiplexedSession" in span.name + or "GetSession" in span.name + ): + session_spans.append(span) + elif "Snapshot.read" in span.name: + read_spans.append(span) + + assert len(read_spans) == 2 + + if multiplexed_enabled: + multiplexed_session_spans = [ + s for s in session_spans if "CreateMultiplexedSession" in s.name + ] + + read_only_multiplexed_sessions = [ + s + for s in multiplexed_session_spans + if s.start_time > span_list[1].end_time + ] + # Allow for session reuse - if no new multiplexed sessions were created, + # it means an existing one was reused (which is valid behavior) + if len(read_only_multiplexed_sessions) == 0: + # Verify that multiplexed sessions are actually being used by checking + # that the snapshots themselves are multiplexed + assert snapshot1_is_multiplexed is True + assert snapshot2_is_multiplexed is True + assert snapshot1_session_id == snapshot2_session_id + else: + # New multiplexed session was created + assert len(read_only_multiplexed_sessions) >= 1 + + # Note: We don't need to assert specific counts for regular/get sessions + # as the key validation is that multiplexed sessions are being used properly + else: + read_only_session_spans = [ + s for s in session_spans if s.start_time > span_list[1].end_time + ] + assert len(read_only_session_spans) >= 1 + + multiplexed_session_spans = [ + s for s in session_spans if "CreateMultiplexedSession" in s.name + ] + assert len(multiplexed_session_spans) == 0 diff --git a/tests/system/test_streaming_chunking.py b/tests/system/test_streaming_chunking.py new file mode 100644 index 0000000000..5dded09d64 --- /dev/null +++ b/tests/system/test_streaming_chunking.py @@ -0,0 +1,75 @@ +# Copyright 2021 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
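+ +"""System tests for streamed result chunking: column values ranging from 4 KB up to 4 MB must survive being split across multiple streamed result chunks and round-trip intact."""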
+ +import pytest + +from tests.system.utils import streaming_utils + +_RUN_POPULATE_STREAMING = """\ +Run 'tests/system/utils/populate_streaming.py' to enable these tests.""" + + +@pytest.fixture(scope="session") +def streaming_instance(spanner_client): + instance = spanner_client.instance(streaming_utils.INSTANCE_NAME) + if not instance.exists(): + pytest.skip(_RUN_POPULATE_STREAMING) + + yield instance + + +@pytest.fixture(scope="session") +def streaming_database(streaming_instance): + database = streaming_instance.database(streaming_utils.DATABASE_NAME) + if not database.exists(): + pytest.skip(_RUN_POPULATE_STREAMING) + + yield database + + +def _verify_one_column(db, table_desc): + sql = f"SELECT chunk_me FROM {table_desc.table}" + with db.snapshot() as snapshot: + rows = list(snapshot.execute_sql(sql)) + assert len(rows) == table_desc.row_count + expected = table_desc.value() + for row in rows: + assert row[0] == expected + + +def _verify_two_columns(db, table_desc): + sql = f"SELECT chunk_me, chunk_me_2 FROM {table_desc.table}" + with db.snapshot() as snapshot: + rows = list(snapshot.execute_sql(sql)) + assert len(rows) == table_desc.row_count + expected = table_desc.value() + for row in rows: + assert row[0] == expected + assert row[1] == expected + + +def test_four_kay(streaming_database): + _verify_one_column(streaming_database, streaming_utils.FOUR_KAY) + + +def test_forty_kay(streaming_database): + _verify_one_column(streaming_database, streaming_utils.FORTY_KAY) + + +def test_four_hundred_kay(streaming_database): + _verify_one_column(streaming_database, streaming_utils.FOUR_HUNDRED_KAY) + + +def test_four_meg(streaming_database): + _verify_two_columns(streaming_database, streaming_utils.FOUR_MEG) diff --git a/tests/system/test_system.py b/tests/system/test_system.py deleted file mode 100644 index 926cbb4b82..0000000000 --- a/tests/system/test_system.py +++ /dev/null @@ -1,2273 +0,0 @@ -# Copyright 2016 Google LLC All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import collections -import datetime -import math -import operator -import os -import struct -import threading -import time -import unittest -import uuid - -import grpc -from google.rpc import code_pb2 - -from google.api_core import exceptions -from google.api_core.datetime_helpers import DatetimeWithNanoseconds - -from google.cloud.spanner_v1 import param_types -from google.cloud.spanner_v1.proto.type_pb2 import ARRAY -from google.cloud.spanner_v1.proto.type_pb2 import BOOL -from google.cloud.spanner_v1.proto.type_pb2 import BYTES -from google.cloud.spanner_v1.proto.type_pb2 import DATE -from google.cloud.spanner_v1.proto.type_pb2 import FLOAT64 -from google.cloud.spanner_v1.proto.type_pb2 import INT64 -from google.cloud.spanner_v1.proto.type_pb2 import STRING -from google.cloud.spanner_v1.proto.type_pb2 import TIMESTAMP -from google.cloud.spanner_v1.proto.type_pb2 import Type - -from google.cloud._helpers import UTC -from google.cloud.spanner import Client -from google.cloud.spanner import KeyRange -from google.cloud.spanner import KeySet -from google.cloud.spanner import BurstyPool -from google.cloud.spanner import COMMIT_TIMESTAMP - -from test_utils.retry import RetryErrors -from test_utils.retry import RetryInstanceState -from test_utils.retry import RetryResult -from test_utils.system import unique_resource_id -from tests._fixtures import DDL_STATEMENTS - - -CREATE_INSTANCE = os.getenv("GOOGLE_CLOUD_TESTS_CREATE_SPANNER_INSTANCE") is not None -USE_EMULATOR = os.getenv("SPANNER_EMULATOR_HOST") is not None -USE_RESOURCE_ROUTING = ( - os.getenv("GOOGLE_CLOUD_SPANNER_ENABLE_RESOURCE_BASED_ROUTING") == "true" -) - -if CREATE_INSTANCE: - INSTANCE_ID = "google-cloud" + unique_resource_id("-") -else: - INSTANCE_ID = os.environ.get( - "GOOGLE_CLOUD_TESTS_SPANNER_INSTANCE", "google-cloud-python-systest" - ) -EXISTING_INSTANCES = [] -COUNTERS_TABLE = "counters" -COUNTERS_COLUMNS = ("name", "value") - -_STATUS_CODE_TO_GRPC_STATUS_CODE = { - member.value[0]: member for member in grpc.StatusCode -} - - -class Config(object): - """Run-time configuration to be modified at set-up. - - This is a mutable stand-in to allow test set-up to modify - global state. - """ - - CLIENT = None - INSTANCE_CONFIG = None - INSTANCE = None - - -def _has_all_ddl(database): - return len(database.ddl_statements) == len(DDL_STATEMENTS) - - -def _list_instances(): - return list(Config.CLIENT.list_instances()) - - -def setUpModule(): - Config.CLIENT = Client() - retry = RetryErrors(exceptions.ServiceUnavailable) - - configs = list(retry(Config.CLIENT.list_instance_configs)()) - - instances = retry(_list_instances)() - EXISTING_INSTANCES[:] = instances - - if CREATE_INSTANCE: - if not USE_EMULATOR: - # Defend against back-end returning configs for regions we aren't - # actually allowed to use. 
- configs = [config for config in configs if "-us-" in config.name] - - if not configs: - raise ValueError("List instance configs failed in module set up.") - - Config.INSTANCE_CONFIG = configs[0] - config_name = configs[0].name - - Config.INSTANCE = Config.CLIENT.instance(INSTANCE_ID, config_name) - created_op = Config.INSTANCE.create() - created_op.result(30) # block until completion - - else: - Config.INSTANCE = Config.CLIENT.instance(INSTANCE_ID) - Config.INSTANCE.reload() - - -def tearDownModule(): - if CREATE_INSTANCE: - Config.INSTANCE.delete() - - -class TestInstanceAdminAPI(unittest.TestCase): - def setUp(self): - self.instances_to_delete = [] - - def tearDown(self): - for instance in self.instances_to_delete: - instance.delete() - - def test_list_instances(self): - instances = list(Config.CLIENT.list_instances()) - # We have added one new instance in `setUpModule`. - if CREATE_INSTANCE: - self.assertEqual(len(instances), len(EXISTING_INSTANCES) + 1) - for instance in instances: - instance_existence = ( - instance in EXISTING_INSTANCES or instance == Config.INSTANCE - ) - self.assertTrue(instance_existence) - - def test_reload_instance(self): - # Use same arguments as Config.INSTANCE (created in `setUpModule`) - # so we can use reload() on a fresh instance. - instance = Config.CLIENT.instance(INSTANCE_ID) - # Make sure metadata unset before reloading. - instance.display_name = None - - def _expected_display_name(instance): - return instance.display_name == Config.INSTANCE.display_name - - retry = RetryInstanceState(_expected_display_name) - - retry(instance.reload)() - - self.assertEqual(instance.display_name, Config.INSTANCE.display_name) - - @unittest.skipUnless(CREATE_INSTANCE, "Skipping instance creation") - def test_create_instance(self): - ALT_INSTANCE_ID = "new" + unique_resource_id("-") - instance = Config.CLIENT.instance(ALT_INSTANCE_ID, Config.INSTANCE_CONFIG.name) - operation = instance.create() - # Make sure this instance gets deleted after the test case. - self.instances_to_delete.append(instance) - - # We want to make sure the operation completes. - operation.result(30) # raises on failure / timeout. - - # Create a new instance instance and make sure it is the same. - instance_alt = Config.CLIENT.instance( - ALT_INSTANCE_ID, Config.INSTANCE_CONFIG.name - ) - instance_alt.reload() - - self.assertEqual(instance, instance_alt) - self.assertEqual(instance.display_name, instance_alt.display_name) - - @unittest.skipIf(USE_EMULATOR, "Skipping updating instance") - def test_update_instance(self): - OLD_DISPLAY_NAME = Config.INSTANCE.display_name - NEW_DISPLAY_NAME = "Foo Bar Baz" - Config.INSTANCE.display_name = NEW_DISPLAY_NAME - operation = Config.INSTANCE.update() - - # We want to make sure the operation completes. - operation.result(30) # raises on failure / timeout. - - # Create a new instance instance and reload it. - instance_alt = Config.CLIENT.instance(INSTANCE_ID, None) - self.assertNotEqual(instance_alt.display_name, NEW_DISPLAY_NAME) - instance_alt.reload() - self.assertEqual(instance_alt.display_name, NEW_DISPLAY_NAME) - - # Make sure to put the instance back the way it was for the - # other test cases. 
- Config.INSTANCE.display_name = OLD_DISPLAY_NAME - Config.INSTANCE.update() - - -class _TestData(object): - TABLE = "contacts" - COLUMNS = ("contact_id", "first_name", "last_name", "email") - ROW_DATA = ( - (1, u"Phred", u"Phlyntstone", u"phred@example.com"), - (2, u"Bharney", u"Rhubble", u"bharney@example.com"), - (3, u"Wylma", u"Phlyntstone", u"wylma@example.com"), - ) - ALL = KeySet(all_=True) - SQL = "SELECT * FROM contacts ORDER BY contact_id" - - _recurse_into_lists = True - - def _assert_timestamp(self, value, nano_value): - self.assertIsInstance(value, datetime.datetime) - self.assertIsNone(value.tzinfo) - self.assertIs(nano_value.tzinfo, UTC) - - self.assertEqual(value.year, nano_value.year) - self.assertEqual(value.month, nano_value.month) - self.assertEqual(value.day, nano_value.day) - self.assertEqual(value.hour, nano_value.hour) - self.assertEqual(value.minute, nano_value.minute) - self.assertEqual(value.second, nano_value.second) - self.assertEqual(value.microsecond, nano_value.microsecond) - if isinstance(value, DatetimeWithNanoseconds): - self.assertEqual(value.nanosecond, nano_value.nanosecond) - else: - self.assertEqual(value.microsecond * 1000, nano_value.nanosecond) - - def _check_rows_data(self, rows_data, expected=None): - if expected is None: - expected = self.ROW_DATA - - self.assertEqual(len(rows_data), len(expected)) - for row, expected in zip(rows_data, expected): - self._check_row_data(row, expected) - - def _check_row_data(self, row_data, expected): - self.assertEqual(len(row_data), len(expected)) - for found_cell, expected_cell in zip(row_data, expected): - self._check_cell_data(found_cell, expected_cell) - - def _check_cell_data(self, found_cell, expected_cell): - if isinstance(found_cell, DatetimeWithNanoseconds): - self._assert_timestamp(expected_cell, found_cell) - elif isinstance(found_cell, float) and math.isnan(found_cell): - self.assertTrue(math.isnan(expected_cell)) - elif isinstance(found_cell, list) and self._recurse_into_lists: - self.assertEqual(len(found_cell), len(expected_cell)) - for found_item, expected_item in zip(found_cell, expected_cell): - self._check_cell_data(found_item, expected_item) - else: - self.assertEqual(found_cell, expected_cell) - - -class TestDatabaseAPI(unittest.TestCase, _TestData): - DATABASE_NAME = "test_database" + unique_resource_id("_") - - @classmethod - def setUpClass(cls): - pool = BurstyPool(labels={"testcase": "database_api"}) - cls._db = Config.INSTANCE.database( - cls.DATABASE_NAME, ddl_statements=DDL_STATEMENTS, pool=pool - ) - operation = cls._db.create() - operation.result(30) # raises on failure / timeout. - - @classmethod - def tearDownClass(cls): - cls._db.drop() - - def setUp(self): - self.to_delete = [] - - def tearDown(self): - for doomed in self.to_delete: - doomed.drop() - - @unittest.skipUnless(USE_RESOURCE_ROUTING, "requires enabling resource routing") - def test_spanner_api_use_user_specified_endpoint(self): - # Clear cache. - Client._endpoint_cache = {} - api = Config.CLIENT.instance_admin_api - resp = api.get_instance( - Config.INSTANCE.name, field_mask={"paths": ["endpoint_uris"]} - ) - if not resp or not resp.endpoint_uris: - return # no resolved endpoint. - resolved_endpoint = resp.endpoint_uris[0] - - client = Client(client_options={"api_endpoint": resolved_endpoint}) - - instance = client.instance(Config.INSTANCE.instance_id) - temp_db_id = "temp_db" + unique_resource_id("_") - temp_db = instance.database(temp_db_id) - temp_db.spanner_api - - # No endpoint cache - Default endpoint used. 
- self.assertEqual(client._endpoint_cache, {}) - - @unittest.skipUnless(USE_RESOURCE_ROUTING, "requires enabling resource routing") - def test_spanner_api_use_resolved_endpoint(self): - # Clear cache. - Client._endpoint_cache = {} - api = Config.CLIENT.instance_admin_api - resp = api.get_instance( - Config.INSTANCE.name, field_mask={"paths": ["endpoint_uris"]} - ) - if not resp or not resp.endpoint_uris: - return # no resolved endpoint. - resolved_endpoint = resp.endpoint_uris[0] - - client = Client( - client_options=Config.CLIENT._client_options - ) # Use same endpoint as main client. - - instance = client.instance(Config.INSTANCE.instance_id) - temp_db_id = "temp_db" + unique_resource_id("_") - temp_db = instance.database(temp_db_id) - temp_db.spanner_api - - # Endpoint is cached - resolved endpoint used. - self.assertIn(Config.INSTANCE.name, client._endpoint_cache) - self.assertEqual( - client._endpoint_cache[Config.INSTANCE.name], resolved_endpoint - ) - - # Endpoint is cached at a class level. - self.assertIn(Config.INSTANCE.name, Config.CLIENT._endpoint_cache) - self.assertEqual( - Config.CLIENT._endpoint_cache[Config.INSTANCE.name], resolved_endpoint - ) - - def test_list_databases(self): - # Since `Config.INSTANCE` is newly created in `setUpModule`, the - # database created in `setUpClass` here will be the only one. - database_names = [ - database.name for database in Config.INSTANCE.list_databases() - ] - self.assertTrue(self._db.name in database_names) - - def test_create_database(self): - pool = BurstyPool(labels={"testcase": "create_database"}) - temp_db_id = "temp_db" + unique_resource_id("_") - temp_db = Config.INSTANCE.database(temp_db_id, pool=pool) - operation = temp_db.create() - self.to_delete.append(temp_db) - - # We want to make sure the operation completes. - operation.result(30) # raises on failure / timeout. - - database_ids = [ - database.database_id for database in Config.INSTANCE.list_databases() - ] - self.assertIn(temp_db_id, database_ids) - - def test_table_not_found(self): - temp_db_id = "temp_db" + unique_resource_id("_") - - correct_table = "MyTable" - incorrect_table = "NotMyTable" - self.assertNotEqual(correct_table, incorrect_table) - - create_table = ( - "CREATE TABLE {} (\n" - " Id STRING(36) NOT NULL,\n" - " Field1 STRING(36) NOT NULL\n" - ") PRIMARY KEY (Id)" - ).format(correct_table) - index = "CREATE INDEX IDX ON {} (Field1)".format(incorrect_table) - - temp_db = Config.INSTANCE.database( - temp_db_id, ddl_statements=[create_table, index] - ) - self.to_delete.append(temp_db) - with self.assertRaises(exceptions.NotFound): - temp_db.create() - - @unittest.skip( - ( - "update_dataset_ddl() has a flaky timeout" - "https://github.com/GoogleCloudPlatform/google-cloud-python/issues/" - "5629" - ) - ) - def test_update_database_ddl_with_operation_id(self): - pool = BurstyPool(labels={"testcase": "update_database_ddl"}) - temp_db_id = "temp_db" + unique_resource_id("_") - temp_db = Config.INSTANCE.database(temp_db_id, pool=pool) - create_op = temp_db.create() - self.to_delete.append(temp_db) - - # We want to make sure the operation completes. - create_op.result(240) # raises on failure / timeout. - # random but shortish always start with letter - operation_id = "a" + str(uuid.uuid4())[:8] - operation = temp_db.update_ddl(DDL_STATEMENTS, operation_id=operation_id) - - self.assertEqual(operation_id, operation.operation.name.split("/")[-1]) - - # We want to make sure the operation completes. - operation.result(240) # raises on failure / timeout. 
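# Aside: a condensed sketch of the update_ddl() long-running-operation
# pattern exercised here; "my-instance" / "my-database" and the DDL text
# are placeholders, but update_ddl(ddl_statements, operation_id=...) is the
# same call used by this test.
from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("my-database")

# update_ddl() returns a google.api_core operation; passing operation_id
# lets the caller name (and later re-attach to) the server-side operation.
op = database.update_ddl(
    ["CREATE TABLE pets (id INT64 NOT NULL, name STRING(64)) PRIMARY KEY (id)"],
    operation_id="a12345678",
)
op.result(240)  # block until the schema change applies; raises on failure / timeout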
-
-        temp_db.reload()
-
-        self.assertEqual(len(temp_db.ddl_statements), len(DDL_STATEMENTS))
-
-    def test_db_batch_insert_then_db_snapshot_read(self):
-        retry = RetryInstanceState(_has_all_ddl)
-        retry(self._db.reload)()
-
-        with self._db.batch() as batch:
-            batch.delete(self.TABLE, self.ALL)
-            batch.insert(self.TABLE, self.COLUMNS, self.ROW_DATA)
-
-        with self._db.snapshot(read_timestamp=batch.committed) as snapshot:
-            from_snap = list(snapshot.read(self.TABLE, self.COLUMNS, self.ALL))
-
-        self._check_rows_data(from_snap)
-
-    def test_db_run_in_transaction_then_snapshot_execute_sql(self):
-        retry = RetryInstanceState(_has_all_ddl)
-        retry(self._db.reload)()
-
-        with self._db.batch() as batch:
-            batch.delete(self.TABLE, self.ALL)
-
-        def _unit_of_work(transaction, test):
-            rows = list(transaction.read(test.TABLE, test.COLUMNS, self.ALL))
-            test.assertEqual(rows, [])
-
-            transaction.insert_or_update(test.TABLE, test.COLUMNS, test.ROW_DATA)
-
-        self._db.run_in_transaction(_unit_of_work, test=self)
-
-        with self._db.snapshot() as after:
-            rows = list(after.execute_sql(self.SQL))
-        self._check_rows_data(rows)
-
-    def test_db_run_in_transaction_twice(self):
-        retry = RetryInstanceState(_has_all_ddl)
-        retry(self._db.reload)()
-
-        with self._db.batch() as batch:
-            batch.delete(self.TABLE, self.ALL)
-
-        def _unit_of_work(transaction, test):
-            transaction.insert_or_update(test.TABLE, test.COLUMNS, test.ROW_DATA)
-
-        self._db.run_in_transaction(_unit_of_work, test=self)
-        self._db.run_in_transaction(_unit_of_work, test=self)
-
-        with self._db.snapshot() as after:
-            rows = list(after.execute_sql(self.SQL))
-        self._check_rows_data(rows)
-
-    def test_db_run_in_transaction_twice_4181(self):
-        retry = RetryInstanceState(_has_all_ddl)
-        retry(self._db.reload)()
-
-        with self._db.batch() as batch:
-            batch.delete(COUNTERS_TABLE, self.ALL)
-
-        def _unit_of_work(transaction, name):
-            transaction.insert(COUNTERS_TABLE, COUNTERS_COLUMNS, [[name, 0]])
-
-        self._db.run_in_transaction(_unit_of_work, name="id_1")
-
-        with self.assertRaises(exceptions.AlreadyExists):
-            self._db.run_in_transaction(_unit_of_work, name="id_1")
-
-        self._db.run_in_transaction(_unit_of_work, name="id_2")
-
-        with self._db.snapshot() as after:
-            rows = list(after.read(COUNTERS_TABLE, COUNTERS_COLUMNS, self.ALL))
-        self.assertEqual(len(rows), 2)
-
-
-SOME_DATE = datetime.date(2011, 1, 17)
-SOME_TIME = datetime.datetime(1989, 1, 17, 17, 59, 12, 345612)
-NANO_TIME = DatetimeWithNanoseconds(1995, 8, 31, nanosecond=987654321)
-POS_INF = float("+inf")
-NEG_INF = float("-inf")
-OTHER_NAN, = struct.unpack("<d", b"\x01\x00\x01\x00\x00\x00\xf8\x7f")
-
-    def _bind_test_helper(
-        self, type_name, single_value, array_value, expected_array_value=None
-    ):
-        self._db.reload()
-
-        # Bind a non-null <type_name>
-        self._check_sql_results(
-            self._db,
-            sql="SELECT @v",
-            params={"v": single_value},
-            param_types={"v": Type(code=type_name)},
-            expected=[(single_value,)],
-            order=False,
-        )
-
-        # Bind a null <type_name>
-        self._check_sql_results(
-            self._db,
-            sql="SELECT @v",
-            params={"v": None},
-            param_types={"v": Type(code=type_name)},
-            expected=[(None,)],
-            order=False,
-        )
-
-        # Bind an array of <type_name>
-        array_type = Type(code=ARRAY, array_element_type=Type(code=type_name))
-
-        if expected_array_value is None:
-            expected_array_value = array_value
-
-        self._check_sql_results(
-            self._db,
-            sql="SELECT @v",
-            params={"v": array_value},
-            param_types={"v": array_type},
-            expected=[(expected_array_value,)],
-            order=False,
-        )
-
-        # Bind an empty array of <type_name>
-        self._check_sql_results(
-            self._db,
-            sql="SELECT @v",
-            params={"v": []},
-            param_types={"v": array_type},
-            expected=[([],)],
-            order=False,
-        )
-
-        # Bind a null array of <type_name>
-        self._check_sql_results(
-            self._db,
-
sql="SELECT @v", - params={"v": None}, - param_types={"v": array_type}, - expected=[(None,)], - order=False, - ) - - def test_execute_sql_w_string_bindings(self): - self._bind_test_helper(STRING, "Phred", ["Phred", "Bharney"]) - - def test_execute_sql_w_bool_bindings(self): - self._bind_test_helper(BOOL, True, [True, False, True]) - - def test_execute_sql_w_int64_bindings(self): - self._bind_test_helper(INT64, 42, [123, 456, 789]) - - def test_execute_sql_w_float64_bindings(self): - self._bind_test_helper(FLOAT64, 42.3, [12.3, 456.0, 7.89]) - - def test_execute_sql_w_float_bindings_transfinite(self): - - # Find -inf - self._check_sql_results( - self._db, - sql="SELECT @neg_inf", - params={"neg_inf": NEG_INF}, - param_types={"neg_inf": Type(code=FLOAT64)}, - expected=[(NEG_INF,)], - order=False, - ) - - # Find +inf - self._check_sql_results( - self._db, - sql="SELECT @pos_inf", - params={"pos_inf": POS_INF}, - param_types={"pos_inf": Type(code=FLOAT64)}, - expected=[(POS_INF,)], - order=False, - ) - - def test_execute_sql_w_bytes_bindings(self): - self._bind_test_helper(BYTES, b"DEADBEEF", [b"FACEDACE", b"DEADBEEF"]) - - def test_execute_sql_w_timestamp_bindings(self): - import pytz - from google.api_core.datetime_helpers import DatetimeWithNanoseconds - - timestamp_1 = DatetimeWithNanoseconds( - 1989, 1, 17, 17, 59, 12, nanosecond=345612789 - ) - - timestamp_2 = DatetimeWithNanoseconds( - 1989, 1, 17, 17, 59, 13, nanosecond=456127893 - ) - - timestamps = [timestamp_1, timestamp_2] - - # In round-trip, timestamps acquire a timezone value. - expected_timestamps = [ - timestamp.replace(tzinfo=pytz.UTC) for timestamp in timestamps - ] - - self._recurse_into_lists = False - self._bind_test_helper(TIMESTAMP, timestamp_1, timestamps, expected_timestamps) - - def test_execute_sql_w_date_bindings(self): - import datetime - - dates = [SOME_DATE, SOME_DATE + datetime.timedelta(days=1)] - self._bind_test_helper(DATE, SOME_DATE, dates) - - def test_execute_sql_w_query_param_struct(self): - NAME = "Phred" - COUNT = 123 - SIZE = 23.456 - HEIGHT = 188.0 - WEIGHT = 97.6 - - record_type = param_types.Struct( - [ - param_types.StructField("name", param_types.STRING), - param_types.StructField("count", param_types.INT64), - param_types.StructField("size", param_types.FLOAT64), - param_types.StructField( - "nested", - param_types.Struct( - [ - param_types.StructField("height", param_types.FLOAT64), - param_types.StructField("weight", param_types.FLOAT64), - ] - ), - ), - ] - ) - - # Query with null struct, explicit type - self._check_sql_results( - self._db, - sql="SELECT @r.name, @r.count, @r.size, @r.nested.weight", - params={"r": None}, - param_types={"r": record_type}, - expected=[(None, None, None, None)], - order=False, - ) - - # Query with non-null struct, explicit type, NULL values - self._check_sql_results( - self._db, - sql="SELECT @r.name, @r.count, @r.size, @r.nested.weight", - params={"r": (None, None, None, None)}, - param_types={"r": record_type}, - expected=[(None, None, None, None)], - order=False, - ) - - # Query with non-null struct, explicit type, nested NULL values - self._check_sql_results( - self._db, - sql="SELECT @r.nested.weight", - params={"r": (None, None, None, (None, None))}, - param_types={"r": record_type}, - expected=[(None,)], - order=False, - ) - - # Query with non-null struct, explicit type - self._check_sql_results( - self._db, - sql="SELECT @r.name, @r.count, @r.size, @r.nested.weight", - params={"r": (NAME, COUNT, SIZE, (HEIGHT, WEIGHT))}, - param_types={"r": 
record_type},
-            expected=[(NAME, COUNT, SIZE, WEIGHT)],
-            order=False,
-        )
-
-        # Query with empty struct, explicitly empty type
-        empty_type = param_types.Struct([])
-        self._check_sql_results(
-            self._db,
-            sql="SELECT @r IS NULL",
-            params={"r": ()},
-            param_types={"r": empty_type},
-            expected=[(False,)],
-            order=False,
-        )
-
-        # Query with null struct, explicitly empty type
-        self._check_sql_results(
-            self._db,
-            sql="SELECT @r IS NULL",
-            params={"r": None},
-            param_types={"r": empty_type},
-            expected=[(True,)],
-            order=False,
-        )
-
-        # Query with equality check for struct value
-        struct_equality_query = (
-            "SELECT " '@struct_param=STRUCT<threadf INT64, userf STRING>(1,"bob")'
-        )
-        struct_type = param_types.Struct(
-            [
-                param_types.StructField("threadf", param_types.INT64),
-                param_types.StructField("userf", param_types.STRING),
-            ]
-        )
-        self._check_sql_results(
-            self._db,
-            sql=struct_equality_query,
-            params={"struct_param": (1, "bob")},
-            param_types={"struct_param": struct_type},
-            expected=[(True,)],
-            order=False,
-        )
-
-        # Query with nullness test for struct
-        self._check_sql_results(
-            self._db,
-            sql="SELECT @struct_param IS NULL",
-            params={"struct_param": None},
-            param_types={"struct_param": struct_type},
-            expected=[(True,)],
-            order=False,
-        )
-
-        # Query with null array-of-struct
-        array_elem_type = param_types.Struct(
-            [param_types.StructField("threadid", param_types.INT64)]
-        )
-        array_type = param_types.Array(array_elem_type)
-        self._check_sql_results(
-            self._db,
-            sql="SELECT a.threadid FROM UNNEST(@struct_arr_param) a",
-            params={"struct_arr_param": None},
-            param_types={"struct_arr_param": array_type},
-            expected=[],
-            order=False,
-        )
-
-        # Query with non-null array-of-struct
-        self._check_sql_results(
-            self._db,
-            sql="SELECT a.threadid FROM UNNEST(@struct_arr_param) a",
-            params={"struct_arr_param": [(123,), (456,)]},
-            param_types={"struct_arr_param": array_type},
-            expected=[(123,), (456,)],
-            order=False,
-        )
-
-        # Query with null array-of-struct field
-        struct_type_with_array_field = param_types.Struct(
-            [
-                param_types.StructField("intf", param_types.INT64),
-                param_types.StructField("arraysf", array_type),
-            ]
-        )
-        self._check_sql_results(
-            self._db,
-            sql="SELECT a.threadid FROM UNNEST(@struct_param.arraysf) a",
-            params={"struct_param": (123, None)},
-            param_types={"struct_param": struct_type_with_array_field},
-            expected=[],
-            order=False,
-        )
-
-        # Query with non-null array-of-struct field
-        self._check_sql_results(
-            self._db,
-            sql="SELECT a.threadid FROM UNNEST(@struct_param.arraysf) a",
-            params={"struct_param": (123, ((456,), (789,)))},
-            param_types={"struct_param": struct_type_with_array_field},
-            expected=[(456,), (789,)],
-            order=False,
-        )
-
-        # Query with anonymous / repeated-name fields
-        anon_repeated_array_elem_type = param_types.Struct(
-            [
-                param_types.StructField("", param_types.INT64),
-                param_types.StructField("", param_types.STRING),
-            ]
-        )
-        anon_repeated_array_type = param_types.Array(anon_repeated_array_elem_type)
-        self._check_sql_results(
-            self._db,
-            sql="SELECT CAST(t as STRUCT<threadid INT64, userid STRING>).* "
-            "FROM UNNEST(@struct_param) t",
-            params={"struct_param": [(123, "abcdef")]},
-            param_types={"struct_param": anon_repeated_array_type},
-            expected=[(123, "abcdef")],
-            order=False,
-        )
-
-        # Query and return a struct parameter
-        value_type = param_types.Struct(
-            [
-                param_types.StructField("message", param_types.STRING),
-                param_types.StructField("repeat", param_types.INT64),
-            ]
-        )
-        value_query = (
-            "SELECT ARRAY(SELECT AS STRUCT message, repeat "
-            "FROM 
(SELECT @value.message AS message, " - "@value.repeat AS repeat)) AS value" - ) - self._check_sql_results( - self._db, - sql=value_query, - params={"value": ("hello", 1)}, - param_types={"value": value_type}, - expected=[([["hello", 1]],)], - order=False, - ) - - def test_execute_sql_returning_transfinite_floats(self): - - with self._db.snapshot(multi_use=True) as snapshot: - # Query returning -inf, +inf, NaN as column values - rows = list( - snapshot.execute_sql( - "SELECT " - 'CAST("-inf" AS FLOAT64), ' - 'CAST("+inf" AS FLOAT64), ' - 'CAST("NaN" AS FLOAT64)' - ) - ) - self.assertEqual(len(rows), 1) - self.assertEqual(rows[0][0], float("-inf")) - self.assertEqual(rows[0][1], float("+inf")) - # NaNs cannot be compared by equality. - self.assertTrue(math.isnan(rows[0][2])) - - # Query returning array of -inf, +inf, NaN as one column - rows = list( - snapshot.execute_sql( - "SELECT" - ' [CAST("-inf" AS FLOAT64),' - ' CAST("+inf" AS FLOAT64),' - ' CAST("NaN" AS FLOAT64)]' - ) - ) - self.assertEqual(len(rows), 1) - float_array, = rows[0] - self.assertEqual(float_array[0], float("-inf")) - self.assertEqual(float_array[1], float("+inf")) - # NaNs cannot be searched for by equality. - self.assertTrue(math.isnan(float_array[2])) - - @unittest.skipIf(USE_EMULATOR, "Skipping partitioned queries") - def test_partition_query(self): - row_count = 40 - sql = "SELECT * FROM {}".format(self.TABLE) - committed = self._set_up_table(row_count) - all_data_rows = list(self._row_data(row_count)) - - union = [] - batch_txn = self._db.batch_snapshot(read_timestamp=committed) - for batch in batch_txn.generate_query_batches(sql): - p_results_iter = batch_txn.process(batch) - union.extend(list(p_results_iter)) - - self.assertEqual(union, all_data_rows) - batch_txn.close() - - -class TestStreamingChunking(unittest.TestCase, _TestData): - @classmethod - def setUpClass(cls): - from tests.system.utils.streaming_utils import INSTANCE_NAME - from tests.system.utils.streaming_utils import DATABASE_NAME - - instance = Config.CLIENT.instance(INSTANCE_NAME) - if not instance.exists(): - raise unittest.SkipTest( - "Run 'tests/system/utils/populate_streaming.py' to enable." - ) - - database = instance.database(DATABASE_NAME) - if not instance.exists(): - raise unittest.SkipTest( - "Run 'tests/system/utils/populate_streaming.py' to enable." 
- ) - - cls._db = database - - def _verify_one_column(self, table_desc): - sql = "SELECT chunk_me FROM {}".format(table_desc.table) - with self._db.snapshot() as snapshot: - rows = list(snapshot.execute_sql(sql)) - self.assertEqual(len(rows), table_desc.row_count) - expected = table_desc.value() - for row in rows: - self.assertEqual(row[0], expected) - - def _verify_two_columns(self, table_desc): - sql = "SELECT chunk_me, chunk_me_2 FROM {}".format(table_desc.table) - with self._db.snapshot() as snapshot: - rows = list(snapshot.execute_sql(sql)) - self.assertEqual(len(rows), table_desc.row_count) - expected = table_desc.value() - for row in rows: - self.assertEqual(row[0], expected) - self.assertEqual(row[1], expected) - - def test_four_kay(self): - from tests.system.utils.streaming_utils import FOUR_KAY - - self._verify_one_column(FOUR_KAY) - - def test_forty_kay(self): - from tests.system.utils.streaming_utils import FORTY_KAY - - self._verify_one_column(FORTY_KAY) - - def test_four_hundred_kay(self): - from tests.system.utils.streaming_utils import FOUR_HUNDRED_KAY - - self._verify_one_column(FOUR_HUNDRED_KAY) - - def test_four_meg(self): - from tests.system.utils.streaming_utils import FOUR_MEG - - self._verify_two_columns(FOUR_MEG) - - -class CustomException(Exception): - """Placeholder for any user-defined exception.""" - - -class _DatabaseDropper(object): - """Helper for cleaning up databases created on-the-fly.""" - - def __init__(self, db): - self._db = db - - def delete(self): - self._db.drop() - - -class _ReadAbortTrigger(object): - """Helper for tests provoking abort-during-read.""" - - KEY1 = "key1" - KEY2 = "key2" - - def __init__(self): - self.provoker_started = threading.Event() - self.provoker_done = threading.Event() - self.handler_running = threading.Event() - self.handler_done = threading.Event() - - def _provoke_abort_unit_of_work(self, transaction): - keyset = KeySet(keys=[(self.KEY1,)]) - rows = list(transaction.read(COUNTERS_TABLE, COUNTERS_COLUMNS, keyset)) - - assert len(rows) == 1 - row = rows[0] - value = row[1] - - self.provoker_started.set() - - self.handler_running.wait() - - transaction.update(COUNTERS_TABLE, COUNTERS_COLUMNS, [[self.KEY1, value + 1]]) - - def provoke_abort(self, database): - database.run_in_transaction(self._provoke_abort_unit_of_work) - self.provoker_done.set() - - def _handle_abort_unit_of_work(self, transaction): - keyset_1 = KeySet(keys=[(self.KEY1,)]) - rows_1 = list(transaction.read(COUNTERS_TABLE, COUNTERS_COLUMNS, keyset_1)) - - assert len(rows_1) == 1 - row_1 = rows_1[0] - value_1 = row_1[1] - - self.handler_running.set() - - self.provoker_done.wait() - - keyset_2 = KeySet(keys=[(self.KEY2,)]) - rows_2 = list(transaction.read(COUNTERS_TABLE, COUNTERS_COLUMNS, keyset_2)) - - assert len(rows_2) == 1 - row_2 = rows_2[0] - value_2 = row_2[1] - - transaction.update( - COUNTERS_TABLE, COUNTERS_COLUMNS, [[self.KEY2, value_1 + value_2]] - ) - - def handle_abort(self, database): - database.run_in_transaction(self._handle_abort_unit_of_work) - self.handler_done.set() - - -class FauxCall(object): - def __init__(self, code, details="FauxCall"): - self._code = code - self._details = details - - def initial_metadata(self): - return {} - - def trailing_metadata(self): - return {} - - def code(self): - return self._code - - def details(self): - return self._details diff --git a/tests/system/test_table_api.py b/tests/system/test_table_api.py new file mode 100644 index 0000000000..80dbc1ccfc --- /dev/null +++ b/tests/system/test_table_api.py @@ -0,0 
+1,83 @@ +# Copyright 2021 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import pytest + +from google.api_core import exceptions +from google.cloud import spanner_v1 +from google.cloud.spanner_admin_database_v1 import DatabaseDialect + + +def test_table_exists(shared_database): + table = shared_database.table("all_types") + assert table.exists() + + +def test_table_exists_not_found(shared_database): + table = shared_database.table("table_does_not_exist") + assert not table.exists() + + +def test_table_exists_reload_database_dialect( + shared_instance, shared_database, not_emulator +): + database = shared_instance.database(shared_database.database_id) + assert database.database_dialect != DatabaseDialect.DATABASE_DIALECT_UNSPECIFIED + table = database.table("all_types") + assert table.exists() + assert database.database_dialect != DatabaseDialect.DATABASE_DIALECT_UNSPECIFIED + + +def test_db_list_tables(shared_database): + tables = shared_database.list_tables() + table_ids = set(table.table_id for table in tables) + assert "contacts" in table_ids + # assert "contact_phones" in table_ids + assert "all_types" in table_ids + + +def test_db_list_tables_reload(shared_database): + for table in shared_database.list_tables(): + assert table.exists() + schema = table.schema + assert isinstance(schema, list) + + +def test_table_reload_miss(shared_database): + table = shared_database.table("table_does_not_exist") + with pytest.raises(exceptions.NotFound): + table.reload() + + +def test_table_schema(shared_database, database_dialect): + table = shared_database.table("all_types") + schema = table.schema + expected = [ + ("pkey", spanner_v1.TypeCode.INT64), + ("int_value", spanner_v1.TypeCode.INT64), + ("bool_value", spanner_v1.TypeCode.BOOL), + ("bytes_value", spanner_v1.TypeCode.BYTES), + ("float_value", spanner_v1.TypeCode.FLOAT64), + ("string_value", spanner_v1.TypeCode.STRING), + ("timestamp_value", spanner_v1.TypeCode.TIMESTAMP), + ("date_value", spanner_v1.TypeCode.DATE), + ("int_array", spanner_v1.TypeCode.ARRAY), + ] + expected = ( + expected[:-2] if database_dialect == DatabaseDialect.POSTGRESQL else expected + ) + found = {field.name: field.type_.code for field in schema} + + for field_name, type_code in expected: + assert found[field_name] == type_code diff --git a/tests/system/testdata/descriptors.pb b/tests/system/testdata/descriptors.pb new file mode 100644 index 0000000000..0536d5004d Binary files /dev/null and b/tests/system/testdata/descriptors.pb differ diff --git a/tests/system/testdata/singer.proto b/tests/system/testdata/singer.proto new file mode 100644 index 0000000000..1a995614a7 --- /dev/null +++ b/tests/system/testdata/singer.proto @@ -0,0 +1,17 @@ +syntax = "proto3"; + +package examples.spanner.music; + +message SingerInfo { + optional int64 singer_id = 1; + optional string birth_date = 2; + optional string nationality = 3; + optional Genre genre = 4; +} + +enum Genre { + POP = 0; + JAZZ = 1; + FOLK = 2; + ROCK = 3; +} diff --git 
a/tests/system/testdata/singer_pb2.py b/tests/system/testdata/singer_pb2.py new file mode 100644 index 0000000000..51b049865c --- /dev/null +++ b/tests/system/testdata/singer_pb2.py @@ -0,0 +1,29 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! +# source: singer.proto +# Protobuf Python Version: 4.25.1 +"""Generated protocol buffer code.""" +from google.protobuf import descriptor as _descriptor +from google.protobuf import descriptor_pool as _descriptor_pool +from google.protobuf import symbol_database as _symbol_database +from google.protobuf.internal import builder as _builder + +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + +DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile( + b'\n\x0csinger.proto\x12\x16\x65xamples.spanner.music"\xc1\x01\n\nSingerInfo\x12\x16\n\tsinger_id\x18\x01 \x01(\x03H\x00\x88\x01\x01\x12\x17\n\nbirth_date\x18\x02 \x01(\tH\x01\x88\x01\x01\x12\x18\n\x0bnationality\x18\x03 \x01(\tH\x02\x88\x01\x01\x12\x31\n\x05genre\x18\x04 \x01(\x0e\x32\x1d.examples.spanner.music.GenreH\x03\x88\x01\x01\x42\x0c\n\n_singer_idB\r\n\x0b_birth_dateB\x0e\n\x0c_nationalityB\x08\n\x06_genre*.\n\x05Genre\x12\x07\n\x03POP\x10\x00\x12\x08\n\x04JAZZ\x10\x01\x12\x08\n\x04\x46OLK\x10\x02\x12\x08\n\x04ROCK\x10\x03\x62\x06proto3' +) + +_globals = globals() +_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals) +_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, "singer_pb2", _globals) +if _descriptor._USE_C_DESCRIPTORS == False: + DESCRIPTOR._options = None + _globals["_GENRE"]._serialized_start = 236 + _globals["_GENRE"]._serialized_end = 282 + _globals["_SINGERINFO"]._serialized_start = 41 + _globals["_SINGERINFO"]._serialized_end = 234 +# @@protoc_insertion_point(module_scope) diff --git a/tests/system/utils/streaming_utils.py b/tests/system/utils/streaming_utils.py index a39637bf0f..174ddae557 100644 --- a/tests/system/utils/streaming_utils.py +++ b/tests/system/utils/streaming_utils.py @@ -26,7 +26,7 @@ class _TableDesc( ) ): def value(self): - return u"X" * self.value_size + return "X" * self.value_size FOUR_KAY = _TableDesc("four_kay", 1000, 4096, 1) diff --git a/tests/unit/__init__.py b/tests/unit/__init__.py index df379f1e9d..cbf94b283c 100644 --- a/tests/unit/__init__.py +++ b/tests/unit/__init__.py @@ -1,4 +1,5 @@ -# Copyright 2016 Google LLC +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -11,3 +12,4 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. +# diff --git a/tests/unit/gapic/__init__.py b/tests/unit/gapic/__init__.py new file mode 100644 index 0000000000..cbf94b283c --- /dev/null +++ b/tests/unit/gapic/__init__.py @@ -0,0 +1,15 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# diff --git a/tests/unit/gapic/spanner_admin_database_v1/__init__.py b/tests/unit/gapic/spanner_admin_database_v1/__init__.py new file mode 100644 index 0000000000..cbf94b283c --- /dev/null +++ b/tests/unit/gapic/spanner_admin_database_v1/__init__.py @@ -0,0 +1,15 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# diff --git a/tests/unit/gapic/spanner_admin_database_v1/test_database_admin.py b/tests/unit/gapic/spanner_admin_database_v1/test_database_admin.py new file mode 100644 index 0000000000..f62b95c85d --- /dev/null +++ b/tests/unit/gapic/spanner_admin_database_v1/test_database_admin.py @@ -0,0 +1,24688 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
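# Aside: the generated tests below repeatedly construct a DatabaseAdminClient
# against a mocked transport so that nothing touches the network; this is a
# condensed sketch of that harness pattern, using only names the tests
# themselves import.
from unittest import mock

from google.cloud.spanner_admin_database_v1.services.database_admin import (
    DatabaseAdminClient,
    transports,
)

with mock.patch.object(transports.DatabaseAdminGrpcTransport, "__init__") as patched:
    patched.return_value = None  # skip real gRPC channel creation
    client = DatabaseAdminClient(transport="grpc")
    patched.assert_called_once()  # the client built its transport exactly once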
+# +import os +import re + +# try/except added for compatibility with python < 3.8 +try: + from unittest import mock + from unittest.mock import AsyncMock # pragma: NO COVER +except ImportError: # pragma: NO COVER + import mock + +import grpc +from grpc.experimental import aio +from collections.abc import Iterable, AsyncIterable +from google.protobuf import json_format +import json +import math +import pytest +from google.api_core import api_core_version +from proto.marshal.rules.dates import DurationRule, TimestampRule +from proto.marshal.rules import wrappers +from requests import Response +from requests import Request, PreparedRequest +from requests.sessions import Session +from google.protobuf import json_format + +try: + from google.auth.aio import credentials as ga_credentials_async + + HAS_GOOGLE_AUTH_AIO = True +except ImportError: # pragma: NO COVER + HAS_GOOGLE_AUTH_AIO = False + +from google.api_core import client_options +from google.api_core import exceptions as core_exceptions +from google.api_core import future +from google.api_core import gapic_v1 +from google.api_core import grpc_helpers +from google.api_core import grpc_helpers_async +from google.api_core import operation +from google.api_core import operation_async # type: ignore +from google.api_core import operations_v1 +from google.api_core import path_template +from google.api_core import retry as retries +from google.auth import credentials as ga_credentials +from google.auth.exceptions import MutualTLSChannelError +from google.cloud.spanner_admin_database_v1.services.database_admin import ( + DatabaseAdminAsyncClient, +) +from google.cloud.spanner_admin_database_v1.services.database_admin import ( + DatabaseAdminClient, +) +from google.cloud.spanner_admin_database_v1.services.database_admin import pagers +from google.cloud.spanner_admin_database_v1.services.database_admin import transports +from google.cloud.spanner_admin_database_v1.types import backup +from google.cloud.spanner_admin_database_v1.types import backup as gsad_backup +from google.cloud.spanner_admin_database_v1.types import backup_schedule +from google.cloud.spanner_admin_database_v1.types import ( + backup_schedule as gsad_backup_schedule, +) +from google.cloud.spanner_admin_database_v1.types import common +from google.cloud.spanner_admin_database_v1.types import spanner_database_admin +from google.iam.v1 import iam_policy_pb2 # type: ignore +from google.iam.v1 import options_pb2 # type: ignore +from google.iam.v1 import policy_pb2 # type: ignore +from google.longrunning import operations_pb2 # type: ignore +from google.longrunning import operations_pb2 # type: ignore +from google.oauth2 import service_account +from google.protobuf import any_pb2 # type: ignore +from google.protobuf import duration_pb2 # type: ignore +from google.protobuf import empty_pb2 # type: ignore +from google.protobuf import field_mask_pb2 # type: ignore +from google.protobuf import struct_pb2 # type: ignore +from google.protobuf import timestamp_pb2 # type: ignore +from google.rpc import status_pb2 # type: ignore +from google.type import expr_pb2 # type: ignore +import google.auth + + +CRED_INFO_JSON = { + "credential_source": "/path/to/file", + "credential_type": "service account credentials", + "principal": "service-account@example.com", +} +CRED_INFO_STRING = json.dumps(CRED_INFO_JSON) + + +async def mock_async_gen(data, chunk_size=1): + for i in range(0, len(data)): # pragma: NO COVER + chunk = data[i : i + chunk_size] + yield chunk.encode("utf-8") + + +def 
client_cert_source_callback(): + return b"cert bytes", b"key bytes" + + +# TODO: use async auth anon credentials by default once the minimum version of google-auth is upgraded. +# See related issue: https://github.com/googleapis/gapic-generator-python/issues/2107. +def async_anonymous_credentials(): + if HAS_GOOGLE_AUTH_AIO: + return ga_credentials_async.AnonymousCredentials() + return ga_credentials.AnonymousCredentials() + + +# If default endpoint is localhost, then default mtls endpoint will be the same. +# This method modifies the default endpoint so the client can produce a different +# mtls endpoint for endpoint testing purposes. +def modify_default_endpoint(client): + return ( + "foo.googleapis.com" + if ("localhost" in client.DEFAULT_ENDPOINT) + else client.DEFAULT_ENDPOINT + ) + + +# If default endpoint template is localhost, then default mtls endpoint will be the same. +# This method modifies the default endpoint template so the client can produce a different +# mtls endpoint for endpoint testing purposes. +def modify_default_endpoint_template(client): + return ( + "test.{UNIVERSE_DOMAIN}" + if ("localhost" in client._DEFAULT_ENDPOINT_TEMPLATE) + else client._DEFAULT_ENDPOINT_TEMPLATE + ) + + +def test__get_default_mtls_endpoint(): + api_endpoint = "example.googleapis.com" + api_mtls_endpoint = "example.mtls.googleapis.com" + sandbox_endpoint = "example.sandbox.googleapis.com" + sandbox_mtls_endpoint = "example.mtls.sandbox.googleapis.com" + non_googleapi = "api.example.com" + + assert DatabaseAdminClient._get_default_mtls_endpoint(None) is None + assert ( + DatabaseAdminClient._get_default_mtls_endpoint(api_endpoint) + == api_mtls_endpoint + ) + assert ( + DatabaseAdminClient._get_default_mtls_endpoint(api_mtls_endpoint) + == api_mtls_endpoint + ) + assert ( + DatabaseAdminClient._get_default_mtls_endpoint(sandbox_endpoint) + == sandbox_mtls_endpoint + ) + assert ( + DatabaseAdminClient._get_default_mtls_endpoint(sandbox_mtls_endpoint) + == sandbox_mtls_endpoint + ) + assert ( + DatabaseAdminClient._get_default_mtls_endpoint(non_googleapi) == non_googleapi + ) + + +def test__read_environment_variables(): + assert DatabaseAdminClient._read_environment_variables() == (False, "auto", None) + + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "true"}): + assert DatabaseAdminClient._read_environment_variables() == (True, "auto", None) + + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "false"}): + assert DatabaseAdminClient._read_environment_variables() == ( + False, + "auto", + None, + ) + + with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"} + ): + with pytest.raises(ValueError) as excinfo: + DatabaseAdminClient._read_environment_variables() + assert ( + str(excinfo.value) + == "Environment variable `GOOGLE_API_USE_CLIENT_CERTIFICATE` must be either `true` or `false`" + ) + + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}): + assert DatabaseAdminClient._read_environment_variables() == ( + False, + "never", + None, + ) + + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}): + assert DatabaseAdminClient._read_environment_variables() == ( + False, + "always", + None, + ) + + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "auto"}): + assert DatabaseAdminClient._read_environment_variables() == ( + False, + "auto", + None, + ) + + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}): + with 
pytest.raises(MutualTLSChannelError) as excinfo: + DatabaseAdminClient._read_environment_variables() + assert ( + str(excinfo.value) + == "Environment variable `GOOGLE_API_USE_MTLS_ENDPOINT` must be `never`, `auto` or `always`" + ) + + with mock.patch.dict(os.environ, {"GOOGLE_CLOUD_UNIVERSE_DOMAIN": "foo.com"}): + assert DatabaseAdminClient._read_environment_variables() == ( + False, + "auto", + "foo.com", + ) + + +def test__get_client_cert_source(): + mock_provided_cert_source = mock.Mock() + mock_default_cert_source = mock.Mock() + + assert DatabaseAdminClient._get_client_cert_source(None, False) is None + assert ( + DatabaseAdminClient._get_client_cert_source(mock_provided_cert_source, False) + is None + ) + assert ( + DatabaseAdminClient._get_client_cert_source(mock_provided_cert_source, True) + == mock_provided_cert_source + ) + + with mock.patch( + "google.auth.transport.mtls.has_default_client_cert_source", return_value=True + ): + with mock.patch( + "google.auth.transport.mtls.default_client_cert_source", + return_value=mock_default_cert_source, + ): + assert ( + DatabaseAdminClient._get_client_cert_source(None, True) + is mock_default_cert_source + ) + assert ( + DatabaseAdminClient._get_client_cert_source( + mock_provided_cert_source, "true" + ) + is mock_provided_cert_source + ) + + +@mock.patch.object( + DatabaseAdminClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(DatabaseAdminClient), +) +@mock.patch.object( + DatabaseAdminAsyncClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(DatabaseAdminAsyncClient), +) +def test__get_api_endpoint(): + api_override = "foo.com" + mock_client_cert_source = mock.Mock() + default_universe = DatabaseAdminClient._DEFAULT_UNIVERSE + default_endpoint = DatabaseAdminClient._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=default_universe + ) + mock_universe = "bar.com" + mock_endpoint = DatabaseAdminClient._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=mock_universe + ) + + assert ( + DatabaseAdminClient._get_api_endpoint( + api_override, mock_client_cert_source, default_universe, "always" + ) + == api_override + ) + assert ( + DatabaseAdminClient._get_api_endpoint( + None, mock_client_cert_source, default_universe, "auto" + ) + == DatabaseAdminClient.DEFAULT_MTLS_ENDPOINT + ) + assert ( + DatabaseAdminClient._get_api_endpoint(None, None, default_universe, "auto") + == default_endpoint + ) + assert ( + DatabaseAdminClient._get_api_endpoint(None, None, default_universe, "always") + == DatabaseAdminClient.DEFAULT_MTLS_ENDPOINT + ) + assert ( + DatabaseAdminClient._get_api_endpoint( + None, mock_client_cert_source, default_universe, "always" + ) + == DatabaseAdminClient.DEFAULT_MTLS_ENDPOINT + ) + assert ( + DatabaseAdminClient._get_api_endpoint(None, None, mock_universe, "never") + == mock_endpoint + ) + assert ( + DatabaseAdminClient._get_api_endpoint(None, None, default_universe, "never") + == default_endpoint + ) + + with pytest.raises(MutualTLSChannelError) as excinfo: + DatabaseAdminClient._get_api_endpoint( + None, mock_client_cert_source, mock_universe, "auto" + ) + assert ( + str(excinfo.value) + == "mTLS is not supported in any universe other than googleapis.com." 
+ ) + + +def test__get_universe_domain(): + client_universe_domain = "foo.com" + universe_domain_env = "bar.com" + + assert ( + DatabaseAdminClient._get_universe_domain( + client_universe_domain, universe_domain_env + ) + == client_universe_domain + ) + assert ( + DatabaseAdminClient._get_universe_domain(None, universe_domain_env) + == universe_domain_env + ) + assert ( + DatabaseAdminClient._get_universe_domain(None, None) + == DatabaseAdminClient._DEFAULT_UNIVERSE + ) + + with pytest.raises(ValueError) as excinfo: + DatabaseAdminClient._get_universe_domain("", None) + assert str(excinfo.value) == "Universe Domain cannot be an empty string." + + +@pytest.mark.parametrize( + "error_code,cred_info_json,show_cred_info", + [ + (401, CRED_INFO_JSON, True), + (403, CRED_INFO_JSON, True), + (404, CRED_INFO_JSON, True), + (500, CRED_INFO_JSON, False), + (401, None, False), + (403, None, False), + (404, None, False), + (500, None, False), + ], +) +def test__add_cred_info_for_auth_errors(error_code, cred_info_json, show_cred_info): + cred = mock.Mock(["get_cred_info"]) + cred.get_cred_info = mock.Mock(return_value=cred_info_json) + client = DatabaseAdminClient(credentials=cred) + client._transport._credentials = cred + + error = core_exceptions.GoogleAPICallError("message", details=["foo"]) + error.code = error_code + + client._add_cred_info_for_auth_errors(error) + if show_cred_info: + assert error.details == ["foo", CRED_INFO_STRING] + else: + assert error.details == ["foo"] + + +@pytest.mark.parametrize("error_code", [401, 403, 404, 500]) +def test__add_cred_info_for_auth_errors_no_get_cred_info(error_code): + cred = mock.Mock([]) + assert not hasattr(cred, "get_cred_info") + client = DatabaseAdminClient(credentials=cred) + client._transport._credentials = cred + + error = core_exceptions.GoogleAPICallError("message", details=[]) + error.code = error_code + + client._add_cred_info_for_auth_errors(error) + assert error.details == [] + + +@pytest.mark.parametrize( + "client_class,transport_name", + [ + (DatabaseAdminClient, "grpc"), + (DatabaseAdminAsyncClient, "grpc_asyncio"), + (DatabaseAdminClient, "rest"), + ], +) +def test_database_admin_client_from_service_account_info(client_class, transport_name): + creds = ga_credentials.AnonymousCredentials() + with mock.patch.object( + service_account.Credentials, "from_service_account_info" + ) as factory: + factory.return_value = creds + info = {"valid": True} + client = client_class.from_service_account_info(info, transport=transport_name) + assert client.transport._credentials == creds + assert isinstance(client, client_class) + + assert client.transport._host == ( + "spanner.googleapis.com:443" + if transport_name in ["grpc", "grpc_asyncio"] + else "https://spanner.googleapis.com" + ) + + +@pytest.mark.parametrize( + "transport_class,transport_name", + [ + (transports.DatabaseAdminGrpcTransport, "grpc"), + (transports.DatabaseAdminGrpcAsyncIOTransport, "grpc_asyncio"), + (transports.DatabaseAdminRestTransport, "rest"), + ], +) +def test_database_admin_client_service_account_always_use_jwt( + transport_class, transport_name +): + with mock.patch.object( + service_account.Credentials, "with_always_use_jwt_access", create=True + ) as use_jwt: + creds = service_account.Credentials(None, None, None) + transport = transport_class(credentials=creds, always_use_jwt_access=True) + use_jwt.assert_called_once_with(True) + + with mock.patch.object( + service_account.Credentials, "with_always_use_jwt_access", create=True + ) as use_jwt: + creds = 
service_account.Credentials(None, None, None) + transport = transport_class(credentials=creds, always_use_jwt_access=False) + use_jwt.assert_not_called() + + +@pytest.mark.parametrize( + "client_class,transport_name", + [ + (DatabaseAdminClient, "grpc"), + (DatabaseAdminAsyncClient, "grpc_asyncio"), + (DatabaseAdminClient, "rest"), + ], +) +def test_database_admin_client_from_service_account_file(client_class, transport_name): + creds = ga_credentials.AnonymousCredentials() + with mock.patch.object( + service_account.Credentials, "from_service_account_file" + ) as factory: + factory.return_value = creds + client = client_class.from_service_account_file( + "dummy/file/path.json", transport=transport_name + ) + assert client.transport._credentials == creds + assert isinstance(client, client_class) + + client = client_class.from_service_account_json( + "dummy/file/path.json", transport=transport_name + ) + assert client.transport._credentials == creds + assert isinstance(client, client_class) + + assert client.transport._host == ( + "spanner.googleapis.com:443" + if transport_name in ["grpc", "grpc_asyncio"] + else "https://spanner.googleapis.com" + ) + + +def test_database_admin_client_get_transport_class(): + transport = DatabaseAdminClient.get_transport_class() + available_transports = [ + transports.DatabaseAdminGrpcTransport, + transports.DatabaseAdminRestTransport, + ] + assert transport in available_transports + + transport = DatabaseAdminClient.get_transport_class("grpc") + assert transport == transports.DatabaseAdminGrpcTransport + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + (DatabaseAdminClient, transports.DatabaseAdminGrpcTransport, "grpc"), + ( + DatabaseAdminAsyncClient, + transports.DatabaseAdminGrpcAsyncIOTransport, + "grpc_asyncio", + ), + (DatabaseAdminClient, transports.DatabaseAdminRestTransport, "rest"), + ], +) +@mock.patch.object( + DatabaseAdminClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(DatabaseAdminClient), +) +@mock.patch.object( + DatabaseAdminAsyncClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(DatabaseAdminAsyncClient), +) +def test_database_admin_client_client_options( + client_class, transport_class, transport_name +): + # Check that if channel is provided we won't create a new one. + with mock.patch.object(DatabaseAdminClient, "get_transport_class") as gtc: + transport = transport_class(credentials=ga_credentials.AnonymousCredentials()) + client = client_class(transport=transport) + gtc.assert_not_called() + + # Check that if channel is provided via str we will create a new one. + with mock.patch.object(DatabaseAdminClient, "get_transport_class") as gtc: + client = client_class(transport=transport_name) + gtc.assert_called() + + # Check the case api_endpoint is provided. + options = client_options.ClientOptions(api_endpoint="squid.clam.whelk") + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(transport=transport_name, client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host="squid.clam.whelk", + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is + # "never". 
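# Aside: what these endpoint cases assert, seen from a user's perspective: the
# GOOGLE_API_USE_MTLS_ENDPOINT environment variable steers which endpoint the
# client selects. A minimal sketch (anonymous credentials keep it offline):
import os
from unittest import mock

from google.auth import credentials as ga_credentials
from google.cloud.spanner_admin_database_v1 import DatabaseAdminClient

with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}):
    client = DatabaseAdminClient(credentials=ga_credentials.AnonymousCredentials())
    # "always" forces the mTLS endpoint; "never" would yield the default one.
    assert client.api_endpoint == DatabaseAdminClient.DEFAULT_MTLS_ENDPOINT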
+ with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}): + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is + # "always". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}): + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_MTLS_ENDPOINT, + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT has + # unsupported value. + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}): + with pytest.raises(MutualTLSChannelError) as excinfo: + client = client_class(transport=transport_name) + assert ( + str(excinfo.value) + == "Environment variable `GOOGLE_API_USE_MTLS_ENDPOINT` must be `never`, `auto` or `always`" + ) + + # Check the case GOOGLE_API_USE_CLIENT_CERTIFICATE has unsupported value. + with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"} + ): + with pytest.raises(ValueError) as excinfo: + client = client_class(transport=transport_name) + assert ( + str(excinfo.value) + == "Environment variable `GOOGLE_API_USE_CLIENT_CERTIFICATE` must be either `true` or `false`" + ) + + # Check the case quota_project_id is provided + options = client_options.ClientOptions(quota_project_id="octopus") + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options, transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id="octopus", + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) + # Check the case api_endpoint is provided + options = client_options.ClientOptions( + api_audience="https://language.googleapis.com" + ) + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options, transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience="https://language.googleapis.com", + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name,use_client_cert_env", + [ + (DatabaseAdminClient, 
transports.DatabaseAdminGrpcTransport, "grpc", "true"), + ( + DatabaseAdminAsyncClient, + transports.DatabaseAdminGrpcAsyncIOTransport, + "grpc_asyncio", + "true", + ), + (DatabaseAdminClient, transports.DatabaseAdminGrpcTransport, "grpc", "false"), + ( + DatabaseAdminAsyncClient, + transports.DatabaseAdminGrpcAsyncIOTransport, + "grpc_asyncio", + "false", + ), + (DatabaseAdminClient, transports.DatabaseAdminRestTransport, "rest", "true"), + (DatabaseAdminClient, transports.DatabaseAdminRestTransport, "rest", "false"), + ], +) +@mock.patch.object( + DatabaseAdminClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(DatabaseAdminClient), +) +@mock.patch.object( + DatabaseAdminAsyncClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(DatabaseAdminAsyncClient), +) +@mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "auto"}) +def test_database_admin_client_mtls_env_auto( + client_class, transport_class, transport_name, use_client_cert_env +): + # This tests the endpoint autoswitch behavior. Endpoint is autoswitched to the default + # mtls endpoint, if GOOGLE_API_USE_CLIENT_CERTIFICATE is "true" and client cert exists. + + # Check the case client_cert_source is provided. Whether client cert is used depends on + # GOOGLE_API_USE_CLIENT_CERTIFICATE value. + with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + options = client_options.ClientOptions( + client_cert_source=client_cert_source_callback + ) + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options, transport=transport_name) + + if use_client_cert_env == "false": + expected_client_cert_source = None + expected_host = client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ) + else: + expected_client_cert_source = client_cert_source_callback + expected_host = client.DEFAULT_MTLS_ENDPOINT + + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=expected_host, + scopes=None, + client_cert_source_for_mtls=expected_client_cert_source, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) + + # Check the case ADC client cert is provided. Whether client cert is used depends on + # GOOGLE_API_USE_CLIENT_CERTIFICATE value. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + with mock.patch.object(transport_class, "__init__") as patched: + with mock.patch( + "google.auth.transport.mtls.has_default_client_cert_source", + return_value=True, + ): + with mock.patch( + "google.auth.transport.mtls.default_client_cert_source", + return_value=client_cert_source_callback, + ): + if use_client_cert_env == "false": + expected_host = client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ) + expected_client_cert_source = None + else: + expected_host = client.DEFAULT_MTLS_ENDPOINT + expected_client_cert_source = client_cert_source_callback + + patched.return_value = None + client = client_class(transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=expected_host, + scopes=None, + client_cert_source_for_mtls=expected_client_cert_source, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) + + # Check the case client_cert_source and ADC client cert are not provided. + with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + with mock.patch.object(transport_class, "__init__") as patched: + with mock.patch( + "google.auth.transport.mtls.has_default_client_cert_source", + return_value=False, + ): + patched.return_value = None + client = client_class(transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) + + +@pytest.mark.parametrize( + "client_class", [DatabaseAdminClient, DatabaseAdminAsyncClient] +) +@mock.patch.object( + DatabaseAdminClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(DatabaseAdminClient), +) +@mock.patch.object( + DatabaseAdminAsyncClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(DatabaseAdminAsyncClient), +) +def test_database_admin_client_get_mtls_endpoint_and_cert_source(client_class): + mock_client_cert_source = mock.Mock() + + # Test the case GOOGLE_API_USE_CLIENT_CERTIFICATE is "true". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "true"}): + mock_api_endpoint = "foo" + options = client_options.ClientOptions( + client_cert_source=mock_client_cert_source, api_endpoint=mock_api_endpoint + ) + api_endpoint, cert_source = client_class.get_mtls_endpoint_and_cert_source( + options + ) + assert api_endpoint == mock_api_endpoint + assert cert_source == mock_client_cert_source + + # Test the case GOOGLE_API_USE_CLIENT_CERTIFICATE is "false". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "false"}): + mock_client_cert_source = mock.Mock() + mock_api_endpoint = "foo" + options = client_options.ClientOptions( + client_cert_source=mock_client_cert_source, api_endpoint=mock_api_endpoint + ) + api_endpoint, cert_source = client_class.get_mtls_endpoint_and_cert_source( + options + ) + assert api_endpoint == mock_api_endpoint + assert cert_source is None + + # Test the case GOOGLE_API_USE_MTLS_ENDPOINT is "never". 
+ with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}): + api_endpoint, cert_source = client_class.get_mtls_endpoint_and_cert_source() + assert api_endpoint == client_class.DEFAULT_ENDPOINT + assert cert_source is None + + # Test the case GOOGLE_API_USE_MTLS_ENDPOINT is "always". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}): + api_endpoint, cert_source = client_class.get_mtls_endpoint_and_cert_source() + assert api_endpoint == client_class.DEFAULT_MTLS_ENDPOINT + assert cert_source is None + + # Test the case GOOGLE_API_USE_MTLS_ENDPOINT is "auto" and default cert doesn't exist. + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "true"}): + with mock.patch( + "google.auth.transport.mtls.has_default_client_cert_source", + return_value=False, + ): + api_endpoint, cert_source = client_class.get_mtls_endpoint_and_cert_source() + assert api_endpoint == client_class.DEFAULT_ENDPOINT + assert cert_source is None + + # Test the case GOOGLE_API_USE_MTLS_ENDPOINT is "auto" and default cert exists. + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "true"}): + with mock.patch( + "google.auth.transport.mtls.has_default_client_cert_source", + return_value=True, + ): + with mock.patch( + "google.auth.transport.mtls.default_client_cert_source", + return_value=mock_client_cert_source, + ): + ( + api_endpoint, + cert_source, + ) = client_class.get_mtls_endpoint_and_cert_source() + assert api_endpoint == client_class.DEFAULT_MTLS_ENDPOINT + assert cert_source == mock_client_cert_source + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT has + # unsupported value. + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}): + with pytest.raises(MutualTLSChannelError) as excinfo: + client_class.get_mtls_endpoint_and_cert_source() + + assert ( + str(excinfo.value) + == "Environment variable `GOOGLE_API_USE_MTLS_ENDPOINT` must be `never`, `auto` or `always`" + ) + + # Check the case GOOGLE_API_USE_CLIENT_CERTIFICATE has unsupported value. + with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"} + ): + with pytest.raises(ValueError) as excinfo: + client_class.get_mtls_endpoint_and_cert_source() + + assert ( + str(excinfo.value) + == "Environment variable `GOOGLE_API_USE_CLIENT_CERTIFICATE` must be either `true` or `false`" + ) + + +@pytest.mark.parametrize( + "client_class", [DatabaseAdminClient, DatabaseAdminAsyncClient] +) +@mock.patch.object( + DatabaseAdminClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(DatabaseAdminClient), +) +@mock.patch.object( + DatabaseAdminAsyncClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(DatabaseAdminAsyncClient), +) +def test_database_admin_client_client_api_endpoint(client_class): + mock_client_cert_source = client_cert_source_callback + api_override = "foo.com" + default_universe = DatabaseAdminClient._DEFAULT_UNIVERSE + default_endpoint = DatabaseAdminClient._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=default_universe + ) + mock_universe = "bar.com" + mock_endpoint = DatabaseAdminClient._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=mock_universe + ) + + # If ClientOptions.api_endpoint is set and GOOGLE_API_USE_CLIENT_CERTIFICATE="true", + # use ClientOptions.api_endpoint as the api endpoint regardless. 
+ with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "true"}): + with mock.patch( + "google.auth.transport.requests.AuthorizedSession.configure_mtls_channel" + ): + options = client_options.ClientOptions( + client_cert_source=mock_client_cert_source, api_endpoint=api_override + ) + client = client_class( + client_options=options, + credentials=ga_credentials.AnonymousCredentials(), + ) + assert client.api_endpoint == api_override + + # If ClientOptions.api_endpoint is not set and GOOGLE_API_USE_MTLS_ENDPOINT="never", + # use the _DEFAULT_ENDPOINT_TEMPLATE populated with GDU as the api endpoint. + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}): + client = client_class(credentials=ga_credentials.AnonymousCredentials()) + assert client.api_endpoint == default_endpoint + + # If ClientOptions.api_endpoint is not set and GOOGLE_API_USE_MTLS_ENDPOINT="always", + # use the DEFAULT_MTLS_ENDPOINT as the api endpoint. + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}): + client = client_class(credentials=ga_credentials.AnonymousCredentials()) + assert client.api_endpoint == client_class.DEFAULT_MTLS_ENDPOINT + + # If ClientOptions.api_endpoint is not set, GOOGLE_API_USE_MTLS_ENDPOINT="auto" (default), + # GOOGLE_API_USE_CLIENT_CERTIFICATE="false" (default), default cert source doesn't exist, + # and ClientOptions.universe_domain="bar.com", + # use the _DEFAULT_ENDPOINT_TEMPLATE populated with universe domain as the api endpoint. + options = client_options.ClientOptions() + universe_exists = hasattr(options, "universe_domain") + if universe_exists: + options = client_options.ClientOptions(universe_domain=mock_universe) + client = client_class( + client_options=options, credentials=ga_credentials.AnonymousCredentials() + ) + else: + client = client_class( + client_options=options, credentials=ga_credentials.AnonymousCredentials() + ) + assert client.api_endpoint == ( + mock_endpoint if universe_exists else default_endpoint + ) + assert client.universe_domain == ( + mock_universe if universe_exists else default_universe + ) + + # If ClientOptions does not have a universe domain attribute and GOOGLE_API_USE_MTLS_ENDPOINT="never", + # use the _DEFAULT_ENDPOINT_TEMPLATE populated with GDU as the api endpoint. + options = client_options.ClientOptions() + if hasattr(options, "universe_domain"): + delattr(options, "universe_domain") + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}): + client = client_class( + client_options=options, credentials=ga_credentials.AnonymousCredentials() + ) + assert client.api_endpoint == default_endpoint + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + (DatabaseAdminClient, transports.DatabaseAdminGrpcTransport, "grpc"), + ( + DatabaseAdminAsyncClient, + transports.DatabaseAdminGrpcAsyncIOTransport, + "grpc_asyncio", + ), + (DatabaseAdminClient, transports.DatabaseAdminRestTransport, "rest"), + ], +) +def test_database_admin_client_client_options_scopes( + client_class, transport_class, transport_name +): + # Check the case scopes are provided. 
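+    # (Scopes supplied via ClientOptions should reach the transport
+    # constructor unchanged; see the `scopes` argument asserted below.)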
+ options = client_options.ClientOptions( + scopes=["1", "2"], + ) + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options, transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=["1", "2"], + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name,grpc_helpers", + [ + ( + DatabaseAdminClient, + transports.DatabaseAdminGrpcTransport, + "grpc", + grpc_helpers, + ), + ( + DatabaseAdminAsyncClient, + transports.DatabaseAdminGrpcAsyncIOTransport, + "grpc_asyncio", + grpc_helpers_async, + ), + (DatabaseAdminClient, transports.DatabaseAdminRestTransport, "rest", None), + ], +) +def test_database_admin_client_client_options_credentials_file( + client_class, transport_class, transport_name, grpc_helpers +): + # Check the case credentials file is provided. + options = client_options.ClientOptions(credentials_file="credentials.json") + + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options, transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file="credentials.json", + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) + + +def test_database_admin_client_client_options_from_dict(): + with mock.patch( + "google.cloud.spanner_admin_database_v1.services.database_admin.transports.DatabaseAdminGrpcTransport.__init__" + ) as grpc_transport: + grpc_transport.return_value = None + client = DatabaseAdminClient( + client_options={"api_endpoint": "squid.clam.whelk"} + ) + grpc_transport.assert_called_once_with( + credentials=None, + credentials_file=None, + host="squid.clam.whelk", + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name,grpc_helpers", + [ + ( + DatabaseAdminClient, + transports.DatabaseAdminGrpcTransport, + "grpc", + grpc_helpers, + ), + ( + DatabaseAdminAsyncClient, + transports.DatabaseAdminGrpcAsyncIOTransport, + "grpc_asyncio", + grpc_helpers_async, + ), + ], +) +def test_database_admin_client_create_channel_credentials_file( + client_class, transport_class, transport_name, grpc_helpers +): + # Check the case credentials file is provided. 
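+    # (The path should be forwarded to the transport as `credentials_file`
+    # with `credentials` left as None; the second half of this test then
+    # verifies the file is actually loaded via `load_credentials_from_file`.)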
+ options = client_options.ClientOptions(credentials_file="credentials.json") + + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options, transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file="credentials.json", + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) + + # test that the credentials from file are saved and used as the credentials. + with mock.patch.object( + google.auth, "load_credentials_from_file", autospec=True + ) as load_creds, mock.patch.object( + google.auth, "default", autospec=True + ) as adc, mock.patch.object( + grpc_helpers, "create_channel" + ) as create_channel: + creds = ga_credentials.AnonymousCredentials() + file_creds = ga_credentials.AnonymousCredentials() + load_creds.return_value = (file_creds, None) + adc.return_value = (creds, None) + client = client_class(client_options=options, transport=transport_name) + create_channel.assert_called_with( + "spanner.googleapis.com:443", + credentials=file_creds, + credentials_file=None, + quota_project_id=None, + default_scopes=( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.admin", + ), + scopes=None, + default_host="spanner.googleapis.com", + ssl_credentials=None, + options=[ + ("grpc.max_send_message_length", -1), + ("grpc.max_receive_message_length", -1), + ], + ) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_database_admin.ListDatabasesRequest, + dict, + ], +) +def test_list_databases(request_type, transport: str = "grpc"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_databases), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = spanner_database_admin.ListDatabasesResponse( + next_page_token="next_page_token_value", + ) + response = client.list_databases(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner_database_admin.ListDatabasesRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListDatabasesPager) + assert response.next_page_token == "next_page_token_value" + + +def test_list_databases_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. 
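+    # (AIP-4235 covers auto-populated request IDs: a string field annotated
+    # as a UUID4 request ID should be filled in by the client when left
+    # unset, which is what this failsafe checks.)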
+ request = spanner_database_admin.ListDatabasesRequest( + parent="parent_value", + page_token="page_token_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_databases), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.list_databases(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_database_admin.ListDatabasesRequest( + parent="parent_value", + page_token="page_token_value", + ) + + +def test_list_databases_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.list_databases in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.list_databases] = mock_rpc + request = {} + client.list_databases(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.list_databases(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_databases_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.list_databases + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.list_databases + ] = mock_rpc + + request = {} + await client.list_databases(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.list_databases(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_databases_async( + transport: str = "grpc_asyncio", + request_type=spanner_database_admin.ListDatabasesRequest, +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. 
+ with mock.patch.object(type(client.transport.list_databases), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_database_admin.ListDatabasesResponse( + next_page_token="next_page_token_value", + ) + ) + response = await client.list_databases(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_database_admin.ListDatabasesRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListDatabasesAsyncPager) + assert response.next_page_token == "next_page_token_value" + + +@pytest.mark.asyncio +async def test_list_databases_async_from_dict(): + await test_list_databases_async(request_type=dict) + + +def test_list_databases_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.ListDatabasesRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_databases), "__call__") as call: + call.return_value = spanner_database_admin.ListDatabasesResponse() + client.list_databases(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_list_databases_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.ListDatabasesRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_databases), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_database_admin.ListDatabasesResponse() + ) + await client.list_databases(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +def test_list_databases_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_databases), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = spanner_database_admin.ListDatabasesResponse() + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.list_databases( + parent="parent_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. 
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+
+
+def test_list_databases_flattened_error():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.list_databases(
+            spanner_database_admin.ListDatabasesRequest(),
+            parent="parent_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_list_databases_flattened_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.list_databases), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            spanner_database_admin.ListDatabasesResponse()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.list_databases(
+            parent="parent_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_list_databases_flattened_error_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.list_databases(
+            spanner_database_admin.ListDatabasesRequest(),
+            parent="parent_value",
+        )
+
+
+def test_list_databases_pager(transport_name: str = "grpc"):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport_name,
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.list_databases), "__call__") as call:
+        # Set the response to a series of pages.
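+        # (The trailing RuntimeError is a sentinel: the pager should stop at
+        # the final page's empty next_page_token, so fully consuming the
+        # pager below must never trigger a fifth call.)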
+ call.side_effect = ( + spanner_database_admin.ListDatabasesResponse( + databases=[ + spanner_database_admin.Database(), + spanner_database_admin.Database(), + spanner_database_admin.Database(), + ], + next_page_token="abc", + ), + spanner_database_admin.ListDatabasesResponse( + databases=[], + next_page_token="def", + ), + spanner_database_admin.ListDatabasesResponse( + databases=[ + spanner_database_admin.Database(), + ], + next_page_token="ghi", + ), + spanner_database_admin.ListDatabasesResponse( + databases=[ + spanner_database_admin.Database(), + spanner_database_admin.Database(), + ], + ), + RuntimeError, + ) + + expected_metadata = () + retry = retries.Retry() + timeout = 5 + expected_metadata = tuple(expected_metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)), + ) + pager = client.list_databases(request={}, retry=retry, timeout=timeout) + + assert pager._metadata == expected_metadata + assert pager._retry == retry + assert pager._timeout == timeout + + results = list(pager) + assert len(results) == 6 + assert all(isinstance(i, spanner_database_admin.Database) for i in results) + + +def test_list_databases_pages(transport_name: str = "grpc"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport_name, + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_databases), "__call__") as call: + # Set the response to a series of pages. + call.side_effect = ( + spanner_database_admin.ListDatabasesResponse( + databases=[ + spanner_database_admin.Database(), + spanner_database_admin.Database(), + spanner_database_admin.Database(), + ], + next_page_token="abc", + ), + spanner_database_admin.ListDatabasesResponse( + databases=[], + next_page_token="def", + ), + spanner_database_admin.ListDatabasesResponse( + databases=[ + spanner_database_admin.Database(), + ], + next_page_token="ghi", + ), + spanner_database_admin.ListDatabasesResponse( + databases=[ + spanner_database_admin.Database(), + spanner_database_admin.Database(), + ], + ), + RuntimeError, + ) + pages = list(client.list_databases(request={}).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.asyncio +async def test_list_databases_async_pager(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_databases), "__call__", new_callable=mock.AsyncMock + ) as call: + # Set the response to a series of pages. 
+ call.side_effect = ( + spanner_database_admin.ListDatabasesResponse( + databases=[ + spanner_database_admin.Database(), + spanner_database_admin.Database(), + spanner_database_admin.Database(), + ], + next_page_token="abc", + ), + spanner_database_admin.ListDatabasesResponse( + databases=[], + next_page_token="def", + ), + spanner_database_admin.ListDatabasesResponse( + databases=[ + spanner_database_admin.Database(), + ], + next_page_token="ghi", + ), + spanner_database_admin.ListDatabasesResponse( + databases=[ + spanner_database_admin.Database(), + spanner_database_admin.Database(), + ], + ), + RuntimeError, + ) + async_pager = await client.list_databases( + request={}, + ) + assert async_pager.next_page_token == "abc" + responses = [] + async for response in async_pager: # pragma: no branch + responses.append(response) + + assert len(responses) == 6 + assert all(isinstance(i, spanner_database_admin.Database) for i in responses) + + +@pytest.mark.asyncio +async def test_list_databases_async_pages(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_databases), "__call__", new_callable=mock.AsyncMock + ) as call: + # Set the response to a series of pages. + call.side_effect = ( + spanner_database_admin.ListDatabasesResponse( + databases=[ + spanner_database_admin.Database(), + spanner_database_admin.Database(), + spanner_database_admin.Database(), + ], + next_page_token="abc", + ), + spanner_database_admin.ListDatabasesResponse( + databases=[], + next_page_token="def", + ), + spanner_database_admin.ListDatabasesResponse( + databases=[ + spanner_database_admin.Database(), + ], + next_page_token="ghi", + ), + spanner_database_admin.ListDatabasesResponse( + databases=[ + spanner_database_admin.Database(), + spanner_database_admin.Database(), + ], + ), + RuntimeError, + ) + pages = [] + # Workaround issue in python 3.9 related to code coverage by adding `# pragma: no branch` + # See https://github.com/googleapis/gapic-generator-python/pull/1174#issuecomment-1025132372 + async for page_ in ( # pragma: no branch + await client.list_databases(request={}) + ).pages: + pages.append(page_) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_database_admin.CreateDatabaseRequest, + dict, + ], +) +def test_create_database(request_type, transport: str = "grpc"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_database), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/spam") + response = client.create_database(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner_database_admin.CreateDatabaseRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, future.Future) + + +def test_create_database_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner_database_admin.CreateDatabaseRequest( + parent="parent_value", + create_statement="create_statement_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_database), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.create_database(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_database_admin.CreateDatabaseRequest( + parent="parent_value", + create_statement="create_statement_value", + ) + + +def test_create_database_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.create_database in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.create_database] = mock_rpc + request = {} + client.create_database(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. 
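+        # (That construction itself goes through wrap_method, so the counter
+        # is cleared before asserting that the second RPC adds no wrapper.)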
+ # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.create_database(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_create_database_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.create_database + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.create_database + ] = mock_rpc + + request = {} + await client.create_database(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. + # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + await client.create_database(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_create_database_async( + transport: str = "grpc_asyncio", + request_type=spanner_database_admin.CreateDatabaseRequest, +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_database), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + response = await client.create_database(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_database_admin.CreateDatabaseRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +@pytest.mark.asyncio +async def test_create_database_async_from_dict(): + await test_create_database_async(request_type=dict) + + +def test_create_database_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.CreateDatabaseRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. 
+ with mock.patch.object(type(client.transport.create_database), "__call__") as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.create_database(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_create_database_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.CreateDatabaseRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_database), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/op") + ) + await client.create_database(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +def test_create_database_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_database), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/op") + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.create_database( + parent="parent_value", + create_statement="create_statement_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + arg = args[0].parent + mock_val = "parent_value" + assert arg == mock_val + arg = args[0].create_statement + mock_val = "create_statement_value" + assert arg == mock_val + + +def test_create_database_flattened_error(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.create_database( + spanner_database_admin.CreateDatabaseRequest(), + parent="parent_value", + create_statement="create_statement_value", + ) + + +@pytest.mark.asyncio +async def test_create_database_flattened_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_database), "__call__") as call: + # Designate an appropriate return value for the call. 
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            operations_pb2.Operation(name="operations/spam")
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.create_database(
+            parent="parent_value",
+            create_statement="create_statement_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+        arg = args[0].create_statement
+        mock_val = "create_statement_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_create_database_flattened_error_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.create_database(
+            spanner_database_admin.CreateDatabaseRequest(),
+            parent="parent_value",
+            create_statement="create_statement_value",
+        )
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner_database_admin.GetDatabaseRequest,
+        dict,
+    ],
+)
+def test_get_database(request_type, transport: str = "grpc"):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.get_database), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = spanner_database_admin.Database(
+            name="name_value",
+            state=spanner_database_admin.Database.State.CREATING,
+            version_retention_period="version_retention_period_value",
+            default_leader="default_leader_value",
+            database_dialect=common.DatabaseDialect.GOOGLE_STANDARD_SQL,
+            enable_drop_protection=True,
+            reconciling=True,
+        )
+        response = client.get_database(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        request = spanner_database_admin.GetDatabaseRequest()
+        assert args[0] == request
+
+    # Establish that the response is the type that we expect.
+    assert isinstance(response, spanner_database_admin.Database)
+    assert response.name == "name_value"
+    assert response.state == spanner_database_admin.Database.State.CREATING
+    assert response.version_retention_period == "version_retention_period_value"
+    assert response.default_leader == "default_leader_value"
+    assert response.database_dialect == common.DatabaseDialect.GOOGLE_STANDARD_SQL
+    assert response.enable_drop_protection is True
+    assert response.reconciling is True
+
+
+def test_get_database_non_empty_request_with_auto_populated_field():
+    # This test is a coverage failsafe to make sure that UUID4 fields are
+    # automatically populated, according to AIP-4235, with non-empty requests.
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Populate all string fields in the request which are not UUID4
+    # since we want to check that UUID4 are populated automatically
+    # if they meet the requirements of AIP 4235.
+ request = spanner_database_admin.GetDatabaseRequest( + name="name_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_database), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.get_database(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_database_admin.GetDatabaseRequest( + name="name_value", + ) + + +def test_get_database_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.get_database in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.get_database] = mock_rpc + request = {} + client.get_database(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.get_database(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_get_database_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.get_database + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.get_database + ] = mock_rpc + + request = {} + await client.get_database(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.get_database(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_get_database_async( + transport: str = "grpc_asyncio", + request_type=spanner_database_admin.GetDatabaseRequest, +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. 
+ with mock.patch.object(type(client.transport.get_database), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_database_admin.Database( + name="name_value", + state=spanner_database_admin.Database.State.CREATING, + version_retention_period="version_retention_period_value", + default_leader="default_leader_value", + database_dialect=common.DatabaseDialect.GOOGLE_STANDARD_SQL, + enable_drop_protection=True, + reconciling=True, + ) + ) + response = await client.get_database(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_database_admin.GetDatabaseRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, spanner_database_admin.Database) + assert response.name == "name_value" + assert response.state == spanner_database_admin.Database.State.CREATING + assert response.version_retention_period == "version_retention_period_value" + assert response.default_leader == "default_leader_value" + assert response.database_dialect == common.DatabaseDialect.GOOGLE_STANDARD_SQL + assert response.enable_drop_protection is True + assert response.reconciling is True + + +@pytest.mark.asyncio +async def test_get_database_async_from_dict(): + await test_get_database_async(request_type=dict) + + +def test_get_database_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.GetDatabaseRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_database), "__call__") as call: + call.return_value = spanner_database_admin.Database() + client.get_database(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_get_database_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.GetDatabaseRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_database), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_database_admin.Database() + ) + await client.get_database(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +def test_get_database_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. 
+    with mock.patch.object(type(client.transport.get_database), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = spanner_database_admin.Database()
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        client.get_database(
+            name="name_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].name
+        mock_val = "name_value"
+        assert arg == mock_val
+
+
+def test_get_database_flattened_error():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.get_database(
+            spanner_database_admin.GetDatabaseRequest(),
+            name="name_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_get_database_flattened_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.get_database), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            spanner_database_admin.Database()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.get_database(
+            name="name_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].name
+        mock_val = "name_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_get_database_flattened_error_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.get_database(
+            spanner_database_admin.GetDatabaseRequest(),
+            name="name_value",
+        )
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner_database_admin.UpdateDatabaseRequest,
+        dict,
+    ],
+)
+def test_update_database(request_type, transport: str = "grpc"):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.update_database), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = operations_pb2.Operation(name="operations/spam")
+        response = client.update_database(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        request = spanner_database_admin.UpdateDatabaseRequest()
+        assert args[0] == request
+
+    # Establish that the response is the type that we expect.
+ assert isinstance(response, future.Future) + + +def test_update_database_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner_database_admin.UpdateDatabaseRequest() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.update_database), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.update_database(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_database_admin.UpdateDatabaseRequest() + + +def test_update_database_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.update_database in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.update_database] = mock_rpc + request = {} + client.update_database(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. + # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.update_database(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_update_database_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.update_database + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.update_database + ] = mock_rpc + + request = {} + await client.update_database(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. + # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + await client.update_database(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_update_database_async( + transport: str = "grpc_asyncio", + request_type=spanner_database_admin.UpdateDatabaseRequest, +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.update_database), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + response = await client.update_database(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_database_admin.UpdateDatabaseRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +@pytest.mark.asyncio +async def test_update_database_async_from_dict(): + await test_update_database_async(request_type=dict) + + +def test_update_database_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.UpdateDatabaseRequest() + + request.database.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.update_database), "__call__") as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.update_database(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "database.name=name_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_update_database_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.UpdateDatabaseRequest() + + request.database.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.update_database), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/op") + ) + await client.update_database(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. 
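+        # (Routing parameters travel in the "x-goog-request-params" metadata
+        # entry, which is what the assertion below looks for.)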
+        _, _, kw = call.mock_calls[0]
+        assert (
+            "x-goog-request-params",
+            "database.name=name_value",
+        ) in kw["metadata"]
+
+
+def test_update_database_flattened():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.update_database), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = operations_pb2.Operation(name="operations/op")
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        client.update_database(
+            database=spanner_database_admin.Database(name="name_value"),
+            update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].database
+        mock_val = spanner_database_admin.Database(name="name_value")
+        assert arg == mock_val
+        arg = args[0].update_mask
+        mock_val = field_mask_pb2.FieldMask(paths=["paths_value"])
+        assert arg == mock_val
+
+
+def test_update_database_flattened_error():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.update_database(
+            spanner_database_admin.UpdateDatabaseRequest(),
+            database=spanner_database_admin.Database(name="name_value"),
+            update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
+        )
+
+
+@pytest.mark.asyncio
+async def test_update_database_flattened_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.update_database), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            operations_pb2.Operation(name="operations/spam")
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.update_database(
+            database=spanner_database_admin.Database(name="name_value"),
+            update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].database
+        mock_val = spanner_database_admin.Database(name="name_value")
+        assert arg == mock_val
+        arg = args[0].update_mask
+        mock_val = field_mask_pb2.FieldMask(paths=["paths_value"])
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_update_database_flattened_error_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
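+    # (Accepting both would be ambiguous, so the client is expected to raise
+    # ValueError rather than guess which field values win.)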
+ with pytest.raises(ValueError):
+ await client.update_database(
+ spanner_database_admin.UpdateDatabaseRequest(),
+ database=spanner_database_admin.Database(name="name_value"),
+ update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
+ )
+
+
+ @pytest.mark.parametrize(
+ "request_type",
+ [
+ spanner_database_admin.UpdateDatabaseDdlRequest,
+ dict,
+ ],
+ )
+ def test_update_database_ddl(request_type, transport: str = "grpc"):
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport=transport,
+ )
+
+ # Everything is optional in proto3 as far as the runtime is concerned,
+ # and we are mocking out the actual API, so just send an empty request.
+ request = request_type()
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(
+ type(client.transport.update_database_ddl), "__call__"
+ ) as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = operations_pb2.Operation(name="operations/spam")
+ response = client.update_database_ddl(request)
+
+ # Establish that the underlying gRPC stub method was called.
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ request = spanner_database_admin.UpdateDatabaseDdlRequest()
+ assert args[0] == request
+
+ # Establish that the response is the type that we expect.
+ assert isinstance(response, future.Future)
+
+
+ def test_update_database_ddl_non_empty_request_with_auto_populated_field():
+ # This test is a coverage failsafe to make sure that UUID4 fields are
+ # automatically populated, according to AIP-4235, with non-empty requests.
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport="grpc",
+ )
+
+ # Populate all string fields in the request that are not UUID4,
+ # since we want to check that UUID4 fields are populated automatically
+ # if they meet the requirements of AIP-4235.
+ request = spanner_database_admin.UpdateDatabaseDdlRequest(
+ database="database_value",
+ operation_id="operation_id_value",
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(
+ type(client.transport.update_database_ddl), "__call__"
+ ) as call:
+ call.return_value.name = (
+ "foo" # operation_request.operation in compute client(s) expect a string.
+ )
+ client.update_database_ddl(request=request)
+ call.assert_called()
+ _, args, _ = call.mock_calls[0]
+ assert args[0] == spanner_database_admin.UpdateDatabaseDdlRequest(
+ database="database_value",
+ operation_id="operation_id_value",
+ )
+
+
+ def test_update_database_ddl_use_cached_wrapped_rpc():
+ # Clients should use _prep_wrapped_messages to create cached wrapped rpcs,
+ # instead of constructing them on each call
+ with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn:
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport="grpc",
+ )
+
+ # Should wrap all calls on client creation
+ assert wrapper_fn.call_count > 0
+ wrapper_fn.reset_mock()
+
+ # Ensure method has been cached
+ assert (
+ client._transport.update_database_ddl in client._transport._wrapped_methods
+ )
+
+ # Replace cached wrapped function with mock
+ mock_rpc = mock.Mock()
+ mock_rpc.return_value.name = (
+ "foo" # operation_request.operation in compute client(s) expect a string.
+ ) + client._transport._wrapped_methods[ + client._transport.update_database_ddl + ] = mock_rpc + request = {} + client.update_database_ddl(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. + # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.update_database_ddl(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_update_database_ddl_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.update_database_ddl + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.update_database_ddl + ] = mock_rpc + + request = {} + await client.update_database_ddl(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. + # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + await client.update_database_ddl(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_update_database_ddl_async( + transport: str = "grpc_asyncio", + request_type=spanner_database_admin.UpdateDatabaseDdlRequest, +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.update_database_ddl), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + response = await client.update_database_ddl(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_database_admin.UpdateDatabaseDdlRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. 
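+ # UpdateDatabaseDdl is a long-running operation, so the returned value is
+ # an operation future rather than the raw Operation proto.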
+ assert isinstance(response, future.Future) + + +@pytest.mark.asyncio +async def test_update_database_ddl_async_from_dict(): + await test_update_database_ddl_async(request_type=dict) + + +def test_update_database_ddl_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.UpdateDatabaseDdlRequest() + + request.database = "database_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.update_database_ddl), "__call__" + ) as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.update_database_ddl(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "database=database_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_update_database_ddl_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.UpdateDatabaseDdlRequest() + + request.database = "database_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.update_database_ddl), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/op") + ) + await client.update_database_ddl(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "database=database_value", + ) in kw["metadata"] + + +def test_update_database_ddl_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.update_database_ddl), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/op") + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.update_database_ddl( + database="database_value", + statements=["statements_value"], + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + arg = args[0].database + mock_val = "database_value" + assert arg == mock_val + arg = args[0].statements + mock_val = ["statements_value"] + assert arg == mock_val + + +def test_update_database_ddl_flattened_error(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError):
+ client.update_database_ddl(
+ spanner_database_admin.UpdateDatabaseDdlRequest(),
+ database="database_value",
+ statements=["statements_value"],
+ )
+
+
+ @pytest.mark.asyncio
+ async def test_update_database_ddl_flattened_async():
+ client = DatabaseAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(
+ type(client.transport.update_database_ddl), "__call__"
+ ) as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+ operations_pb2.Operation(name="operations/spam")
+ )
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ response = await client.update_database_ddl(
+ database="database_value",
+ statements=["statements_value"],
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls)
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].database
+ mock_val = "database_value"
+ assert arg == mock_val
+ arg = args[0].statements
+ mock_val = ["statements_value"]
+ assert arg == mock_val
+
+
+ @pytest.mark.asyncio
+ async def test_update_database_ddl_flattened_error_async():
+ client = DatabaseAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ await client.update_database_ddl(
+ spanner_database_admin.UpdateDatabaseDdlRequest(),
+ database="database_value",
+ statements=["statements_value"],
+ )
+
+
+ @pytest.mark.parametrize(
+ "request_type",
+ [
+ spanner_database_admin.DropDatabaseRequest,
+ dict,
+ ],
+ )
+ def test_drop_database(request_type, transport: str = "grpc"):
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport=transport,
+ )
+
+ # Everything is optional in proto3 as far as the runtime is concerned,
+ # and we are mocking out the actual API, so just send an empty request.
+ request = request_type()
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.drop_database), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = None
+ response = client.drop_database(request)
+
+ # Establish that the underlying gRPC stub method was called.
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ request = spanner_database_admin.DropDatabaseRequest()
+ assert args[0] == request
+
+ # Establish that the response is the type that we expect.
+ assert response is None
+
+
+ def test_drop_database_non_empty_request_with_auto_populated_field():
+ # This test is a coverage failsafe to make sure that UUID4 fields are
+ # automatically populated, according to AIP-4235, with non-empty requests.
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport="grpc",
+ )
+
+ # Populate all string fields in the request that are not UUID4,
+ # since we want to check that UUID4 fields are populated automatically
+ # if they meet the requirements of AIP-4235.
+ request = spanner_database_admin.DropDatabaseRequest(
+ database="database_value",
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
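+ # Patching the transport's bound stub method keeps the test fully offline;
+ # no channel traffic occurs.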
+ with mock.patch.object(type(client.transport.drop_database), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.drop_database(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_database_admin.DropDatabaseRequest( + database="database_value", + ) + + +def test_drop_database_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.drop_database in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.drop_database] = mock_rpc + request = {} + client.drop_database(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.drop_database(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_drop_database_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.drop_database + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.drop_database + ] = mock_rpc + + request = {} + await client.drop_database(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.drop_database(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_drop_database_async( + transport: str = "grpc_asyncio", + request_type=spanner_database_admin.DropDatabaseRequest, +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.drop_database), "__call__") as call: + # Designate an appropriate return value for the call. 
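+ # DropDatabase has an Empty response, so the faked async call resolves to None.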
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + response = await client.drop_database(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_database_admin.DropDatabaseRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert response is None + + +@pytest.mark.asyncio +async def test_drop_database_async_from_dict(): + await test_drop_database_async(request_type=dict) + + +def test_drop_database_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.DropDatabaseRequest() + + request.database = "database_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.drop_database), "__call__") as call: + call.return_value = None + client.drop_database(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "database=database_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_drop_database_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.DropDatabaseRequest() + + request.database = "database_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.drop_database), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + await client.drop_database(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "database=database_value", + ) in kw["metadata"] + + +def test_drop_database_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.drop_database), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = None + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.drop_database( + database="database_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + arg = args[0].database + mock_val = "database_value" + assert arg == mock_val + + +def test_drop_database_flattened_error(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError):
+ client.drop_database(
+ spanner_database_admin.DropDatabaseRequest(),
+ database="database_value",
+ )
+
+
+ @pytest.mark.asyncio
+ async def test_drop_database_flattened_async():
+ client = DatabaseAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.drop_database), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ response = await client.drop_database(
+ database="database_value",
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls)
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].database
+ mock_val = "database_value"
+ assert arg == mock_val
+
+
+ @pytest.mark.asyncio
+ async def test_drop_database_flattened_error_async():
+ client = DatabaseAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ await client.drop_database(
+ spanner_database_admin.DropDatabaseRequest(),
+ database="database_value",
+ )
+
+
+ @pytest.mark.parametrize(
+ "request_type",
+ [
+ spanner_database_admin.GetDatabaseDdlRequest,
+ dict,
+ ],
+ )
+ def test_get_database_ddl(request_type, transport: str = "grpc"):
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport=transport,
+ )
+
+ # Everything is optional in proto3 as far as the runtime is concerned,
+ # and we are mocking out the actual API, so just send an empty request.
+ request = request_type()
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.get_database_ddl), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = spanner_database_admin.GetDatabaseDdlResponse(
+ statements=["statements_value"],
+ proto_descriptors=b"proto_descriptors_blob",
+ )
+ response = client.get_database_ddl(request)
+
+ # Establish that the underlying gRPC stub method was called.
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ request = spanner_database_admin.GetDatabaseDdlRequest()
+ assert args[0] == request
+
+ # Establish that the response is the type that we expect.
+ assert isinstance(response, spanner_database_admin.GetDatabaseDdlResponse)
+ assert response.statements == ["statements_value"]
+ assert response.proto_descriptors == b"proto_descriptors_blob"
+
+
+ def test_get_database_ddl_non_empty_request_with_auto_populated_field():
+ # This test is a coverage failsafe to make sure that UUID4 fields are
+ # automatically populated, according to AIP-4235, with non-empty requests.
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport="grpc",
+ )
+
+ # Populate all string fields in the request that are not UUID4,
+ # since we want to check that UUID4 fields are populated automatically
+ # if they meet the requirements of AIP-4235.
+ request = spanner_database_admin.GetDatabaseDdlRequest(
+ database="database_value",
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.get_database_ddl), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.get_database_ddl(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_database_admin.GetDatabaseDdlRequest( + database="database_value", + ) + + +def test_get_database_ddl_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.get_database_ddl in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.get_database_ddl + ] = mock_rpc + request = {} + client.get_database_ddl(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.get_database_ddl(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_get_database_ddl_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.get_database_ddl + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.get_database_ddl + ] = mock_rpc + + request = {} + await client.get_database_ddl(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.get_database_ddl(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_get_database_ddl_async( + transport: str = "grpc_asyncio", + request_type=spanner_database_admin.GetDatabaseDdlRequest, +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_database_ddl), "__call__") as call: + # Designate an appropriate return value for the call. 
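+ # FakeUnaryUnaryCall wraps the canned response in an awaitable,
+ # mimicking the async stub's unary-unary call.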
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_database_admin.GetDatabaseDdlResponse( + statements=["statements_value"], + proto_descriptors=b"proto_descriptors_blob", + ) + ) + response = await client.get_database_ddl(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_database_admin.GetDatabaseDdlRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, spanner_database_admin.GetDatabaseDdlResponse) + assert response.statements == ["statements_value"] + assert response.proto_descriptors == b"proto_descriptors_blob" + + +@pytest.mark.asyncio +async def test_get_database_ddl_async_from_dict(): + await test_get_database_ddl_async(request_type=dict) + + +def test_get_database_ddl_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.GetDatabaseDdlRequest() + + request.database = "database_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_database_ddl), "__call__") as call: + call.return_value = spanner_database_admin.GetDatabaseDdlResponse() + client.get_database_ddl(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "database=database_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_get_database_ddl_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.GetDatabaseDdlRequest() + + request.database = "database_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_database_ddl), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_database_admin.GetDatabaseDdlResponse() + ) + await client.get_database_ddl(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "database=database_value", + ) in kw["metadata"] + + +def test_get_database_ddl_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_database_ddl), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = spanner_database_admin.GetDatabaseDdlResponse() + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.get_database_ddl( + database="database_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. 
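+ # The flattened keyword arguments should have been folded into a single
+ # request message.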
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].database
+ mock_val = "database_value"
+ assert arg == mock_val
+
+
+ def test_get_database_ddl_flattened_error():
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ client.get_database_ddl(
+ spanner_database_admin.GetDatabaseDdlRequest(),
+ database="database_value",
+ )
+
+
+ @pytest.mark.asyncio
+ async def test_get_database_ddl_flattened_async():
+ client = DatabaseAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.get_database_ddl), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+ spanner_database_admin.GetDatabaseDdlResponse()
+ )
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ response = await client.get_database_ddl(
+ database="database_value",
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls)
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].database
+ mock_val = "database_value"
+ assert arg == mock_val
+
+
+ @pytest.mark.asyncio
+ async def test_get_database_ddl_flattened_error_async():
+ client = DatabaseAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ await client.get_database_ddl(
+ spanner_database_admin.GetDatabaseDdlRequest(),
+ database="database_value",
+ )
+
+
+ @pytest.mark.parametrize(
+ "request_type",
+ [
+ iam_policy_pb2.SetIamPolicyRequest,
+ dict,
+ ],
+ )
+ def test_set_iam_policy(request_type, transport: str = "grpc"):
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport=transport,
+ )
+
+ # Everything is optional in proto3 as far as the runtime is concerned,
+ # and we are mocking out the actual API, so just send an empty request.
+ request = request_type()
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = policy_pb2.Policy(
+ version=774,
+ etag=b"etag_blob",
+ )
+ response = client.set_iam_policy(request)
+
+ # Establish that the underlying gRPC stub method was called.
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ request = iam_policy_pb2.SetIamPolicyRequest()
+ assert args[0] == request
+
+ # Establish that the response is the type that we expect.
+ assert isinstance(response, policy_pb2.Policy)
+ assert response.version == 774
+ assert response.etag == b"etag_blob"
+
+
+ def test_set_iam_policy_non_empty_request_with_auto_populated_field():
+ # This test is a coverage failsafe to make sure that UUID4 fields are
+ # automatically populated, according to AIP-4235, with non-empty requests.
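+ # (That is, auto-populated UUID4 fields should still be filled in even
+ # when the caller sets other fields explicitly.)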
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport="grpc",
+ )
+
+ # Populate all string fields in the request that are not UUID4,
+ # since we want to check that UUID4 fields are populated automatically
+ # if they meet the requirements of AIP-4235.
+ request = iam_policy_pb2.SetIamPolicyRequest(
+ resource="resource_value",
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call:
+ call.return_value.name = (
+ "foo" # operation_request.operation in compute client(s) expect a string.
+ )
+ client.set_iam_policy(request=request)
+ call.assert_called()
+ _, args, _ = call.mock_calls[0]
+ assert args[0] == iam_policy_pb2.SetIamPolicyRequest(
+ resource="resource_value",
+ )
+
+
+ def test_set_iam_policy_use_cached_wrapped_rpc():
+ # Clients should use _prep_wrapped_messages to create cached wrapped rpcs,
+ # instead of constructing them on each call
+ with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn:
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport="grpc",
+ )
+
+ # Should wrap all calls on client creation
+ assert wrapper_fn.call_count > 0
+ wrapper_fn.reset_mock()
+
+ # Ensure method has been cached
+ assert client._transport.set_iam_policy in client._transport._wrapped_methods
+
+ # Replace cached wrapped function with mock
+ mock_rpc = mock.Mock()
+ mock_rpc.return_value.name = (
+ "foo" # operation_request.operation in compute client(s) expect a string.
+ )
+ client._transport._wrapped_methods[client._transport.set_iam_policy] = mock_rpc
+ request = {}
+ client.set_iam_policy(request)
+
+ # Establish that the underlying gRPC stub method was called.
+ assert mock_rpc.call_count == 1
+
+ client.set_iam_policy(request)
+
+ # Establish that a new wrapper was not created for this call
+ assert wrapper_fn.call_count == 0
+ assert mock_rpc.call_count == 2
+
+
+ @pytest.mark.asyncio
+ async def test_set_iam_policy_async_use_cached_wrapped_rpc(
+ transport: str = "grpc_asyncio",
+ ):
+ # Clients should use _prep_wrapped_messages to create cached wrapped rpcs,
+ # instead of constructing them on each call
+ with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn:
+ client = DatabaseAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ transport=transport,
+ )
+
+ # Should wrap all calls on client creation
+ assert wrapper_fn.call_count > 0
+ wrapper_fn.reset_mock()
+
+ # Ensure method has been cached
+ assert (
+ client._client._transport.set_iam_policy
+ in client._client._transport._wrapped_methods
+ )
+
+ # Replace cached wrapped function with mock
+ mock_rpc = mock.AsyncMock()
+ mock_rpc.return_value = mock.Mock()
+ client._client._transport._wrapped_methods[
+ client._client._transport.set_iam_policy
+ ] = mock_rpc
+
+ request = {}
+ await client.set_iam_policy(request)
+
+ # Establish that the underlying gRPC stub method was called.
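+ # The call went through the cached wrapper slot, which the test swapped
+ # for mock_rpc above.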
+ assert mock_rpc.call_count == 1 + + await client.set_iam_policy(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_set_iam_policy_async( + transport: str = "grpc_asyncio", request_type=iam_policy_pb2.SetIamPolicyRequest +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + policy_pb2.Policy( + version=774, + etag=b"etag_blob", + ) + ) + response = await client.set_iam_policy(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = iam_policy_pb2.SetIamPolicyRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, policy_pb2.Policy) + assert response.version == 774 + assert response.etag == b"etag_blob" + + +@pytest.mark.asyncio +async def test_set_iam_policy_async_from_dict(): + await test_set_iam_policy_async(request_type=dict) + + +def test_set_iam_policy_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = iam_policy_pb2.SetIamPolicyRequest() + + request.resource = "resource_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call: + call.return_value = policy_pb2.Policy() + client.set_iam_policy(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "resource=resource_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_set_iam_policy_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = iam_policy_pb2.SetIamPolicyRequest() + + request.resource = "resource_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(policy_pb2.Policy()) + await client.set_iam_policy(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. 
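+ # For the IAM methods the routing key is the resource name.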
+ _, _, kw = call.mock_calls[0]
+ assert (
+ "x-goog-request-params",
+ "resource=resource_value",
+ ) in kw["metadata"]
+
+
+ def test_set_iam_policy_from_dict_foreign():
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ )
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = policy_pb2.Policy()
+ response = client.set_iam_policy(
+ request={
+ "resource": "resource_value",
+ "policy": policy_pb2.Policy(version=774),
+ "update_mask": field_mask_pb2.FieldMask(paths=["paths_value"]),
+ }
+ )
+ call.assert_called()
+
+
+ def test_set_iam_policy_flattened():
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = policy_pb2.Policy()
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ client.set_iam_policy(
+ resource="resource_value",
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].resource
+ mock_val = "resource_value"
+ assert arg == mock_val
+
+
+ def test_set_iam_policy_flattened_error():
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ client.set_iam_policy(
+ iam_policy_pb2.SetIamPolicyRequest(),
+ resource="resource_value",
+ )
+
+
+ @pytest.mark.asyncio
+ async def test_set_iam_policy_flattened_async():
+ client = DatabaseAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(policy_pb2.Policy())
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ response = await client.set_iam_policy(
+ resource="resource_value",
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls)
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].resource
+ mock_val = "resource_value"
+ assert arg == mock_val
+
+
+ @pytest.mark.asyncio
+ async def test_set_iam_policy_flattened_error_async():
+ client = DatabaseAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ await client.set_iam_policy(
+ iam_policy_pb2.SetIamPolicyRequest(),
+ resource="resource_value",
+ )
+
+
+ @pytest.mark.parametrize(
+ "request_type",
+ [
+ iam_policy_pb2.GetIamPolicyRequest,
+ dict,
+ ],
+ )
+ def test_get_iam_policy(request_type, transport: str = "grpc"):
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport=transport,
+ )
+
+ # Everything is optional in proto3 as far as the runtime is concerned,
+ # and we are mocking out the actual API, so just send an empty request.
+ request = request_type()
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = policy_pb2.Policy(
+ version=774,
+ etag=b"etag_blob",
+ )
+ response = client.get_iam_policy(request)
+
+ # Establish that the underlying gRPC stub method was called.
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ request = iam_policy_pb2.GetIamPolicyRequest()
+ assert args[0] == request
+
+ # Establish that the response is the type that we expect.
+ assert isinstance(response, policy_pb2.Policy)
+ assert response.version == 774
+ assert response.etag == b"etag_blob"
+
+
+ def test_get_iam_policy_non_empty_request_with_auto_populated_field():
+ # This test is a coverage failsafe to make sure that UUID4 fields are
+ # automatically populated, according to AIP-4235, with non-empty requests.
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport="grpc",
+ )
+
+ # Populate all string fields in the request that are not UUID4,
+ # since we want to check that UUID4 fields are populated automatically
+ # if they meet the requirements of AIP-4235.
+ request = iam_policy_pb2.GetIamPolicyRequest(
+ resource="resource_value",
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call:
+ call.return_value.name = (
+ "foo" # operation_request.operation in compute client(s) expect a string.
+ )
+ client.get_iam_policy(request=request)
+ call.assert_called()
+ _, args, _ = call.mock_calls[0]
+ assert args[0] == iam_policy_pb2.GetIamPolicyRequest(
+ resource="resource_value",
+ )
+
+
+ def test_get_iam_policy_use_cached_wrapped_rpc():
+ # Clients should use _prep_wrapped_messages to create cached wrapped rpcs,
+ # instead of constructing them on each call
+ with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn:
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport="grpc",
+ )
+
+ # Should wrap all calls on client creation
+ assert wrapper_fn.call_count > 0
+ wrapper_fn.reset_mock()
+
+ # Ensure method has been cached
+ assert client._transport.get_iam_policy in client._transport._wrapped_methods
+
+ # Replace cached wrapped function with mock
+ mock_rpc = mock.Mock()
+ mock_rpc.return_value.name = (
+ "foo" # operation_request.operation in compute client(s) expect a string.
+ )
+ client._transport._wrapped_methods[client._transport.get_iam_policy] = mock_rpc
+ request = {}
+ client.get_iam_policy(request)
+
+ # Establish that the underlying gRPC stub method was called.
+ assert mock_rpc.call_count == 1 + + client.get_iam_policy(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_get_iam_policy_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.get_iam_policy + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.get_iam_policy + ] = mock_rpc + + request = {} + await client.get_iam_policy(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.get_iam_policy(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_get_iam_policy_async( + transport: str = "grpc_asyncio", request_type=iam_policy_pb2.GetIamPolicyRequest +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + policy_pb2.Policy( + version=774, + etag=b"etag_blob", + ) + ) + response = await client.get_iam_policy(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = iam_policy_pb2.GetIamPolicyRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, policy_pb2.Policy) + assert response.version == 774 + assert response.etag == b"etag_blob" + + +@pytest.mark.asyncio +async def test_get_iam_policy_async_from_dict(): + await test_get_iam_policy_async(request_type=dict) + + +def test_get_iam_policy_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = iam_policy_pb2.GetIamPolicyRequest() + + request.resource = "resource_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call: + call.return_value = policy_pb2.Policy() + client.get_iam_policy(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "resource=resource_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_get_iam_policy_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = iam_policy_pb2.GetIamPolicyRequest() + + request.resource = "resource_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(policy_pb2.Policy()) + await client.get_iam_policy(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "resource=resource_value", + ) in kw["metadata"] + + +def test_get_iam_policy_from_dict_foreign(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = policy_pb2.Policy() + response = client.get_iam_policy( + request={ + "resource": "resource_value", + "options": options_pb2.GetPolicyOptions(requested_policy_version=2598), + } + ) + call.assert_called() + + +def test_get_iam_policy_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = policy_pb2.Policy() + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.get_iam_policy( + resource="resource_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + arg = args[0].resource + mock_val = "resource_value" + assert arg == mock_val + + +def test_get_iam_policy_flattened_error(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.get_iam_policy( + iam_policy_pb2.GetIamPolicyRequest(), + resource="resource_value", + ) + + +@pytest.mark.asyncio +async def test_get_iam_policy_flattened_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call: + # Designate an appropriate return value for the call. 
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(policy_pb2.Policy())
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ response = await client.get_iam_policy(
+ resource="resource_value",
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls)
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].resource
+ mock_val = "resource_value"
+ assert arg == mock_val
+
+
+ @pytest.mark.asyncio
+ async def test_get_iam_policy_flattened_error_async():
+ client = DatabaseAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ await client.get_iam_policy(
+ iam_policy_pb2.GetIamPolicyRequest(),
+ resource="resource_value",
+ )
+
+
+ @pytest.mark.parametrize(
+ "request_type",
+ [
+ iam_policy_pb2.TestIamPermissionsRequest,
+ dict,
+ ],
+ )
+ def test_test_iam_permissions(request_type, transport: str = "grpc"):
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport=transport,
+ )
+
+ # Everything is optional in proto3 as far as the runtime is concerned,
+ # and we are mocking out the actual API, so just send an empty request.
+ request = request_type()
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(
+ type(client.transport.test_iam_permissions), "__call__"
+ ) as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = iam_policy_pb2.TestIamPermissionsResponse(
+ permissions=["permissions_value"],
+ )
+ response = client.test_iam_permissions(request)
+
+ # Establish that the underlying gRPC stub method was called.
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ request = iam_policy_pb2.TestIamPermissionsRequest()
+ assert args[0] == request
+
+ # Establish that the response is the type that we expect.
+ assert isinstance(response, iam_policy_pb2.TestIamPermissionsResponse)
+ assert response.permissions == ["permissions_value"]
+
+
+ def test_test_iam_permissions_non_empty_request_with_auto_populated_field():
+ # This test is a coverage failsafe to make sure that UUID4 fields are
+ # automatically populated, according to AIP-4235, with non-empty requests.
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport="grpc",
+ )
+
+ # Populate all string fields in the request that are not UUID4,
+ # since we want to check that UUID4 fields are populated automatically
+ # if they meet the requirements of AIP-4235.
+ request = iam_policy_pb2.TestIamPermissionsRequest(
+ resource="resource_value",
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(
+ type(client.transport.test_iam_permissions), "__call__"
+ ) as call:
+ call.return_value.name = (
+ "foo" # operation_request.operation in compute client(s) expect a string.
+ ) + client.test_iam_permissions(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == iam_policy_pb2.TestIamPermissionsRequest( + resource="resource_value", + ) + + +def test_test_iam_permissions_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.test_iam_permissions in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.test_iam_permissions + ] = mock_rpc + request = {} + client.test_iam_permissions(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.test_iam_permissions(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_test_iam_permissions_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.test_iam_permissions + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.test_iam_permissions + ] = mock_rpc + + request = {} + await client.test_iam_permissions(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.test_iam_permissions(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_test_iam_permissions_async( + transport: str = "grpc_asyncio", + request_type=iam_policy_pb2.TestIamPermissionsRequest, +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.test_iam_permissions), "__call__" + ) as call: + # Designate an appropriate return value for the call. 
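+ # As in the other *_async tests, the response message is wrapped in a
+ # FakeUnaryUnaryCall so the mocked stub behaves like a real async RPC.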
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + iam_policy_pb2.TestIamPermissionsResponse( + permissions=["permissions_value"], + ) + ) + response = await client.test_iam_permissions(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = iam_policy_pb2.TestIamPermissionsRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, iam_policy_pb2.TestIamPermissionsResponse) + assert response.permissions == ["permissions_value"] + + +@pytest.mark.asyncio +async def test_test_iam_permissions_async_from_dict(): + await test_test_iam_permissions_async(request_type=dict) + + +def test_test_iam_permissions_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = iam_policy_pb2.TestIamPermissionsRequest() + + request.resource = "resource_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.test_iam_permissions), "__call__" + ) as call: + call.return_value = iam_policy_pb2.TestIamPermissionsResponse() + client.test_iam_permissions(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "resource=resource_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_test_iam_permissions_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = iam_policy_pb2.TestIamPermissionsRequest() + + request.resource = "resource_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.test_iam_permissions), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + iam_policy_pb2.TestIamPermissionsResponse() + ) + await client.test_iam_permissions(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "resource=resource_value", + ) in kw["metadata"] + + +def test_test_iam_permissions_from_dict_foreign(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.test_iam_permissions), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = iam_policy_pb2.TestIamPermissionsResponse() + response = client.test_iam_permissions( + request={ + "resource": "resource_value", + "permissions": ["permissions_value"], + } + ) + call.assert_called() + + +def test_test_iam_permissions_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. 
+ with mock.patch.object(
+ type(client.transport.test_iam_permissions), "__call__"
+ ) as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = iam_policy_pb2.TestIamPermissionsResponse()
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ client.test_iam_permissions(
+ resource="resource_value",
+ permissions=["permissions_value"],
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].resource
+ mock_val = "resource_value"
+ assert arg == mock_val
+ arg = args[0].permissions
+ mock_val = ["permissions_value"]
+ assert arg == mock_val
+
+
+ def test_test_iam_permissions_flattened_error():
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ client.test_iam_permissions(
+ iam_policy_pb2.TestIamPermissionsRequest(),
+ resource="resource_value",
+ permissions=["permissions_value"],
+ )
+
+
+ @pytest.mark.asyncio
+ async def test_test_iam_permissions_flattened_async():
+ client = DatabaseAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(
+ type(client.transport.test_iam_permissions), "__call__"
+ ) as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+ iam_policy_pb2.TestIamPermissionsResponse()
+ )
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ response = await client.test_iam_permissions(
+ resource="resource_value",
+ permissions=["permissions_value"],
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls)
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].resource
+ mock_val = "resource_value"
+ assert arg == mock_val
+ arg = args[0].permissions
+ mock_val = ["permissions_value"]
+ assert arg == mock_val
+
+
+ @pytest.mark.asyncio
+ async def test_test_iam_permissions_flattened_error_async():
+ client = DatabaseAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ await client.test_iam_permissions(
+ iam_policy_pb2.TestIamPermissionsRequest(),
+ resource="resource_value",
+ permissions=["permissions_value"],
+ )
+
+
+ @pytest.mark.parametrize(
+ "request_type",
+ [
+ gsad_backup.CreateBackupRequest,
+ dict,
+ ],
+ )
+ def test_create_backup(request_type, transport: str = "grpc"):
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport=transport,
+ )
+
+ # Everything is optional in proto3 as far as the runtime is concerned,
+ # and we are mocking out the actual API, so just send an empty request.
+ request = request_type()
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.create_backup), "__call__") as call:
+ # Designate an appropriate return value for the call.
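+ # CreateBackup is a long-running operation: the transport layer returns a
+ # raw operations_pb2.Operation, which the client wraps in a future (see
+ # the isinstance(response, future.Future) assertion below).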
+ call.return_value = operations_pb2.Operation(name="operations/spam") + response = client.create_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = gsad_backup.CreateBackupRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +def test_create_backup_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = gsad_backup.CreateBackupRequest( + parent="parent_value", + backup_id="backup_id_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_backup), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.create_backup(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == gsad_backup.CreateBackupRequest( + parent="parent_value", + backup_id="backup_id_value", + ) + + +def test_create_backup_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.create_backup in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.create_backup] = mock_rpc + request = {} + client.create_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. 
+ # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.create_backup(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_create_backup_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.create_backup + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.create_backup + ] = mock_rpc + + request = {} + await client.create_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. + # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + await client.create_backup(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_create_backup_async( + transport: str = "grpc_asyncio", request_type=gsad_backup.CreateBackupRequest +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_backup), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + response = await client.create_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = gsad_backup.CreateBackupRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +@pytest.mark.asyncio +async def test_create_backup_async_from_dict(): + await test_create_backup_async(request_type=dict) + + +def test_create_backup_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = gsad_backup.CreateBackupRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. 
+ with mock.patch.object(type(client.transport.create_backup), "__call__") as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.create_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_create_backup_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = gsad_backup.CreateBackupRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_backup), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/op") + ) + await client.create_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +def test_create_backup_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_backup), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/op") + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.create_backup( + parent="parent_value", + backup=gsad_backup.Backup(database="database_value"), + backup_id="backup_id_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + arg = args[0].parent + mock_val = "parent_value" + assert arg == mock_val + arg = args[0].backup + mock_val = gsad_backup.Backup(database="database_value") + assert arg == mock_val + arg = args[0].backup_id + mock_val = "backup_id_value" + assert arg == mock_val + + +def test_create_backup_flattened_error(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.create_backup( + gsad_backup.CreateBackupRequest(), + parent="parent_value", + backup=gsad_backup.Backup(database="database_value"), + backup_id="backup_id_value", + ) + + +@pytest.mark.asyncio +async def test_create_backup_flattened_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_backup), "__call__") as call: + # Designate an appropriate return value for the call. 
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+ operations_pb2.Operation(name="operations/spam")
+ )
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ response = await client.create_backup(
+ parent="parent_value",
+ backup=gsad_backup.Backup(database="database_value"),
+ backup_id="backup_id_value",
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls)
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].parent
+ mock_val = "parent_value"
+ assert arg == mock_val
+ arg = args[0].backup
+ mock_val = gsad_backup.Backup(database="database_value")
+ assert arg == mock_val
+ arg = args[0].backup_id
+ mock_val = "backup_id_value"
+ assert arg == mock_val
+
+
+ @pytest.mark.asyncio
+ async def test_create_backup_flattened_error_async():
+ client = DatabaseAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ await client.create_backup(
+ gsad_backup.CreateBackupRequest(),
+ parent="parent_value",
+ backup=gsad_backup.Backup(database="database_value"),
+ backup_id="backup_id_value",
+ )
+
+
+ @pytest.mark.parametrize(
+ "request_type",
+ [
+ backup.CopyBackupRequest,
+ dict,
+ ],
+ )
+ def test_copy_backup(request_type, transport: str = "grpc"):
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport=transport,
+ )
+
+ # Everything is optional in proto3 as far as the runtime is concerned,
+ # and we are mocking out the actual API, so just send an empty request.
+ request = request_type()
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.copy_backup), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = operations_pb2.Operation(name="operations/spam")
+ response = client.copy_backup(request)
+
+ # Establish that the underlying gRPC stub method was called.
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ request = backup.CopyBackupRequest()
+ assert args[0] == request
+
+ # Establish that the response is the type that we expect.
+ assert isinstance(response, future.Future)
+
+
+ def test_copy_backup_non_empty_request_with_auto_populated_field():
+ # This test is a coverage failsafe to make sure that UUID4 fields are
+ # automatically populated, according to AIP-4235, with non-empty requests.
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport="grpc",
+ )
+
+ # Populate all string fields in the request which are not UUID4
+ # since we want to check that UUID4 are populated automatically
+ # if they meet the requirements of AIP 4235.
+ request = backup.CopyBackupRequest(
+ parent="parent_value",
+ backup_id="backup_id_value",
+ source_backup="source_backup_value",
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.copy_backup), "__call__") as call:
+ call.return_value.name = (
+ "foo" # operation_request.operation in compute client(s) expect a string.
+ ) + client.copy_backup(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == backup.CopyBackupRequest( + parent="parent_value", + backup_id="backup_id_value", + source_backup="source_backup_value", + ) + + +def test_copy_backup_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.copy_backup in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.copy_backup] = mock_rpc + request = {} + client.copy_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. + # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.copy_backup(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_copy_backup_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.copy_backup + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.copy_backup + ] = mock_rpc + + request = {} + await client.copy_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. + # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + await client.copy_backup(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_copy_backup_async( + transport: str = "grpc_asyncio", request_type=backup.CopyBackupRequest +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. 
+ with mock.patch.object(type(client.transport.copy_backup), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + response = await client.copy_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = backup.CopyBackupRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +@pytest.mark.asyncio +async def test_copy_backup_async_from_dict(): + await test_copy_backup_async(request_type=dict) + + +def test_copy_backup_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = backup.CopyBackupRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.copy_backup), "__call__") as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.copy_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_copy_backup_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = backup.CopyBackupRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.copy_backup), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/op") + ) + await client.copy_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +def test_copy_backup_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.copy_backup), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/op") + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.copy_backup( + parent="parent_value", + backup_id="backup_id_value", + source_backup="source_backup_value", + expire_time=timestamp_pb2.Timestamp(seconds=751), + ) + + # Establish that the underlying call was made with the expected + # request object values. 
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].parent
+ mock_val = "parent_value"
+ assert arg == mock_val
+ arg = args[0].backup_id
+ mock_val = "backup_id_value"
+ assert arg == mock_val
+ arg = args[0].source_backup
+ mock_val = "source_backup_value"
+ assert arg == mock_val
+ assert TimestampRule().to_proto(args[0].expire_time) == timestamp_pb2.Timestamp(
+ seconds=751
+ )
+
+
+ def test_copy_backup_flattened_error():
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ client.copy_backup(
+ backup.CopyBackupRequest(),
+ parent="parent_value",
+ backup_id="backup_id_value",
+ source_backup="source_backup_value",
+ expire_time=timestamp_pb2.Timestamp(seconds=751),
+ )
+
+
+ @pytest.mark.asyncio
+ async def test_copy_backup_flattened_async():
+ client = DatabaseAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.copy_backup), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+ operations_pb2.Operation(name="operations/spam")
+ )
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ response = await client.copy_backup(
+ parent="parent_value",
+ backup_id="backup_id_value",
+ source_backup="source_backup_value",
+ expire_time=timestamp_pb2.Timestamp(seconds=751),
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls)
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].parent
+ mock_val = "parent_value"
+ assert arg == mock_val
+ arg = args[0].backup_id
+ mock_val = "backup_id_value"
+ assert arg == mock_val
+ arg = args[0].source_backup
+ mock_val = "source_backup_value"
+ assert arg == mock_val
+ assert TimestampRule().to_proto(args[0].expire_time) == timestamp_pb2.Timestamp(
+ seconds=751
+ )
+
+
+ @pytest.mark.asyncio
+ async def test_copy_backup_flattened_error_async():
+ client = DatabaseAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ await client.copy_backup(
+ backup.CopyBackupRequest(),
+ parent="parent_value",
+ backup_id="backup_id_value",
+ source_backup="source_backup_value",
+ expire_time=timestamp_pb2.Timestamp(seconds=751),
+ )
+
+
+ @pytest.mark.parametrize(
+ "request_type",
+ [
+ backup.GetBackupRequest,
+ dict,
+ ],
+ )
+ def test_get_backup(request_type, transport: str = "grpc"):
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport=transport,
+ )
+
+ # Everything is optional in proto3 as far as the runtime is concerned,
+ # and we are mocking out the actual API, so just send an empty request.
+ request = request_type()
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.get_backup), "__call__") as call:
+ # Designate an appropriate return value for the call.
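+ # Every scalar field on the Backup is given a non-default value here so
+ # that each per-field assertion below exercises a real comparison.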
+ call.return_value = backup.Backup( + database="database_value", + name="name_value", + size_bytes=1089, + freeable_size_bytes=2006, + exclusive_size_bytes=2168, + state=backup.Backup.State.CREATING, + referencing_databases=["referencing_databases_value"], + database_dialect=common.DatabaseDialect.GOOGLE_STANDARD_SQL, + referencing_backups=["referencing_backups_value"], + backup_schedules=["backup_schedules_value"], + incremental_backup_chain_id="incremental_backup_chain_id_value", + ) + response = client.get_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = backup.GetBackupRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, backup.Backup) + assert response.database == "database_value" + assert response.name == "name_value" + assert response.size_bytes == 1089 + assert response.freeable_size_bytes == 2006 + assert response.exclusive_size_bytes == 2168 + assert response.state == backup.Backup.State.CREATING + assert response.referencing_databases == ["referencing_databases_value"] + assert response.database_dialect == common.DatabaseDialect.GOOGLE_STANDARD_SQL + assert response.referencing_backups == ["referencing_backups_value"] + assert response.backup_schedules == ["backup_schedules_value"] + assert response.incremental_backup_chain_id == "incremental_backup_chain_id_value" + + +def test_get_backup_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = backup.GetBackupRequest( + name="name_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_backup), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.get_backup(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == backup.GetBackupRequest( + name="name_value", + ) + + +def test_get_backup_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.get_backup in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.get_backup] = mock_rpc + request = {} + client.get_backup(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.get_backup(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_get_backup_async_use_cached_wrapped_rpc(transport: str = "grpc_asyncio"): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.get_backup + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.get_backup + ] = mock_rpc + + request = {} + await client.get_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.get_backup(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_get_backup_async( + transport: str = "grpc_asyncio", request_type=backup.GetBackupRequest +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_backup), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + backup.Backup( + database="database_value", + name="name_value", + size_bytes=1089, + freeable_size_bytes=2006, + exclusive_size_bytes=2168, + state=backup.Backup.State.CREATING, + referencing_databases=["referencing_databases_value"], + database_dialect=common.DatabaseDialect.GOOGLE_STANDARD_SQL, + referencing_backups=["referencing_backups_value"], + backup_schedules=["backup_schedules_value"], + incremental_backup_chain_id="incremental_backup_chain_id_value", + ) + ) + response = await client.get_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = backup.GetBackupRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, backup.Backup) + assert response.database == "database_value" + assert response.name == "name_value" + assert response.size_bytes == 1089 + assert response.freeable_size_bytes == 2006 + assert response.exclusive_size_bytes == 2168 + assert response.state == backup.Backup.State.CREATING + assert response.referencing_databases == ["referencing_databases_value"] + assert response.database_dialect == common.DatabaseDialect.GOOGLE_STANDARD_SQL + assert response.referencing_backups == ["referencing_backups_value"] + assert response.backup_schedules == ["backup_schedules_value"] + assert response.incremental_backup_chain_id == "incremental_backup_chain_id_value" + + +@pytest.mark.asyncio +async def test_get_backup_async_from_dict(): + await test_get_backup_async(request_type=dict) + + +def test_get_backup_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = backup.GetBackupRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_backup), "__call__") as call: + call.return_value = backup.Backup() + client.get_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_get_backup_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = backup.GetBackupRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_backup), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(backup.Backup()) + await client.get_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +def test_get_backup_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_backup), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = backup.Backup() + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.get_backup( + name="name_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. 
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].name
+ mock_val = "name_value"
+ assert arg == mock_val
+
+
+ def test_get_backup_flattened_error():
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ client.get_backup(
+ backup.GetBackupRequest(),
+ name="name_value",
+ )
+
+
+ @pytest.mark.asyncio
+ async def test_get_backup_flattened_async():
+ client = DatabaseAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.get_backup), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(backup.Backup())
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ response = await client.get_backup(
+ name="name_value",
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls)
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].name
+ mock_val = "name_value"
+ assert arg == mock_val
+
+
+ @pytest.mark.asyncio
+ async def test_get_backup_flattened_error_async():
+ client = DatabaseAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ await client.get_backup(
+ backup.GetBackupRequest(),
+ name="name_value",
+ )
+
+
+ @pytest.mark.parametrize(
+ "request_type",
+ [
+ gsad_backup.UpdateBackupRequest,
+ dict,
+ ],
+ )
+ def test_update_backup(request_type, transport: str = "grpc"):
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport=transport,
+ )
+
+ # Everything is optional in proto3 as far as the runtime is concerned,
+ # and we are mocking out the actual API, so just send an empty request.
+ request = request_type()
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.update_backup), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = gsad_backup.Backup(
+ database="database_value",
+ name="name_value",
+ size_bytes=1089,
+ freeable_size_bytes=2006,
+ exclusive_size_bytes=2168,
+ state=gsad_backup.Backup.State.CREATING,
+ referencing_databases=["referencing_databases_value"],
+ database_dialect=common.DatabaseDialect.GOOGLE_STANDARD_SQL,
+ referencing_backups=["referencing_backups_value"],
+ backup_schedules=["backup_schedules_value"],
+ incremental_backup_chain_id="incremental_backup_chain_id_value",
+ )
+ response = client.update_backup(request)
+
+ # Establish that the underlying gRPC stub method was called.
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ request = gsad_backup.UpdateBackupRequest()
+ assert args[0] == request
+
+ # Establish that the response is the type that we expect.
+ assert isinstance(response, gsad_backup.Backup) + assert response.database == "database_value" + assert response.name == "name_value" + assert response.size_bytes == 1089 + assert response.freeable_size_bytes == 2006 + assert response.exclusive_size_bytes == 2168 + assert response.state == gsad_backup.Backup.State.CREATING + assert response.referencing_databases == ["referencing_databases_value"] + assert response.database_dialect == common.DatabaseDialect.GOOGLE_STANDARD_SQL + assert response.referencing_backups == ["referencing_backups_value"] + assert response.backup_schedules == ["backup_schedules_value"] + assert response.incremental_backup_chain_id == "incremental_backup_chain_id_value" + + +def test_update_backup_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = gsad_backup.UpdateBackupRequest() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.update_backup), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.update_backup(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == gsad_backup.UpdateBackupRequest() + + +def test_update_backup_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.update_backup in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.update_backup] = mock_rpc + request = {} + client.update_backup(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.update_backup(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_update_backup_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.update_backup + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.update_backup + ] = mock_rpc + + request = {} + await client.update_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.update_backup(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_update_backup_async( + transport: str = "grpc_asyncio", request_type=gsad_backup.UpdateBackupRequest +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.update_backup), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + gsad_backup.Backup( + database="database_value", + name="name_value", + size_bytes=1089, + freeable_size_bytes=2006, + exclusive_size_bytes=2168, + state=gsad_backup.Backup.State.CREATING, + referencing_databases=["referencing_databases_value"], + database_dialect=common.DatabaseDialect.GOOGLE_STANDARD_SQL, + referencing_backups=["referencing_backups_value"], + backup_schedules=["backup_schedules_value"], + incremental_backup_chain_id="incremental_backup_chain_id_value", + ) + ) + response = await client.update_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = gsad_backup.UpdateBackupRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, gsad_backup.Backup) + assert response.database == "database_value" + assert response.name == "name_value" + assert response.size_bytes == 1089 + assert response.freeable_size_bytes == 2006 + assert response.exclusive_size_bytes == 2168 + assert response.state == gsad_backup.Backup.State.CREATING + assert response.referencing_databases == ["referencing_databases_value"] + assert response.database_dialect == common.DatabaseDialect.GOOGLE_STANDARD_SQL + assert response.referencing_backups == ["referencing_backups_value"] + assert response.backup_schedules == ["backup_schedules_value"] + assert response.incremental_backup_chain_id == "incremental_backup_chain_id_value" + + +@pytest.mark.asyncio +async def test_update_backup_async_from_dict(): + await test_update_backup_async(request_type=dict) + + +def test_update_backup_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = gsad_backup.UpdateBackupRequest() + + request.backup.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.update_backup), "__call__") as call: + call.return_value = gsad_backup.Backup() + client.update_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "backup.name=name_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_update_backup_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = gsad_backup.UpdateBackupRequest() + + request.backup.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.update_backup), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(gsad_backup.Backup()) + await client.update_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "backup.name=name_value", + ) in kw["metadata"] + + +def test_update_backup_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.update_backup), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = gsad_backup.Backup() + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.update_backup( + backup=gsad_backup.Backup(database="database_value"), + update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]), + ) + + # Establish that the underlying call was made with the expected + # request object values. 
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].backup
+ mock_val = gsad_backup.Backup(database="database_value")
+ assert arg == mock_val
+ arg = args[0].update_mask
+ mock_val = field_mask_pb2.FieldMask(paths=["paths_value"])
+ assert arg == mock_val
+
+
+ def test_update_backup_flattened_error():
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ client.update_backup(
+ gsad_backup.UpdateBackupRequest(),
+ backup=gsad_backup.Backup(database="database_value"),
+ update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
+ )
+
+
+ @pytest.mark.asyncio
+ async def test_update_backup_flattened_async():
+ client = DatabaseAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.update_backup), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(gsad_backup.Backup())
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ response = await client.update_backup(
+ backup=gsad_backup.Backup(database="database_value"),
+ update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls)
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].backup
+ mock_val = gsad_backup.Backup(database="database_value")
+ assert arg == mock_val
+ arg = args[0].update_mask
+ mock_val = field_mask_pb2.FieldMask(paths=["paths_value"])
+ assert arg == mock_val
+
+
+ @pytest.mark.asyncio
+ async def test_update_backup_flattened_error_async():
+ client = DatabaseAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ await client.update_backup(
+ gsad_backup.UpdateBackupRequest(),
+ backup=gsad_backup.Backup(database="database_value"),
+ update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
+ )
+
+
+ @pytest.mark.parametrize(
+ "request_type",
+ [
+ backup.DeleteBackupRequest,
+ dict,
+ ],
+ )
+ def test_delete_backup(request_type, transport: str = "grpc"):
+ client = DatabaseAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport=transport,
+ )
+
+ # Everything is optional in proto3 as far as the runtime is concerned,
+ # and we are mocking out the actual API, so just send an empty request.
+ request = request_type()
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.delete_backup), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = None
+ response = client.delete_backup(request)
+
+ # Establish that the underlying gRPC stub method was called.
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ request = backup.DeleteBackupRequest()
+ assert args[0] == request
+
+ # Establish that the response is the type that we expect.
+ assert response is None + + +def test_delete_backup_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = backup.DeleteBackupRequest( + name="name_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_backup), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.delete_backup(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == backup.DeleteBackupRequest( + name="name_value", + ) + + +def test_delete_backup_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.delete_backup in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.delete_backup] = mock_rpc + request = {} + client.delete_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.delete_backup(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_delete_backup_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.delete_backup + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.delete_backup + ] = mock_rpc + + request = {} + await client.delete_backup(request) + + # Establish that the underlying gRPC stub method was called. 
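+        # (The first call exercises the freshly cached wrapper; the second call
+        # below must reuse it without wrap_method being invoked again.)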
+ assert mock_rpc.call_count == 1 + + await client.delete_backup(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_delete_backup_async( + transport: str = "grpc_asyncio", request_type=backup.DeleteBackupRequest +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_backup), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + response = await client.delete_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = backup.DeleteBackupRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert response is None + + +@pytest.mark.asyncio +async def test_delete_backup_async_from_dict(): + await test_delete_backup_async(request_type=dict) + + +def test_delete_backup_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = backup.DeleteBackupRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_backup), "__call__") as call: + call.return_value = None + client.delete_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_delete_backup_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = backup.DeleteBackupRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_backup), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + await client.delete_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +def test_delete_backup_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_backup), "__call__") as call: + # Designate an appropriate return value for the call. 
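+        # (DeleteBackup returns google.protobuf.Empty, which surfaces as
+        # ``None`` in Python.)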
+        call.return_value = None
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        client.delete_backup(
+            name="name_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].name
+        mock_val = "name_value"
+        assert arg == mock_val
+
+
+def test_delete_backup_flattened_error():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.delete_backup(
+            backup.DeleteBackupRequest(),
+            name="name_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_delete_backup_flattened_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.delete_backup), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.delete_backup(
+            name="name_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].name
+        mock_val = "name_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_delete_backup_flattened_error_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.delete_backup(
+            backup.DeleteBackupRequest(),
+            name="name_value",
+        )
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        backup.ListBackupsRequest,
+        dict,
+    ],
+)
+def test_list_backups(request_type, transport: str = "grpc"):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.list_backups), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = backup.ListBackupsResponse(
+            next_page_token="next_page_token_value",
+        )
+        response = client.list_backups(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        request = backup.ListBackupsRequest()
+        assert args[0] == request
+
+    # Establish that the response is the type that we expect.
+    assert isinstance(response, pagers.ListBackupsPager)
+    assert response.next_page_token == "next_page_token_value"
+
+
+def test_list_backups_non_empty_request_with_auto_populated_field():
+    # This test is a coverage failsafe to make sure that UUID4 fields are
+    # automatically populated, according to AIP-4235, with non-empty requests.
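+    # (ListBackupsRequest carries no auto-populated UUID4 field, so the request
+    # should be echoed back exactly as populated here.)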
+ client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = backup.ListBackupsRequest( + parent="parent_value", + filter="filter_value", + page_token="page_token_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_backups), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.list_backups(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == backup.ListBackupsRequest( + parent="parent_value", + filter="filter_value", + page_token="page_token_value", + ) + + +def test_list_backups_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.list_backups in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.list_backups] = mock_rpc + request = {} + client.list_backups(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.list_backups(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_backups_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.list_backups + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.list_backups + ] = mock_rpc + + request = {} + await client.list_backups(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + await client.list_backups(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_backups_async( + transport: str = "grpc_asyncio", request_type=backup.ListBackupsRequest +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_backups), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + backup.ListBackupsResponse( + next_page_token="next_page_token_value", + ) + ) + response = await client.list_backups(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = backup.ListBackupsRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListBackupsAsyncPager) + assert response.next_page_token == "next_page_token_value" + + +@pytest.mark.asyncio +async def test_list_backups_async_from_dict(): + await test_list_backups_async(request_type=dict) + + +def test_list_backups_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = backup.ListBackupsRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_backups), "__call__") as call: + call.return_value = backup.ListBackupsResponse() + client.list_backups(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_list_backups_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = backup.ListBackupsRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_backups), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + backup.ListBackupsResponse() + ) + await client.list_backups(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. 
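+    # (``x-goog-request-params`` is the gRPC metadata entry that carries the
+    # URI-bound request fields, which the backend uses for routing.)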
+    _, _, kw = call.mock_calls[0]
+    assert (
+        "x-goog-request-params",
+        "parent=parent_value",
+    ) in kw["metadata"]
+
+
+def test_list_backups_flattened():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.list_backups), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = backup.ListBackupsResponse()
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        client.list_backups(
+            parent="parent_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+
+
+def test_list_backups_flattened_error():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.list_backups(
+            backup.ListBackupsRequest(),
+            parent="parent_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_list_backups_flattened_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.list_backups), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            backup.ListBackupsResponse()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.list_backups(
+            parent="parent_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_list_backups_flattened_error_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.list_backups(
+            backup.ListBackupsRequest(),
+            parent="parent_value",
+        )
+
+
+def test_list_backups_pager(transport_name: str = "grpc"):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport_name,
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.list_backups), "__call__") as call:
+        # Set the response to a series of pages.
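+        # (Four pages of 3, 0, 1 and 2 backups, then a RuntimeError sentinel
+        # that would surface if the pager requested a page beyond the last.)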
+ call.side_effect = ( + backup.ListBackupsResponse( + backups=[ + backup.Backup(), + backup.Backup(), + backup.Backup(), + ], + next_page_token="abc", + ), + backup.ListBackupsResponse( + backups=[], + next_page_token="def", + ), + backup.ListBackupsResponse( + backups=[ + backup.Backup(), + ], + next_page_token="ghi", + ), + backup.ListBackupsResponse( + backups=[ + backup.Backup(), + backup.Backup(), + ], + ), + RuntimeError, + ) + + expected_metadata = () + retry = retries.Retry() + timeout = 5 + expected_metadata = tuple(expected_metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)), + ) + pager = client.list_backups(request={}, retry=retry, timeout=timeout) + + assert pager._metadata == expected_metadata + assert pager._retry == retry + assert pager._timeout == timeout + + results = list(pager) + assert len(results) == 6 + assert all(isinstance(i, backup.Backup) for i in results) + + +def test_list_backups_pages(transport_name: str = "grpc"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport_name, + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_backups), "__call__") as call: + # Set the response to a series of pages. + call.side_effect = ( + backup.ListBackupsResponse( + backups=[ + backup.Backup(), + backup.Backup(), + backup.Backup(), + ], + next_page_token="abc", + ), + backup.ListBackupsResponse( + backups=[], + next_page_token="def", + ), + backup.ListBackupsResponse( + backups=[ + backup.Backup(), + ], + next_page_token="ghi", + ), + backup.ListBackupsResponse( + backups=[ + backup.Backup(), + backup.Backup(), + ], + ), + RuntimeError, + ) + pages = list(client.list_backups(request={}).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.asyncio +async def test_list_backups_async_pager(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_backups), "__call__", new_callable=mock.AsyncMock + ) as call: + # Set the response to a series of pages. + call.side_effect = ( + backup.ListBackupsResponse( + backups=[ + backup.Backup(), + backup.Backup(), + backup.Backup(), + ], + next_page_token="abc", + ), + backup.ListBackupsResponse( + backups=[], + next_page_token="def", + ), + backup.ListBackupsResponse( + backups=[ + backup.Backup(), + ], + next_page_token="ghi", + ), + backup.ListBackupsResponse( + backups=[ + backup.Backup(), + backup.Backup(), + ], + ), + RuntimeError, + ) + async_pager = await client.list_backups( + request={}, + ) + assert async_pager.next_page_token == "abc" + responses = [] + async for response in async_pager: # pragma: no branch + responses.append(response) + + assert len(responses) == 6 + assert all(isinstance(i, backup.Backup) for i in responses) + + +@pytest.mark.asyncio +async def test_list_backups_async_pages(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_backups), "__call__", new_callable=mock.AsyncMock + ) as call: + # Set the response to a series of pages. 
+ call.side_effect = ( + backup.ListBackupsResponse( + backups=[ + backup.Backup(), + backup.Backup(), + backup.Backup(), + ], + next_page_token="abc", + ), + backup.ListBackupsResponse( + backups=[], + next_page_token="def", + ), + backup.ListBackupsResponse( + backups=[ + backup.Backup(), + ], + next_page_token="ghi", + ), + backup.ListBackupsResponse( + backups=[ + backup.Backup(), + backup.Backup(), + ], + ), + RuntimeError, + ) + pages = [] + # Workaround issue in python 3.9 related to code coverage by adding `# pragma: no branch` + # See https://github.com/googleapis/gapic-generator-python/pull/1174#issuecomment-1025132372 + async for page_ in ( # pragma: no branch + await client.list_backups(request={}) + ).pages: + pages.append(page_) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_database_admin.RestoreDatabaseRequest, + dict, + ], +) +def test_restore_database(request_type, transport: str = "grpc"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.restore_database), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/spam") + response = client.restore_database(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner_database_admin.RestoreDatabaseRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +def test_restore_database_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner_database_admin.RestoreDatabaseRequest( + parent="parent_value", + database_id="database_id_value", + backup="backup_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.restore_database), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. 
+ ) + client.restore_database(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_database_admin.RestoreDatabaseRequest( + parent="parent_value", + database_id="database_id_value", + backup="backup_value", + ) + + +def test_restore_database_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.restore_database in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.restore_database + ] = mock_rpc + request = {} + client.restore_database(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. + # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.restore_database(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_restore_database_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.restore_database + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.restore_database + ] = mock_rpc + + request = {} + await client.restore_database(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. + # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + await client.restore_database(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_restore_database_async( + transport: str = "grpc_asyncio", + request_type=spanner_database_admin.RestoreDatabaseRequest, +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. 
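+    # (restore_database is a long-running operation: the stub returns an
+    # Operation proto and the client wraps it in a future, as asserted below.)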
+ request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.restore_database), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + response = await client.restore_database(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_database_admin.RestoreDatabaseRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +@pytest.mark.asyncio +async def test_restore_database_async_from_dict(): + await test_restore_database_async(request_type=dict) + + +def test_restore_database_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.RestoreDatabaseRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.restore_database), "__call__") as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.restore_database(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_restore_database_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.RestoreDatabaseRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.restore_database), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/op") + ) + await client.restore_database(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +def test_restore_database_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.restore_database), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/op") + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. 
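+        # (``backup`` is asserted directly below rather than via arg/mock_val,
+        # presumably because it belongs to the oneof ``source`` on
+        # RestoreDatabaseRequest.)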
+        client.restore_database(
+            parent="parent_value",
+            database_id="database_id_value",
+            backup="backup_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+        arg = args[0].database_id
+        mock_val = "database_id_value"
+        assert arg == mock_val
+        assert args[0].backup == "backup_value"
+
+
+def test_restore_database_flattened_error():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.restore_database(
+            spanner_database_admin.RestoreDatabaseRequest(),
+            parent="parent_value",
+            database_id="database_id_value",
+            backup="backup_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_restore_database_flattened_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.restore_database), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            operations_pb2.Operation(name="operations/spam")
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.restore_database(
+            parent="parent_value",
+            database_id="database_id_value",
+            backup="backup_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+        arg = args[0].database_id
+        mock_val = "database_id_value"
+        assert arg == mock_val
+        assert args[0].backup == "backup_value"
+
+
+@pytest.mark.asyncio
+async def test_restore_database_flattened_error_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.restore_database(
+            spanner_database_admin.RestoreDatabaseRequest(),
+            parent="parent_value",
+            database_id="database_id_value",
+            backup="backup_value",
+        )
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner_database_admin.ListDatabaseOperationsRequest,
+        dict,
+    ],
+)
+def test_list_database_operations(request_type, transport: str = "grpc"):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.list_database_operations), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = spanner_database_admin.ListDatabaseOperationsResponse(
+            next_page_token="next_page_token_value",
+        )
+        response = client.list_database_operations(request)
+
+        # Establish that the underlying gRPC stub method was called.
+ assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner_database_admin.ListDatabaseOperationsRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListDatabaseOperationsPager) + assert response.next_page_token == "next_page_token_value" + + +def test_list_database_operations_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner_database_admin.ListDatabaseOperationsRequest( + parent="parent_value", + filter="filter_value", + page_token="page_token_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_database_operations), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.list_database_operations(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_database_admin.ListDatabaseOperationsRequest( + parent="parent_value", + filter="filter_value", + page_token="page_token_value", + ) + + +def test_list_database_operations_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.list_database_operations + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.list_database_operations + ] = mock_rpc + request = {} + client.list_database_operations(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.list_database_operations(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_database_operations_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.list_database_operations + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.list_database_operations + ] = mock_rpc + + request = {} + await client.list_database_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.list_database_operations(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_database_operations_async( + transport: str = "grpc_asyncio", + request_type=spanner_database_admin.ListDatabaseOperationsRequest, +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_database_operations), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_database_admin.ListDatabaseOperationsResponse( + next_page_token="next_page_token_value", + ) + ) + response = await client.list_database_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_database_admin.ListDatabaseOperationsRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListDatabaseOperationsAsyncPager) + assert response.next_page_token == "next_page_token_value" + + +@pytest.mark.asyncio +async def test_list_database_operations_async_from_dict(): + await test_list_database_operations_async(request_type=dict) + + +def test_list_database_operations_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.ListDatabaseOperationsRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. 
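+    # (Patching ``__call__`` on the multicallable's type intercepts the request
+    # before it would reach a real gRPC channel.)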
+ with mock.patch.object( + type(client.transport.list_database_operations), "__call__" + ) as call: + call.return_value = spanner_database_admin.ListDatabaseOperationsResponse() + client.list_database_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_list_database_operations_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.ListDatabaseOperationsRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_database_operations), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_database_admin.ListDatabaseOperationsResponse() + ) + await client.list_database_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +def test_list_database_operations_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_database_operations), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = spanner_database_admin.ListDatabaseOperationsResponse() + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.list_database_operations( + parent="parent_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + arg = args[0].parent + mock_val = "parent_value" + assert arg == mock_val + + +def test_list_database_operations_flattened_error(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.list_database_operations( + spanner_database_admin.ListDatabaseOperationsRequest(), + parent="parent_value", + ) + + +@pytest.mark.asyncio +async def test_list_database_operations_flattened_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_database_operations), "__call__" + ) as call: + # Designate an appropriate return value for the call. 
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            spanner_database_admin.ListDatabaseOperationsResponse()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.list_database_operations(
+            parent="parent_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_list_database_operations_flattened_error_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.list_database_operations(
+            spanner_database_admin.ListDatabaseOperationsRequest(),
+            parent="parent_value",
+        )
+
+
+def test_list_database_operations_pager(transport_name: str = "grpc"):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport_name,
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.list_database_operations), "__call__"
+    ) as call:
+        # Set the response to a series of pages.
+        call.side_effect = (
+            spanner_database_admin.ListDatabaseOperationsResponse(
+                operations=[
+                    operations_pb2.Operation(),
+                    operations_pb2.Operation(),
+                    operations_pb2.Operation(),
+                ],
+                next_page_token="abc",
+            ),
+            spanner_database_admin.ListDatabaseOperationsResponse(
+                operations=[],
+                next_page_token="def",
+            ),
+            spanner_database_admin.ListDatabaseOperationsResponse(
+                operations=[
+                    operations_pb2.Operation(),
+                ],
+                next_page_token="ghi",
+            ),
+            spanner_database_admin.ListDatabaseOperationsResponse(
+                operations=[
+                    operations_pb2.Operation(),
+                    operations_pb2.Operation(),
+                ],
+            ),
+            RuntimeError,
+        )
+
+        expected_metadata = ()
+        retry = retries.Retry()
+        timeout = 5
+        expected_metadata = tuple(expected_metadata) + (
+            gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)),
+        )
+        pager = client.list_database_operations(
+            request={}, retry=retry, timeout=timeout
+        )
+
+        assert pager._metadata == expected_metadata
+        assert pager._retry == retry
+        assert pager._timeout == timeout
+
+        results = list(pager)
+        assert len(results) == 6
+        assert all(isinstance(i, operations_pb2.Operation) for i in results)
+
+
+def test_list_database_operations_pages(transport_name: str = "grpc"):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport_name,
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.list_database_operations), "__call__"
+    ) as call:
+        # Set the response to a series of pages.
+ call.side_effect = ( + spanner_database_admin.ListDatabaseOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + next_page_token="abc", + ), + spanner_database_admin.ListDatabaseOperationsResponse( + operations=[], + next_page_token="def", + ), + spanner_database_admin.ListDatabaseOperationsResponse( + operations=[ + operations_pb2.Operation(), + ], + next_page_token="ghi", + ), + spanner_database_admin.ListDatabaseOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + ), + RuntimeError, + ) + pages = list(client.list_database_operations(request={}).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.asyncio +async def test_list_database_operations_async_pager(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_database_operations), + "__call__", + new_callable=mock.AsyncMock, + ) as call: + # Set the response to a series of pages. + call.side_effect = ( + spanner_database_admin.ListDatabaseOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + next_page_token="abc", + ), + spanner_database_admin.ListDatabaseOperationsResponse( + operations=[], + next_page_token="def", + ), + spanner_database_admin.ListDatabaseOperationsResponse( + operations=[ + operations_pb2.Operation(), + ], + next_page_token="ghi", + ), + spanner_database_admin.ListDatabaseOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + ), + RuntimeError, + ) + async_pager = await client.list_database_operations( + request={}, + ) + assert async_pager.next_page_token == "abc" + responses = [] + async for response in async_pager: # pragma: no branch + responses.append(response) + + assert len(responses) == 6 + assert all(isinstance(i, operations_pb2.Operation) for i in responses) + + +@pytest.mark.asyncio +async def test_list_database_operations_async_pages(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_database_operations), + "__call__", + new_callable=mock.AsyncMock, + ) as call: + # Set the response to a series of pages. 
+ call.side_effect = ( + spanner_database_admin.ListDatabaseOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + next_page_token="abc", + ), + spanner_database_admin.ListDatabaseOperationsResponse( + operations=[], + next_page_token="def", + ), + spanner_database_admin.ListDatabaseOperationsResponse( + operations=[ + operations_pb2.Operation(), + ], + next_page_token="ghi", + ), + spanner_database_admin.ListDatabaseOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + ), + RuntimeError, + ) + pages = [] + # Workaround issue in python 3.9 related to code coverage by adding `# pragma: no branch` + # See https://github.com/googleapis/gapic-generator-python/pull/1174#issuecomment-1025132372 + async for page_ in ( # pragma: no branch + await client.list_database_operations(request={}) + ).pages: + pages.append(page_) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.parametrize( + "request_type", + [ + backup.ListBackupOperationsRequest, + dict, + ], +) +def test_list_backup_operations(request_type, transport: str = "grpc"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_operations), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = backup.ListBackupOperationsResponse( + next_page_token="next_page_token_value", + ) + response = client.list_backup_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = backup.ListBackupOperationsRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListBackupOperationsPager) + assert response.next_page_token == "next_page_token_value" + + +def test_list_backup_operations_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = backup.ListBackupOperationsRequest( + parent="parent_value", + filter="filter_value", + page_token="page_token_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_operations), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. 
+ ) + client.list_backup_operations(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == backup.ListBackupOperationsRequest( + parent="parent_value", + filter="filter_value", + page_token="page_token_value", + ) + + +def test_list_backup_operations_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.list_backup_operations + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.list_backup_operations + ] = mock_rpc + request = {} + client.list_backup_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.list_backup_operations(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_backup_operations_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.list_backup_operations + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.list_backup_operations + ] = mock_rpc + + request = {} + await client.list_backup_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.list_backup_operations(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_backup_operations_async( + transport: str = "grpc_asyncio", request_type=backup.ListBackupOperationsRequest +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_operations), "__call__" + ) as call: + # Designate an appropriate return value for the call. 
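+        # (FakeUnaryUnaryCall wraps the response so the mocked stub can be
+        # awaited like a real grpc.aio call.)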
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + backup.ListBackupOperationsResponse( + next_page_token="next_page_token_value", + ) + ) + response = await client.list_backup_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = backup.ListBackupOperationsRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListBackupOperationsAsyncPager) + assert response.next_page_token == "next_page_token_value" + + +@pytest.mark.asyncio +async def test_list_backup_operations_async_from_dict(): + await test_list_backup_operations_async(request_type=dict) + + +def test_list_backup_operations_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = backup.ListBackupOperationsRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_operations), "__call__" + ) as call: + call.return_value = backup.ListBackupOperationsResponse() + client.list_backup_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_list_backup_operations_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = backup.ListBackupOperationsRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_operations), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + backup.ListBackupOperationsResponse() + ) + await client.list_backup_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +def test_list_backup_operations_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_operations), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = backup.ListBackupOperationsResponse() + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.list_backup_operations( + parent="parent_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. 
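+        # The flattened keyword is copied onto a request message by the
+        # client, so the checks below inspect the request that reached the
+        # transport (``args[0]``) rather than the keyword arguments.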
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+
+
+def test_list_backup_operations_flattened_error():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.list_backup_operations(
+            backup.ListBackupOperationsRequest(),
+            parent="parent_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_list_backup_operations_flattened_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.list_backup_operations), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            backup.ListBackupOperationsResponse()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.list_backup_operations(
+            parent="parent_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_list_backup_operations_flattened_error_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.list_backup_operations(
+            backup.ListBackupOperationsRequest(),
+            parent="parent_value",
+        )
+
+
+def test_list_backup_operations_pager(transport_name: str = "grpc"):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport_name,
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.list_backup_operations), "__call__"
+    ) as call:
+        # Set the response to a series of pages.
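+        # Four pages are mocked: tokens "abc", "def" (an empty page), "ghi",
+        # and a final page with no token; the trailing RuntimeError is a
+        # sentinel that would surface if a fifth page were requested.
+        # Roughly (an illustrative sketch, not the pager's actual code):
+        #
+        #     response = rpc(request)
+        #     while response.next_page_token:
+        #         request.page_token = response.next_page_token
+        #         response = rpc(request)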
+ call.side_effect = ( + backup.ListBackupOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + next_page_token="abc", + ), + backup.ListBackupOperationsResponse( + operations=[], + next_page_token="def", + ), + backup.ListBackupOperationsResponse( + operations=[ + operations_pb2.Operation(), + ], + next_page_token="ghi", + ), + backup.ListBackupOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + ), + RuntimeError, + ) + + expected_metadata = () + retry = retries.Retry() + timeout = 5 + expected_metadata = tuple(expected_metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)), + ) + pager = client.list_backup_operations(request={}, retry=retry, timeout=timeout) + + assert pager._metadata == expected_metadata + assert pager._retry == retry + assert pager._timeout == timeout + + results = list(pager) + assert len(results) == 6 + assert all(isinstance(i, operations_pb2.Operation) for i in results) + + +def test_list_backup_operations_pages(transport_name: str = "grpc"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport_name, + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_operations), "__call__" + ) as call: + # Set the response to a series of pages. + call.side_effect = ( + backup.ListBackupOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + next_page_token="abc", + ), + backup.ListBackupOperationsResponse( + operations=[], + next_page_token="def", + ), + backup.ListBackupOperationsResponse( + operations=[ + operations_pb2.Operation(), + ], + next_page_token="ghi", + ), + backup.ListBackupOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + ), + RuntimeError, + ) + pages = list(client.list_backup_operations(request={}).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.asyncio +async def test_list_backup_operations_async_pager(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_operations), + "__call__", + new_callable=mock.AsyncMock, + ) as call: + # Set the response to a series of pages. 
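+        # Awaiting the method returns an async pager already holding the
+        # first page (hence ``next_page_token == "abc"`` below); ``async for``
+        # then yields items across pages, fetching each following page lazily.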
+ call.side_effect = ( + backup.ListBackupOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + next_page_token="abc", + ), + backup.ListBackupOperationsResponse( + operations=[], + next_page_token="def", + ), + backup.ListBackupOperationsResponse( + operations=[ + operations_pb2.Operation(), + ], + next_page_token="ghi", + ), + backup.ListBackupOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + ), + RuntimeError, + ) + async_pager = await client.list_backup_operations( + request={}, + ) + assert async_pager.next_page_token == "abc" + responses = [] + async for response in async_pager: # pragma: no branch + responses.append(response) + + assert len(responses) == 6 + assert all(isinstance(i, operations_pb2.Operation) for i in responses) + + +@pytest.mark.asyncio +async def test_list_backup_operations_async_pages(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_operations), + "__call__", + new_callable=mock.AsyncMock, + ) as call: + # Set the response to a series of pages. + call.side_effect = ( + backup.ListBackupOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + next_page_token="abc", + ), + backup.ListBackupOperationsResponse( + operations=[], + next_page_token="def", + ), + backup.ListBackupOperationsResponse( + operations=[ + operations_pb2.Operation(), + ], + next_page_token="ghi", + ), + backup.ListBackupOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + ), + RuntimeError, + ) + pages = [] + # Workaround issue in python 3.9 related to code coverage by adding `# pragma: no branch` + # See https://github.com/googleapis/gapic-generator-python/pull/1174#issuecomment-1025132372 + async for page_ in ( # pragma: no branch + await client.list_backup_operations(request={}) + ).pages: + pages.append(page_) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_database_admin.ListDatabaseRolesRequest, + dict, + ], +) +def test_list_database_roles(request_type, transport: str = "grpc"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_database_roles), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = spanner_database_admin.ListDatabaseRolesResponse( + next_page_token="next_page_token_value", + ) + response = client.list_database_roles(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner_database_admin.ListDatabaseRolesRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. 
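+    # The pager wraps the first response; reading ``next_page_token`` off it
+    # presumably delegates to that underlying response object (the generated
+    # pagers' usual behavior), which is what the assertion below relies on.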
+ assert isinstance(response, pagers.ListDatabaseRolesPager) + assert response.next_page_token == "next_page_token_value" + + +def test_list_database_roles_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner_database_admin.ListDatabaseRolesRequest( + parent="parent_value", + page_token="page_token_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_database_roles), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.list_database_roles(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_database_admin.ListDatabaseRolesRequest( + parent="parent_value", + page_token="page_token_value", + ) + + +def test_list_database_roles_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.list_database_roles in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.list_database_roles + ] = mock_rpc + request = {} + client.list_database_roles(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.list_database_roles(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_database_roles_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.list_database_roles + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.list_database_roles + ] = mock_rpc + + request = {} + await client.list_database_roles(request) + + # Establish that the underlying gRPC stub method was called. 
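+        # _prep_wrapped_messages runs once at client construction and stores
+        # each wrapped RPC in _wrapped_methods, keyed by the bound transport
+        # method; later calls look the wrapper up instead of re-wrapping,
+        # which is why wrapper_fn is never invoked again below.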
+ assert mock_rpc.call_count == 1 + + await client.list_database_roles(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_database_roles_async( + transport: str = "grpc_asyncio", + request_type=spanner_database_admin.ListDatabaseRolesRequest, +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_database_roles), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_database_admin.ListDatabaseRolesResponse( + next_page_token="next_page_token_value", + ) + ) + response = await client.list_database_roles(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_database_admin.ListDatabaseRolesRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListDatabaseRolesAsyncPager) + assert response.next_page_token == "next_page_token_value" + + +@pytest.mark.asyncio +async def test_list_database_roles_async_from_dict(): + await test_list_database_roles_async(request_type=dict) + + +def test_list_database_roles_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.ListDatabaseRolesRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_database_roles), "__call__" + ) as call: + call.return_value = spanner_database_admin.ListDatabaseRolesResponse() + client.list_database_roles(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_list_database_roles_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.ListDatabaseRolesRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_database_roles), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_database_admin.ListDatabaseRolesResponse() + ) + await client.list_database_roles(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. 
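+        # Routing information travels out-of-band: the client serializes the
+        # routed request fields into an ``x-goog-request-params`` metadata
+        # entry (here ``parent=parent_value``) so the backend can route the
+        # call without parsing the request body.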
+        _, _, kw = call.mock_calls[0]
+        assert (
+            "x-goog-request-params",
+            "parent=parent_value",
+        ) in kw["metadata"]
+
+
+def test_list_database_roles_flattened():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.list_database_roles), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = spanner_database_admin.ListDatabaseRolesResponse()
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        client.list_database_roles(
+            parent="parent_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+
+
+def test_list_database_roles_flattened_error():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.list_database_roles(
+            spanner_database_admin.ListDatabaseRolesRequest(),
+            parent="parent_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_list_database_roles_flattened_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.list_database_roles), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            spanner_database_admin.ListDatabaseRolesResponse()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.list_database_roles(
+            parent="parent_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_list_database_roles_flattened_error_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.list_database_roles(
+            spanner_database_admin.ListDatabaseRolesRequest(),
+            parent="parent_value",
+        )
+
+
+def test_list_database_roles_pager(transport_name: str = "grpc"):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport_name,
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.list_database_roles), "__call__"
+    ) as call:
+        # Set the response to a series of pages.
+ call.side_effect = ( + spanner_database_admin.ListDatabaseRolesResponse( + database_roles=[ + spanner_database_admin.DatabaseRole(), + spanner_database_admin.DatabaseRole(), + spanner_database_admin.DatabaseRole(), + ], + next_page_token="abc", + ), + spanner_database_admin.ListDatabaseRolesResponse( + database_roles=[], + next_page_token="def", + ), + spanner_database_admin.ListDatabaseRolesResponse( + database_roles=[ + spanner_database_admin.DatabaseRole(), + ], + next_page_token="ghi", + ), + spanner_database_admin.ListDatabaseRolesResponse( + database_roles=[ + spanner_database_admin.DatabaseRole(), + spanner_database_admin.DatabaseRole(), + ], + ), + RuntimeError, + ) + + expected_metadata = () + retry = retries.Retry() + timeout = 5 + expected_metadata = tuple(expected_metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)), + ) + pager = client.list_database_roles(request={}, retry=retry, timeout=timeout) + + assert pager._metadata == expected_metadata + assert pager._retry == retry + assert pager._timeout == timeout + + results = list(pager) + assert len(results) == 6 + assert all(isinstance(i, spanner_database_admin.DatabaseRole) for i in results) + + +def test_list_database_roles_pages(transport_name: str = "grpc"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport_name, + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_database_roles), "__call__" + ) as call: + # Set the response to a series of pages. + call.side_effect = ( + spanner_database_admin.ListDatabaseRolesResponse( + database_roles=[ + spanner_database_admin.DatabaseRole(), + spanner_database_admin.DatabaseRole(), + spanner_database_admin.DatabaseRole(), + ], + next_page_token="abc", + ), + spanner_database_admin.ListDatabaseRolesResponse( + database_roles=[], + next_page_token="def", + ), + spanner_database_admin.ListDatabaseRolesResponse( + database_roles=[ + spanner_database_admin.DatabaseRole(), + ], + next_page_token="ghi", + ), + spanner_database_admin.ListDatabaseRolesResponse( + database_roles=[ + spanner_database_admin.DatabaseRole(), + spanner_database_admin.DatabaseRole(), + ], + ), + RuntimeError, + ) + pages = list(client.list_database_roles(request={}).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.asyncio +async def test_list_database_roles_async_pager(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_database_roles), + "__call__", + new_callable=mock.AsyncMock, + ) as call: + # Set the response to a series of pages. 
+ call.side_effect = ( + spanner_database_admin.ListDatabaseRolesResponse( + database_roles=[ + spanner_database_admin.DatabaseRole(), + spanner_database_admin.DatabaseRole(), + spanner_database_admin.DatabaseRole(), + ], + next_page_token="abc", + ), + spanner_database_admin.ListDatabaseRolesResponse( + database_roles=[], + next_page_token="def", + ), + spanner_database_admin.ListDatabaseRolesResponse( + database_roles=[ + spanner_database_admin.DatabaseRole(), + ], + next_page_token="ghi", + ), + spanner_database_admin.ListDatabaseRolesResponse( + database_roles=[ + spanner_database_admin.DatabaseRole(), + spanner_database_admin.DatabaseRole(), + ], + ), + RuntimeError, + ) + async_pager = await client.list_database_roles( + request={}, + ) + assert async_pager.next_page_token == "abc" + responses = [] + async for response in async_pager: # pragma: no branch + responses.append(response) + + assert len(responses) == 6 + assert all( + isinstance(i, spanner_database_admin.DatabaseRole) for i in responses + ) + + +@pytest.mark.asyncio +async def test_list_database_roles_async_pages(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_database_roles), + "__call__", + new_callable=mock.AsyncMock, + ) as call: + # Set the response to a series of pages. + call.side_effect = ( + spanner_database_admin.ListDatabaseRolesResponse( + database_roles=[ + spanner_database_admin.DatabaseRole(), + spanner_database_admin.DatabaseRole(), + spanner_database_admin.DatabaseRole(), + ], + next_page_token="abc", + ), + spanner_database_admin.ListDatabaseRolesResponse( + database_roles=[], + next_page_token="def", + ), + spanner_database_admin.ListDatabaseRolesResponse( + database_roles=[ + spanner_database_admin.DatabaseRole(), + ], + next_page_token="ghi", + ), + spanner_database_admin.ListDatabaseRolesResponse( + database_roles=[ + spanner_database_admin.DatabaseRole(), + spanner_database_admin.DatabaseRole(), + ], + ), + RuntimeError, + ) + pages = [] + # Workaround issue in python 3.9 related to code coverage by adding `# pragma: no branch` + # See https://github.com/googleapis/gapic-generator-python/pull/1174#issuecomment-1025132372 + async for page_ in ( # pragma: no branch + await client.list_database_roles(request={}) + ).pages: + pages.append(page_) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_database_admin.AddSplitPointsRequest, + dict, + ], +) +def test_add_split_points(request_type, transport: str = "grpc"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.add_split_points), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = spanner_database_admin.AddSplitPointsResponse() + response = client.add_split_points(request) + + # Establish that the underlying gRPC stub method was called. 
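+    # Patching ``__call__`` on the stub's type lets the mock record the
+    # invocation; ``args[0] == request`` holds because proto-plus messages
+    # compare by field value, so a freshly built empty request matches.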
+ assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner_database_admin.AddSplitPointsRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, spanner_database_admin.AddSplitPointsResponse) + + +def test_add_split_points_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner_database_admin.AddSplitPointsRequest( + database="database_value", + initiator="initiator_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.add_split_points), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.add_split_points(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_database_admin.AddSplitPointsRequest( + database="database_value", + initiator="initiator_value", + ) + + +def test_add_split_points_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.add_split_points in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.add_split_points + ] = mock_rpc + request = {} + client.add_split_points(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.add_split_points(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_add_split_points_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.add_split_points + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.add_split_points + ] = mock_rpc + + request = {} + await client.add_split_points(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.add_split_points(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_add_split_points_async( + transport: str = "grpc_asyncio", + request_type=spanner_database_admin.AddSplitPointsRequest, +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.add_split_points), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_database_admin.AddSplitPointsResponse() + ) + response = await client.add_split_points(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_database_admin.AddSplitPointsRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, spanner_database_admin.AddSplitPointsResponse) + + +@pytest.mark.asyncio +async def test_add_split_points_async_from_dict(): + await test_add_split_points_async(request_type=dict) + + +def test_add_split_points_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_database_admin.AddSplitPointsRequest() + + request.database = "database_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.add_split_points), "__call__") as call: + call.return_value = spanner_database_admin.AddSplitPointsResponse() + client.add_split_points(request) + + # Establish that the underlying gRPC stub method was called. 
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        assert args[0] == request
+
+        # Establish that the field header was sent.
+        _, _, kw = call.mock_calls[0]
+        assert (
+            "x-goog-request-params",
+            "database=database_value",
+        ) in kw["metadata"]
+
+
+@pytest.mark.asyncio
+async def test_add_split_points_field_headers_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Any value that is part of the HTTP/1.1 URI should be sent as
+    # a field header. Set these to a non-empty value.
+    request = spanner_database_admin.AddSplitPointsRequest()
+
+    request.database = "database_value"
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.add_split_points), "__call__") as call:
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            spanner_database_admin.AddSplitPointsResponse()
+        )
+        await client.add_split_points(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        assert args[0] == request
+
+        # Establish that the field header was sent.
+        _, _, kw = call.mock_calls[0]
+        assert (
+            "x-goog-request-params",
+            "database=database_value",
+        ) in kw["metadata"]
+
+
+def test_add_split_points_flattened():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.add_split_points), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = spanner_database_admin.AddSplitPointsResponse()
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        client.add_split_points(
+            database="database_value",
+            split_points=[spanner_database_admin.SplitPoints(table="table_value")],
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].database
+        mock_val = "database_value"
+        assert arg == mock_val
+        arg = args[0].split_points
+        mock_val = [spanner_database_admin.SplitPoints(table="table_value")]
+        assert arg == mock_val
+
+
+def test_add_split_points_flattened_error():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.add_split_points(
+            spanner_database_admin.AddSplitPointsRequest(),
+            database="database_value",
+            split_points=[spanner_database_admin.SplitPoints(table="table_value")],
+        )
+
+
+@pytest.mark.asyncio
+async def test_add_split_points_flattened_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.add_split_points), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            spanner_database_admin.AddSplitPointsResponse()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
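+        # The flattened keywords below, including the repeated
+        # ``split_points`` field, are copied onto an AddSplitPointsRequest by
+        # the client; the assertions that follow compare them by value.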
+ response = await client.add_split_points( + database="database_value", + split_points=[spanner_database_admin.SplitPoints(table="table_value")], + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + arg = args[0].database + mock_val = "database_value" + assert arg == mock_val + arg = args[0].split_points + mock_val = [spanner_database_admin.SplitPoints(table="table_value")] + assert arg == mock_val + + +@pytest.mark.asyncio +async def test_add_split_points_flattened_error_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + await client.add_split_points( + spanner_database_admin.AddSplitPointsRequest(), + database="database_value", + split_points=[spanner_database_admin.SplitPoints(table="table_value")], + ) + + +@pytest.mark.parametrize( + "request_type", + [ + gsad_backup_schedule.CreateBackupScheduleRequest, + dict, + ], +) +def test_create_backup_schedule(request_type, transport: str = "grpc"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.create_backup_schedule), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = gsad_backup_schedule.BackupSchedule( + name="name_value", + ) + response = client.create_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = gsad_backup_schedule.CreateBackupScheduleRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, gsad_backup_schedule.BackupSchedule) + assert response.name == "name_value" + + +def test_create_backup_schedule_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = gsad_backup_schedule.CreateBackupScheduleRequest( + parent="parent_value", + backup_schedule_id="backup_schedule_id_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.create_backup_schedule), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. 
+ ) + client.create_backup_schedule(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == gsad_backup_schedule.CreateBackupScheduleRequest( + parent="parent_value", + backup_schedule_id="backup_schedule_id_value", + ) + + +def test_create_backup_schedule_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.create_backup_schedule + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.create_backup_schedule + ] = mock_rpc + request = {} + client.create_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.create_backup_schedule(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_create_backup_schedule_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.create_backup_schedule + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.create_backup_schedule + ] = mock_rpc + + request = {} + await client.create_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.create_backup_schedule(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_create_backup_schedule_async( + transport: str = "grpc_asyncio", + request_type=gsad_backup_schedule.CreateBackupScheduleRequest, +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.create_backup_schedule), "__call__" + ) as call: + # Designate an appropriate return value for the call. 
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + gsad_backup_schedule.BackupSchedule( + name="name_value", + ) + ) + response = await client.create_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = gsad_backup_schedule.CreateBackupScheduleRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, gsad_backup_schedule.BackupSchedule) + assert response.name == "name_value" + + +@pytest.mark.asyncio +async def test_create_backup_schedule_async_from_dict(): + await test_create_backup_schedule_async(request_type=dict) + + +def test_create_backup_schedule_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = gsad_backup_schedule.CreateBackupScheduleRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.create_backup_schedule), "__call__" + ) as call: + call.return_value = gsad_backup_schedule.BackupSchedule() + client.create_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_create_backup_schedule_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = gsad_backup_schedule.CreateBackupScheduleRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.create_backup_schedule), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + gsad_backup_schedule.BackupSchedule() + ) + await client.create_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +def test_create_backup_schedule_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.create_backup_schedule), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = gsad_backup_schedule.BackupSchedule() + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.create_backup_schedule( + parent="parent_value", + backup_schedule=gsad_backup_schedule.BackupSchedule(name="name_value"), + backup_schedule_id="backup_schedule_id_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. 
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+        arg = args[0].backup_schedule
+        mock_val = gsad_backup_schedule.BackupSchedule(name="name_value")
+        assert arg == mock_val
+        arg = args[0].backup_schedule_id
+        mock_val = "backup_schedule_id_value"
+        assert arg == mock_val
+
+
+def test_create_backup_schedule_flattened_error():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.create_backup_schedule(
+            gsad_backup_schedule.CreateBackupScheduleRequest(),
+            parent="parent_value",
+            backup_schedule=gsad_backup_schedule.BackupSchedule(name="name_value"),
+            backup_schedule_id="backup_schedule_id_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_create_backup_schedule_flattened_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.create_backup_schedule), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            gsad_backup_schedule.BackupSchedule()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.create_backup_schedule(
+            parent="parent_value",
+            backup_schedule=gsad_backup_schedule.BackupSchedule(name="name_value"),
+            backup_schedule_id="backup_schedule_id_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+        arg = args[0].backup_schedule
+        mock_val = gsad_backup_schedule.BackupSchedule(name="name_value")
+        assert arg == mock_val
+        arg = args[0].backup_schedule_id
+        mock_val = "backup_schedule_id_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_create_backup_schedule_flattened_error_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.create_backup_schedule(
+            gsad_backup_schedule.CreateBackupScheduleRequest(),
+            parent="parent_value",
+            backup_schedule=gsad_backup_schedule.BackupSchedule(name="name_value"),
+            backup_schedule_id="backup_schedule_id_value",
+        )
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        backup_schedule.GetBackupScheduleRequest,
+        dict,
+    ],
+)
+def test_get_backup_schedule(request_type, transport: str = "grpc"):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.get_backup_schedule), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+ call.return_value = backup_schedule.BackupSchedule( + name="name_value", + ) + response = client.get_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = backup_schedule.GetBackupScheduleRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, backup_schedule.BackupSchedule) + assert response.name == "name_value" + + +def test_get_backup_schedule_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = backup_schedule.GetBackupScheduleRequest( + name="name_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.get_backup_schedule), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.get_backup_schedule(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == backup_schedule.GetBackupScheduleRequest( + name="name_value", + ) + + +def test_get_backup_schedule_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.get_backup_schedule in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.get_backup_schedule + ] = mock_rpc + request = {} + client.get_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.get_backup_schedule(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_get_backup_schedule_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.get_backup_schedule + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.get_backup_schedule + ] = mock_rpc + + request = {} + await client.get_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.get_backup_schedule(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_get_backup_schedule_async( + transport: str = "grpc_asyncio", + request_type=backup_schedule.GetBackupScheduleRequest, +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.get_backup_schedule), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + backup_schedule.BackupSchedule( + name="name_value", + ) + ) + response = await client.get_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = backup_schedule.GetBackupScheduleRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, backup_schedule.BackupSchedule) + assert response.name == "name_value" + + +@pytest.mark.asyncio +async def test_get_backup_schedule_async_from_dict(): + await test_get_backup_schedule_async(request_type=dict) + + +def test_get_backup_schedule_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = backup_schedule.GetBackupScheduleRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.get_backup_schedule), "__call__" + ) as call: + call.return_value = backup_schedule.BackupSchedule() + client.get_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. 
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        assert args[0] == request
+
+        # Establish that the field header was sent.
+        _, _, kw = call.mock_calls[0]
+        assert (
+            "x-goog-request-params",
+            "name=name_value",
+        ) in kw["metadata"]
+
+
+@pytest.mark.asyncio
+async def test_get_backup_schedule_field_headers_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Any value that is part of the HTTP/1.1 URI should be sent as
+    # a field header. Set these to a non-empty value.
+    request = backup_schedule.GetBackupScheduleRequest()
+
+    request.name = "name_value"
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.get_backup_schedule), "__call__"
+    ) as call:
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            backup_schedule.BackupSchedule()
+        )
+        await client.get_backup_schedule(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        assert args[0] == request
+
+        # Establish that the field header was sent.
+        _, _, kw = call.mock_calls[0]
+        assert (
+            "x-goog-request-params",
+            "name=name_value",
+        ) in kw["metadata"]
+
+
+def test_get_backup_schedule_flattened():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.get_backup_schedule), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = backup_schedule.BackupSchedule()
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        client.get_backup_schedule(
+            name="name_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].name
+        mock_val = "name_value"
+        assert arg == mock_val
+
+
+def test_get_backup_schedule_flattened_error():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.get_backup_schedule(
+            backup_schedule.GetBackupScheduleRequest(),
+            name="name_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_get_backup_schedule_flattened_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.get_backup_schedule), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            backup_schedule.BackupSchedule()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.get_backup_schedule(
+            name="name_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
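+        # (a bare truthiness check is used here: the async mock may record
+        # auxiliary calls alongside the RPC itself, so an exact count of 1
+        # is not guaranteed)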
+ assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + arg = args[0].name + mock_val = "name_value" + assert arg == mock_val + + +@pytest.mark.asyncio +async def test_get_backup_schedule_flattened_error_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + await client.get_backup_schedule( + backup_schedule.GetBackupScheduleRequest(), + name="name_value", + ) + + +@pytest.mark.parametrize( + "request_type", + [ + gsad_backup_schedule.UpdateBackupScheduleRequest, + dict, + ], +) +def test_update_backup_schedule(request_type, transport: str = "grpc"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.update_backup_schedule), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = gsad_backup_schedule.BackupSchedule( + name="name_value", + ) + response = client.update_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = gsad_backup_schedule.UpdateBackupScheduleRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, gsad_backup_schedule.BackupSchedule) + assert response.name == "name_value" + + +def test_update_backup_schedule_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = gsad_backup_schedule.UpdateBackupScheduleRequest() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.update_backup_schedule), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. 
+ ) + client.update_backup_schedule(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == gsad_backup_schedule.UpdateBackupScheduleRequest() + + +def test_update_backup_schedule_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.update_backup_schedule + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.update_backup_schedule + ] = mock_rpc + request = {} + client.update_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.update_backup_schedule(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_update_backup_schedule_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.update_backup_schedule + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.update_backup_schedule + ] = mock_rpc + + request = {} + await client.update_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.update_backup_schedule(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_update_backup_schedule_async( + transport: str = "grpc_asyncio", + request_type=gsad_backup_schedule.UpdateBackupScheduleRequest, +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.update_backup_schedule), "__call__" + ) as call: + # Designate an appropriate return value for the call. 
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + gsad_backup_schedule.BackupSchedule( + name="name_value", + ) + ) + response = await client.update_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = gsad_backup_schedule.UpdateBackupScheduleRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, gsad_backup_schedule.BackupSchedule) + assert response.name == "name_value" + + +@pytest.mark.asyncio +async def test_update_backup_schedule_async_from_dict(): + await test_update_backup_schedule_async(request_type=dict) + + +def test_update_backup_schedule_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = gsad_backup_schedule.UpdateBackupScheduleRequest() + + request.backup_schedule.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.update_backup_schedule), "__call__" + ) as call: + call.return_value = gsad_backup_schedule.BackupSchedule() + client.update_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "backup_schedule.name=name_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_update_backup_schedule_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = gsad_backup_schedule.UpdateBackupScheduleRequest() + + request.backup_schedule.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.update_backup_schedule), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + gsad_backup_schedule.BackupSchedule() + ) + await client.update_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "backup_schedule.name=name_value", + ) in kw["metadata"] + + +def test_update_backup_schedule_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.update_backup_schedule), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = gsad_backup_schedule.BackupSchedule() + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. 
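+        # The flattened keyword arguments are folded into an
+        # UpdateBackupScheduleRequest by the client before the transport
+        # stub is invoked.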
+        client.update_backup_schedule(
+            backup_schedule=gsad_backup_schedule.BackupSchedule(name="name_value"),
+            update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].backup_schedule
+        mock_val = gsad_backup_schedule.BackupSchedule(name="name_value")
+        assert arg == mock_val
+        arg = args[0].update_mask
+        mock_val = field_mask_pb2.FieldMask(paths=["paths_value"])
+        assert arg == mock_val
+
+
+def test_update_backup_schedule_flattened_error():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.update_backup_schedule(
+            gsad_backup_schedule.UpdateBackupScheduleRequest(),
+            backup_schedule=gsad_backup_schedule.BackupSchedule(name="name_value"),
+            update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
+        )
+
+
+@pytest.mark.asyncio
+async def test_update_backup_schedule_flattened_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.update_backup_schedule), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            gsad_backup_schedule.BackupSchedule()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.update_backup_schedule(
+            backup_schedule=gsad_backup_schedule.BackupSchedule(name="name_value"),
+            update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].backup_schedule
+        mock_val = gsad_backup_schedule.BackupSchedule(name="name_value")
+        assert arg == mock_val
+        arg = args[0].update_mask
+        mock_val = field_mask_pb2.FieldMask(paths=["paths_value"])
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_update_backup_schedule_flattened_error_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.update_backup_schedule(
+            gsad_backup_schedule.UpdateBackupScheduleRequest(),
+            backup_schedule=gsad_backup_schedule.BackupSchedule(name="name_value"),
+            update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
+        )
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        backup_schedule.DeleteBackupScheduleRequest,
+        dict,
+    ],
+)
+def test_delete_backup_schedule(request_type, transport: str = "grpc"):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object( + type(client.transport.delete_backup_schedule), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = None + response = client.delete_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = backup_schedule.DeleteBackupScheduleRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert response is None + + +def test_delete_backup_schedule_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = backup_schedule.DeleteBackupScheduleRequest( + name="name_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.delete_backup_schedule), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.delete_backup_schedule(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == backup_schedule.DeleteBackupScheduleRequest( + name="name_value", + ) + + +def test_delete_backup_schedule_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.delete_backup_schedule + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.delete_backup_schedule + ] = mock_rpc + request = {} + client.delete_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.delete_backup_schedule(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_delete_backup_schedule_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.delete_backup_schedule + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.delete_backup_schedule + ] = mock_rpc + + request = {} + await client.delete_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.delete_backup_schedule(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_delete_backup_schedule_async( + transport: str = "grpc_asyncio", + request_type=backup_schedule.DeleteBackupScheduleRequest, +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.delete_backup_schedule), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + response = await client.delete_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = backup_schedule.DeleteBackupScheduleRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert response is None + + +@pytest.mark.asyncio +async def test_delete_backup_schedule_async_from_dict(): + await test_delete_backup_schedule_async(request_type=dict) + + +def test_delete_backup_schedule_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = backup_schedule.DeleteBackupScheduleRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.delete_backup_schedule), "__call__" + ) as call: + call.return_value = None + client.delete_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. 
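+        # ("x-goog-request-params" is derived from the request's `name`
+        # field, letting the backend route the RPC without inspecting the
+        # request body)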
+        _, _, kw = call.mock_calls[0]
+        assert (
+            "x-goog-request-params",
+            "name=name_value",
+        ) in kw["metadata"]
+
+
+@pytest.mark.asyncio
+async def test_delete_backup_schedule_field_headers_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Any value that is part of the HTTP/1.1 URI should be sent as
+    # a field header. Set these to a non-empty value.
+    request = backup_schedule.DeleteBackupScheduleRequest()
+
+    request.name = "name_value"
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.delete_backup_schedule), "__call__"
+    ) as call:
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
+        await client.delete_backup_schedule(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        assert args[0] == request
+
+        # Establish that the field header was sent.
+        _, _, kw = call.mock_calls[0]
+        assert (
+            "x-goog-request-params",
+            "name=name_value",
+        ) in kw["metadata"]
+
+
+def test_delete_backup_schedule_flattened():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.delete_backup_schedule), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = None
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        client.delete_backup_schedule(
+            name="name_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].name
+        mock_val = "name_value"
+        assert arg == mock_val
+
+
+def test_delete_backup_schedule_flattened_error():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.delete_backup_schedule(
+            backup_schedule.DeleteBackupScheduleRequest(),
+            name="name_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_delete_backup_schedule_flattened_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.delete_backup_schedule), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.delete_backup_schedule(
+            name="name_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].name
+        mock_val = "name_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_delete_backup_schedule_flattened_error_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+ with pytest.raises(ValueError): + await client.delete_backup_schedule( + backup_schedule.DeleteBackupScheduleRequest(), + name="name_value", + ) + + +@pytest.mark.parametrize( + "request_type", + [ + backup_schedule.ListBackupSchedulesRequest, + dict, + ], +) +def test_list_backup_schedules(request_type, transport: str = "grpc"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_schedules), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = backup_schedule.ListBackupSchedulesResponse( + next_page_token="next_page_token_value", + ) + response = client.list_backup_schedules(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = backup_schedule.ListBackupSchedulesRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListBackupSchedulesPager) + assert response.next_page_token == "next_page_token_value" + + +def test_list_backup_schedules_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = backup_schedule.ListBackupSchedulesRequest( + parent="parent_value", + page_token="page_token_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_schedules), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.list_backup_schedules(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == backup_schedule.ListBackupSchedulesRequest( + parent="parent_value", + page_token="page_token_value", + ) + + +def test_list_backup_schedules_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.list_backup_schedules + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. 
+ ) + client._transport._wrapped_methods[ + client._transport.list_backup_schedules + ] = mock_rpc + request = {} + client.list_backup_schedules(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.list_backup_schedules(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_backup_schedules_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.list_backup_schedules + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.list_backup_schedules + ] = mock_rpc + + request = {} + await client.list_backup_schedules(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.list_backup_schedules(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_backup_schedules_async( + transport: str = "grpc_asyncio", + request_type=backup_schedule.ListBackupSchedulesRequest, +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_schedules), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + backup_schedule.ListBackupSchedulesResponse( + next_page_token="next_page_token_value", + ) + ) + response = await client.list_backup_schedules(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = backup_schedule.ListBackupSchedulesRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListBackupSchedulesAsyncPager) + assert response.next_page_token == "next_page_token_value" + + +@pytest.mark.asyncio +async def test_list_backup_schedules_async_from_dict(): + await test_list_backup_schedules_async(request_type=dict) + + +def test_list_backup_schedules_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. 
+ request = backup_schedule.ListBackupSchedulesRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_schedules), "__call__" + ) as call: + call.return_value = backup_schedule.ListBackupSchedulesResponse() + client.list_backup_schedules(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_list_backup_schedules_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = backup_schedule.ListBackupSchedulesRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_schedules), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + backup_schedule.ListBackupSchedulesResponse() + ) + await client.list_backup_schedules(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +def test_list_backup_schedules_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_schedules), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = backup_schedule.ListBackupSchedulesResponse() + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.list_backup_schedules( + parent="parent_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + arg = args[0].parent + mock_val = "parent_value" + assert arg == mock_val + + +def test_list_backup_schedules_flattened_error(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.list_backup_schedules( + backup_schedule.ListBackupSchedulesRequest(), + parent="parent_value", + ) + + +@pytest.mark.asyncio +async def test_list_backup_schedules_flattened_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_schedules), "__call__" + ) as call: + # Designate an appropriate return value for the call. 
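+        # (async transports return a call object, so the response is wrapped
+        # in FakeUnaryUnaryCall rather than assigned to the mock directly)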
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            backup_schedule.ListBackupSchedulesResponse()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.list_backup_schedules(
+            parent="parent_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_list_backup_schedules_flattened_error_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.list_backup_schedules(
+            backup_schedule.ListBackupSchedulesRequest(),
+            parent="parent_value",
+        )
+
+
+def test_list_backup_schedules_pager(transport_name: str = "grpc"):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport_name,
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.list_backup_schedules), "__call__"
+    ) as call:
+        # Set the response to a series of pages.
+        call.side_effect = (
+            backup_schedule.ListBackupSchedulesResponse(
+                backup_schedules=[
+                    backup_schedule.BackupSchedule(),
+                    backup_schedule.BackupSchedule(),
+                    backup_schedule.BackupSchedule(),
+                ],
+                next_page_token="abc",
+            ),
+            backup_schedule.ListBackupSchedulesResponse(
+                backup_schedules=[],
+                next_page_token="def",
+            ),
+            backup_schedule.ListBackupSchedulesResponse(
+                backup_schedules=[
+                    backup_schedule.BackupSchedule(),
+                ],
+                next_page_token="ghi",
+            ),
+            backup_schedule.ListBackupSchedulesResponse(
+                backup_schedules=[
+                    backup_schedule.BackupSchedule(),
+                    backup_schedule.BackupSchedule(),
+                ],
+            ),
+            RuntimeError,
+        )
+
+        retry = retries.Retry()
+        timeout = 5
+        expected_metadata = (
+            gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)),
+        )
+        pager = client.list_backup_schedules(request={}, retry=retry, timeout=timeout)
+
+        assert pager._metadata == expected_metadata
+        assert pager._retry == retry
+        assert pager._timeout == timeout
+
+        results = list(pager)
+        assert len(results) == 6
+        assert all(isinstance(i, backup_schedule.BackupSchedule) for i in results)
+
+
+def test_list_backup_schedules_pages(transport_name: str = "grpc"):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport_name,
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.list_backup_schedules), "__call__"
+    ) as call:
+        # Set the response to a series of pages.
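+        # Each response below is one page; the trailing RuntimeError acts as
+        # a sentinel that fails the test if the pager fetches past the final
+        # page, whose empty next_page_token should stop iteration.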
+ call.side_effect = ( + backup_schedule.ListBackupSchedulesResponse( + backup_schedules=[ + backup_schedule.BackupSchedule(), + backup_schedule.BackupSchedule(), + backup_schedule.BackupSchedule(), + ], + next_page_token="abc", + ), + backup_schedule.ListBackupSchedulesResponse( + backup_schedules=[], + next_page_token="def", + ), + backup_schedule.ListBackupSchedulesResponse( + backup_schedules=[ + backup_schedule.BackupSchedule(), + ], + next_page_token="ghi", + ), + backup_schedule.ListBackupSchedulesResponse( + backup_schedules=[ + backup_schedule.BackupSchedule(), + backup_schedule.BackupSchedule(), + ], + ), + RuntimeError, + ) + pages = list(client.list_backup_schedules(request={}).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.asyncio +async def test_list_backup_schedules_async_pager(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_schedules), + "__call__", + new_callable=mock.AsyncMock, + ) as call: + # Set the response to a series of pages. + call.side_effect = ( + backup_schedule.ListBackupSchedulesResponse( + backup_schedules=[ + backup_schedule.BackupSchedule(), + backup_schedule.BackupSchedule(), + backup_schedule.BackupSchedule(), + ], + next_page_token="abc", + ), + backup_schedule.ListBackupSchedulesResponse( + backup_schedules=[], + next_page_token="def", + ), + backup_schedule.ListBackupSchedulesResponse( + backup_schedules=[ + backup_schedule.BackupSchedule(), + ], + next_page_token="ghi", + ), + backup_schedule.ListBackupSchedulesResponse( + backup_schedules=[ + backup_schedule.BackupSchedule(), + backup_schedule.BackupSchedule(), + ], + ), + RuntimeError, + ) + async_pager = await client.list_backup_schedules( + request={}, + ) + assert async_pager.next_page_token == "abc" + responses = [] + async for response in async_pager: # pragma: no branch + responses.append(response) + + assert len(responses) == 6 + assert all(isinstance(i, backup_schedule.BackupSchedule) for i in responses) + + +@pytest.mark.asyncio +async def test_list_backup_schedules_async_pages(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_schedules), + "__call__", + new_callable=mock.AsyncMock, + ) as call: + # Set the response to a series of pages. 
+ call.side_effect = ( + backup_schedule.ListBackupSchedulesResponse( + backup_schedules=[ + backup_schedule.BackupSchedule(), + backup_schedule.BackupSchedule(), + backup_schedule.BackupSchedule(), + ], + next_page_token="abc", + ), + backup_schedule.ListBackupSchedulesResponse( + backup_schedules=[], + next_page_token="def", + ), + backup_schedule.ListBackupSchedulesResponse( + backup_schedules=[ + backup_schedule.BackupSchedule(), + ], + next_page_token="ghi", + ), + backup_schedule.ListBackupSchedulesResponse( + backup_schedules=[ + backup_schedule.BackupSchedule(), + backup_schedule.BackupSchedule(), + ], + ), + RuntimeError, + ) + pages = [] + # Workaround issue in python 3.9 related to code coverage by adding `# pragma: no branch` + # See https://github.com/googleapis/gapic-generator-python/pull/1174#issuecomment-1025132372 + async for page_ in ( # pragma: no branch + await client.list_backup_schedules(request={}) + ).pages: + pages.append(page_) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_database_admin.InternalUpdateGraphOperationRequest, + dict, + ], +) +def test_internal_update_graph_operation(request_type, transport: str = "grpc"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.internal_update_graph_operation), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = ( + spanner_database_admin.InternalUpdateGraphOperationResponse() + ) + response = client.internal_update_graph_operation(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner_database_admin.InternalUpdateGraphOperationRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance( + response, spanner_database_admin.InternalUpdateGraphOperationResponse + ) + + +def test_internal_update_graph_operation_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner_database_admin.InternalUpdateGraphOperationRequest( + database="database_value", + operation_id="operation_id_value", + vm_identity_token="vm_identity_token_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.internal_update_graph_operation), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. 
+ ) + client.internal_update_graph_operation(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_database_admin.InternalUpdateGraphOperationRequest( + database="database_value", + operation_id="operation_id_value", + vm_identity_token="vm_identity_token_value", + ) + + +def test_internal_update_graph_operation_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.internal_update_graph_operation + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.internal_update_graph_operation + ] = mock_rpc + request = {} + client.internal_update_graph_operation(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.internal_update_graph_operation(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_internal_update_graph_operation_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.internal_update_graph_operation + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.internal_update_graph_operation + ] = mock_rpc + + request = {} + await client.internal_update_graph_operation(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.internal_update_graph_operation(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_internal_update_graph_operation_async( + transport: str = "grpc_asyncio", + request_type=spanner_database_admin.InternalUpdateGraphOperationRequest, +): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. 
+    with mock.patch.object(
+        type(client.transport.internal_update_graph_operation), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            spanner_database_admin.InternalUpdateGraphOperationResponse()
+        )
+        response = await client.internal_update_graph_operation(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        request = spanner_database_admin.InternalUpdateGraphOperationRequest()
+        assert args[0] == request
+
+        # Establish that the response is the type that we expect.
+        assert isinstance(
+            response, spanner_database_admin.InternalUpdateGraphOperationResponse
+        )
+
+
+@pytest.mark.asyncio
+async def test_internal_update_graph_operation_async_from_dict():
+    await test_internal_update_graph_operation_async(request_type=dict)
+
+
+def test_internal_update_graph_operation_flattened():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.internal_update_graph_operation), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = (
+            spanner_database_admin.InternalUpdateGraphOperationResponse()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        client.internal_update_graph_operation(
+            database="database_value",
+            operation_id="operation_id_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].database
+        mock_val = "database_value"
+        assert arg == mock_val
+        arg = args[0].operation_id
+        mock_val = "operation_id_value"
+        assert arg == mock_val
+
+
+def test_internal_update_graph_operation_flattened_error():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.internal_update_graph_operation(
+            spanner_database_admin.InternalUpdateGraphOperationRequest(),
+            database="database_value",
+            operation_id="operation_id_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_internal_update_graph_operation_flattened_async():
+    client = DatabaseAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.internal_update_graph_operation), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            spanner_database_admin.InternalUpdateGraphOperationResponse()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.internal_update_graph_operation(
+            database="database_value",
+            operation_id="operation_id_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+ assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + arg = args[0].database + mock_val = "database_value" + assert arg == mock_val + arg = args[0].operation_id + mock_val = "operation_id_value" + assert arg == mock_val + + +@pytest.mark.asyncio +async def test_internal_update_graph_operation_flattened_error_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + await client.internal_update_graph_operation( + spanner_database_admin.InternalUpdateGraphOperationRequest(), + database="database_value", + operation_id="operation_id_value", + ) + + +def test_list_databases_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.list_databases in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.list_databases] = mock_rpc + + request = {} + client.list_databases(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.list_databases(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_list_databases_rest_required_fields( + request_type=spanner_database_admin.ListDatabasesRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["parent"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_databases._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["parent"] = "parent_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_databases._get_unset_required_fields(jsonified_request) + # Check that path parameters and body parameters are not mixing in. + assert not set(unset_fields) - set( + ( + "page_size", + "page_token", + ) + ) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "parent" in jsonified_request + assert jsonified_request["parent"] == "parent_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner_database_admin.ListDatabasesResponse() + # Mock the http request call within the method and fake a response. 
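+    # Session.request is patched below so no real HTTP round trip occurs;
+    # with transcode() also mocked, the request's fields surface in
+    # query_params, where the expected parameters can be asserted.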
+    with mock.patch.object(Session, "request") as req:
+        # We need to mock transcode() because providing default values
+        # for required fields will fail the real version if the http_options
+        # expect actual values for those fields.
+        with mock.patch.object(path_template, "transcode") as transcode:
+            # A uri without fields and an empty body will force all the
+            # request fields to show up in the query_params.
+            pb_request = request_type.pb(request)
+            transcode_result = {
+                "uri": "v1/sample_method",
+                "method": "get",
+                "query_params": pb_request,
+            }
+            transcode.return_value = transcode_result
+
+            response_value = Response()
+            response_value.status_code = 200
+
+            # Convert return value to protobuf type
+            return_value = spanner_database_admin.ListDatabasesResponse.pb(return_value)
+            json_return_value = json_format.MessageToJson(return_value)
+
+            response_value._content = json_return_value.encode("UTF-8")
+            req.return_value = response_value
+            req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+
+            response = client.list_databases(request)
+
+            expected_params = [("$alt", "json;enum-encoding=int")]
+            actual_params = req.call_args.kwargs["params"]
+            assert expected_params == actual_params
+
+
+def test_list_databases_rest_unset_required_fields():
+    transport = transports.DatabaseAdminRestTransport(
+        credentials=ga_credentials.AnonymousCredentials()
+    )
+
+    unset_fields = transport.list_databases._get_unset_required_fields({})
+    assert set(unset_fields) == (
+        set(
+            (
+                "pageSize",
+                "pageToken",
+            )
+        )
+        & set(("parent",))
+    )
+
+
+def test_list_databases_rest_flattened():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="rest",
+    )
+
+    # Mock the http request call within the method and fake a response.
+    with mock.patch.object(type(client.transport._session), "request") as req:
+        # Designate an appropriate value for the returned response.
+        return_value = spanner_database_admin.ListDatabasesResponse()
+
+        # get arguments that satisfy an http rule for this method
+        sample_request = {"parent": "projects/sample1/instances/sample2"}
+
+        # get truthy value for each flattened field
+        mock_args = dict(
+            parent="parent_value",
+        )
+        mock_args.update(sample_request)
+
+        # Wrap the value into a proper Response obj
+        response_value = Response()
+        response_value.status_code = 200
+        # Convert return value to protobuf type
+        return_value = spanner_database_admin.ListDatabasesResponse.pb(return_value)
+        json_return_value = json_format.MessageToJson(return_value)
+        response_value._content = json_return_value.encode("UTF-8")
+        req.return_value = response_value
+        req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+
+        client.list_databases(**mock_args)
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(req.mock_calls) == 1
+        _, args, _ = req.mock_calls[0]
+        assert path_template.validate(
+            "%s/v1/{parent=projects/*/instances/*}/databases" % client.transport._host,
+            args[1],
+        )
+
+
+def test_list_databases_rest_flattened_error(transport: str = "rest"):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.list_databases(
+            spanner_database_admin.ListDatabasesRequest(),
+            parent="parent_value",
+        )
+
+
+def test_list_databases_rest_pager(transport: str = "rest"):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Mock the http request call within the method and fake a response.
+    with mock.patch.object(Session, "request") as req:
+        # Set the response as a series of pages
+        response = (
+            spanner_database_admin.ListDatabasesResponse(
+                databases=[
+                    spanner_database_admin.Database(),
+                    spanner_database_admin.Database(),
+                    spanner_database_admin.Database(),
+                ],
+                next_page_token="abc",
+            ),
+            spanner_database_admin.ListDatabasesResponse(
+                databases=[],
+                next_page_token="def",
+            ),
+            spanner_database_admin.ListDatabasesResponse(
+                databases=[
+                    spanner_database_admin.Database(),
+                ],
+                next_page_token="ghi",
+            ),
+            spanner_database_admin.ListDatabasesResponse(
+                databases=[
+                    spanner_database_admin.Database(),
+                    spanner_database_admin.Database(),
+                ],
+            ),
+        )
+        # Two responses for two calls
+        response = response + response
+
+        # Wrap the values into proper Response objs
+        response = tuple(
+            spanner_database_admin.ListDatabasesResponse.to_json(x) for x in response
+        )
+        return_values = tuple(Response() for i in response)
+        for return_val, response_val in zip(return_values, response):
+            return_val._content = response_val.encode("UTF-8")
+            return_val.status_code = 200
+        req.side_effect = return_values
+
+        sample_request = {"parent": "projects/sample1/instances/sample2"}
+
+        pager = client.list_databases(request=sample_request)
+
+        results = list(pager)
+        assert len(results) == 6
+        assert all(isinstance(i, spanner_database_admin.Database) for i in results)
+
+        pages = list(client.list_databases(request=sample_request).pages)
+        for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
+            assert page_.raw_page.next_page_token == token
+
+
+def test_create_database_rest_use_cached_wrapped_rpc():
+    # Clients should use _prep_wrapped_messages to create cached wrapped rpcs,
+    # instead of constructing them on each call
+    with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn:
+        client = DatabaseAdminClient(
+            credentials=ga_credentials.AnonymousCredentials(),
+            transport="rest",
+        )
+
+        # Should wrap all calls on client creation
+        assert wrapper_fn.call_count > 0
+        wrapper_fn.reset_mock()
+
+        # Ensure method has been cached
+        assert client._transport.create_database in client._transport._wrapped_methods
+
+        # Replace cached wrapped function with mock
+        mock_rpc = mock.Mock()
+        mock_rpc.return_value.name = (
+            "foo"  # operation_request.operation in compute client(s) expect a string.
+        )
+        client._transport._wrapped_methods[client._transport.create_database] = mock_rpc
+
+        request = {}
+        client.create_database(request)
+
+        # Establish that the underlying gRPC stub method was called.
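+        # (The transport under test is REST; the generated comment keeps the
+        # gRPC wording, but the wrapped-method caching works the same way.)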
+ assert mock_rpc.call_count == 1 + + # Operation methods build a cached wrapper on first rpc call + # subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.create_database(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_create_database_rest_required_fields( + request_type=spanner_database_admin.CreateDatabaseRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["parent"] = "" + request_init["create_statement"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).create_database._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["parent"] = "parent_value" + jsonified_request["createStatement"] = "create_statement_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).create_database._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "parent" in jsonified_request + assert jsonified_request["parent"] == "parent_value" + assert "createStatement" in jsonified_request + assert jsonified_request["createStatement"] == "create_statement_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
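+            # The synthetic transcode result below emulates that routing: a
+            # fixed URI, the request echoed into the query parameters, and,
+            # since CreateDatabase is an HTTP POST, the request as the body too.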
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.create_database(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_create_database_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.create_database._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "parent", + "createStatement", + ) + ) + ) + + +def test_create_database_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + + # get arguments that satisfy an http rule for this method + sample_request = {"parent": "projects/sample1/instances/sample2"} + + # get truthy value for each flattened field + mock_args = dict( + parent="parent_value", + create_statement="create_statement_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.create_database(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{parent=projects/*/instances/*}/databases" % client.transport._host, + args[1], + ) + + +def test_create_database_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.create_database( + spanner_database_admin.CreateDatabaseRequest(), + parent="parent_value", + create_statement="create_statement_value", + ) + + +def test_get_database_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.get_database in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.get_database] = mock_rpc + + request = {} + client.get_database(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.get_database(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_get_database_rest_required_fields( + request_type=spanner_database_admin.GetDatabaseRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["name"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).get_database._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["name"] = "name_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).get_database._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "name" in jsonified_request + assert jsonified_request["name"] == "name_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner_database_admin.Database() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "get", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_database_admin.Database.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.get_database(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_get_database_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.get_database._get_unset_required_fields({}) + assert set(unset_fields) == (set(()) & set(("name",))) + + +def test_get_database_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_database_admin.Database() + + # get arguments that satisfy an http rule for this method + sample_request = { + "name": "projects/sample1/instances/sample2/databases/sample3" + } + + # get truthy value for each flattened field + mock_args = dict( + name="name_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = spanner_database_admin.Database.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.get_database(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{name=projects/*/instances/*/databases/*}" % client.transport._host, + args[1], + ) + + +def test_get_database_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.get_database( + spanner_database_admin.GetDatabaseRequest(), + name="name_value", + ) + + +def test_update_database_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.update_database in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.update_database] = mock_rpc + + request = {} + client.update_database(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods build a cached wrapper on first rpc call + # subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.update_database(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_update_database_rest_required_fields( + request_type=spanner_database_admin.UpdateDatabaseRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).update_database._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).update_database._get_unset_required_fields(jsonified_request) + # Check that path parameters and body parameters are not mixing in. + assert not set(unset_fields) - set(("update_mask",)) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
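+            # UpdateDatabase maps to HTTP PATCH; in this faked transcode result
+            # the whole request doubles as the body, while update_mask is the
+            # one field allowed into the query parameters (asserted above).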
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "patch", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.update_database(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_update_database_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.update_database._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(("updateMask",)) + & set( + ( + "database", + "updateMask", + ) + ) + ) + + +def test_update_database_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + + # get arguments that satisfy an http rule for this method + sample_request = { + "database": {"name": "projects/sample1/instances/sample2/databases/sample3"} + } + + # get truthy value for each flattened field + mock_args = dict( + database=spanner_database_admin.Database(name="name_value"), + update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]), + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.update_database(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{database.name=projects/*/instances/*/databases/*}" + % client.transport._host, + args[1], + ) + + +def test_update_database_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.update_database( + spanner_database_admin.UpdateDatabaseRequest(), + database=spanner_database_admin.Database(name="name_value"), + update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]), + ) + + +def test_update_database_ddl_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.update_database_ddl in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.update_database_ddl + ] = mock_rpc + + request = {} + client.update_database_ddl(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods build a cached wrapper on first rpc call + # subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.update_database_ddl(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_update_database_ddl_rest_required_fields( + request_type=spanner_database_admin.UpdateDatabaseDdlRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["database"] = "" + request_init["statements"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).update_database_ddl._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["database"] = "database_value" + jsonified_request["statements"] = "statements_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).update_database_ddl._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "database" in jsonified_request + assert jsonified_request["database"] == "database_value" + assert "statements" in jsonified_request + assert jsonified_request["statements"] == "statements_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. 
+ with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. + pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "patch", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.update_database_ddl(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_update_database_ddl_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.update_database_ddl._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "database", + "statements", + ) + ) + ) + + +def test_update_database_ddl_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + + # get arguments that satisfy an http rule for this method + sample_request = { + "database": "projects/sample1/instances/sample2/databases/sample3" + } + + # get truthy value for each flattened field + mock_args = dict( + database="database_value", + statements=["statements_value"], + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.update_database_ddl(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{database=projects/*/instances/*/databases/*}/ddl" + % client.transport._host, + args[1], + ) + + +def test_update_database_ddl_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.update_database_ddl( + spanner_database_admin.UpdateDatabaseDdlRequest(), + database="database_value", + statements=["statements_value"], + ) + + +def test_drop_database_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.drop_database in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.drop_database] = mock_rpc + + request = {} + client.drop_database(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.drop_database(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_drop_database_rest_required_fields( + request_type=spanner_database_admin.DropDatabaseRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["database"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).drop_database._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["database"] = "database_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).drop_database._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "database" in jsonified_request + assert jsonified_request["database"] == "database_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = None + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
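+            # DropDatabase returns google.protobuf.Empty, so the faked HTTP
+            # response body below is simply an empty string.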
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "delete", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = "" + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.drop_database(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_drop_database_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.drop_database._get_unset_required_fields({}) + assert set(unset_fields) == (set(()) & set(("database",))) + + +def test_drop_database_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = None + + # get arguments that satisfy an http rule for this method + sample_request = { + "database": "projects/sample1/instances/sample2/databases/sample3" + } + + # get truthy value for each flattened field + mock_args = dict( + database="database_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = "" + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.drop_database(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{database=projects/*/instances/*/databases/*}" + % client.transport._host, + args[1], + ) + + +def test_drop_database_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.drop_database( + spanner_database_admin.DropDatabaseRequest(), + database="database_value", + ) + + +def test_get_database_ddl_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.get_database_ddl in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. 
+ ) + client._transport._wrapped_methods[ + client._transport.get_database_ddl + ] = mock_rpc + + request = {} + client.get_database_ddl(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.get_database_ddl(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_get_database_ddl_rest_required_fields( + request_type=spanner_database_admin.GetDatabaseDdlRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["database"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).get_database_ddl._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["database"] = "database_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).get_database_ddl._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "database" in jsonified_request + assert jsonified_request["database"] == "database_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner_database_admin.GetDatabaseDdlResponse() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
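+            # GetDatabaseDdl maps to an HTTP GET, so the transcode result
+            # carries no request body.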
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "get", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_database_admin.GetDatabaseDdlResponse.pb( + return_value + ) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.get_database_ddl(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_get_database_ddl_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.get_database_ddl._get_unset_required_fields({}) + assert set(unset_fields) == (set(()) & set(("database",))) + + +def test_get_database_ddl_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_database_admin.GetDatabaseDdlResponse() + + # get arguments that satisfy an http rule for this method + sample_request = { + "database": "projects/sample1/instances/sample2/databases/sample3" + } + + # get truthy value for each flattened field + mock_args = dict( + database="database_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = spanner_database_admin.GetDatabaseDdlResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.get_database_ddl(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{database=projects/*/instances/*/databases/*}/ddl" + % client.transport._host, + args[1], + ) + + +def test_get_database_ddl_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.get_database_ddl( + spanner_database_admin.GetDatabaseDdlRequest(), + database="database_value", + ) + + +def test_set_iam_policy_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.set_iam_policy in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.set_iam_policy] = mock_rpc + + request = {} + client.set_iam_policy(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.set_iam_policy(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_set_iam_policy_rest_required_fields( + request_type=iam_policy_pb2.SetIamPolicyRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["resource"] = "" + request = request_type(**request_init) + pb_request = request + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).set_iam_policy._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["resource"] = "resource_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).set_iam_policy._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "resource" in jsonified_request + assert jsonified_request["resource"] == "resource_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = policy_pb2.Policy() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
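+            # IAM requests are plain protobuf messages rather than proto-plus
+            # wrappers, so the request is used as-is with no request_type.pb()
+            # conversion.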
+ pb_request = request + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.set_iam_policy(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_set_iam_policy_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.set_iam_policy._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "resource", + "policy", + ) + ) + ) + + +def test_set_iam_policy_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = policy_pb2.Policy() + + # get arguments that satisfy an http rule for this method + sample_request = { + "resource": "projects/sample1/instances/sample2/databases/sample3" + } + + # get truthy value for each flattened field + mock_args = dict( + resource="resource_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.set_iam_policy(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{resource=projects/*/instances/*/databases/*}:setIamPolicy" + % client.transport._host, + args[1], + ) + + +def test_set_iam_policy_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.set_iam_policy( + iam_policy_pb2.SetIamPolicyRequest(), + resource="resource_value", + ) + + +def test_get_iam_policy_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.get_iam_policy in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.get_iam_policy] = mock_rpc + + request = {} + client.get_iam_policy(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.get_iam_policy(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_get_iam_policy_rest_required_fields( + request_type=iam_policy_pb2.GetIamPolicyRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["resource"] = "" + request = request_type(**request_init) + pb_request = request + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).get_iam_policy._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["resource"] = "resource_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).get_iam_policy._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "resource" in jsonified_request + assert jsonified_request["resource"] == "resource_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = policy_pb2.Policy() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.get_iam_policy(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_get_iam_policy_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.get_iam_policy._get_unset_required_fields({}) + assert set(unset_fields) == (set(()) & set(("resource",))) + + +def test_get_iam_policy_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = policy_pb2.Policy() + + # get arguments that satisfy an http rule for this method + sample_request = { + "resource": "projects/sample1/instances/sample2/databases/sample3" + } + + # get truthy value for each flattened field + mock_args = dict( + resource="resource_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.get_iam_policy(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{resource=projects/*/instances/*/databases/*}:getIamPolicy" + % client.transport._host, + args[1], + ) + + +def test_get_iam_policy_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.get_iam_policy( + iam_policy_pb2.GetIamPolicyRequest(), + resource="resource_value", + ) + + +def test_test_iam_permissions_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.test_iam_permissions in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.test_iam_permissions + ] = mock_rpc + + request = {} + client.test_iam_permissions(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.test_iam_permissions(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_test_iam_permissions_rest_required_fields( + request_type=iam_policy_pb2.TestIamPermissionsRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["resource"] = "" + request_init["permissions"] = "" + request = request_type(**request_init) + pb_request = request + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).test_iam_permissions._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["resource"] = "resource_value" + jsonified_request["permissions"] = "permissions_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).test_iam_permissions._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "resource" in jsonified_request + assert jsonified_request["resource"] == "resource_value" + assert "permissions" in jsonified_request + assert jsonified_request["permissions"] == "permissions_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = iam_policy_pb2.TestIamPermissionsResponse() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.test_iam_permissions(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_test_iam_permissions_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.test_iam_permissions._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "resource", + "permissions", + ) + ) + ) + + +def test_test_iam_permissions_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = iam_policy_pb2.TestIamPermissionsResponse() + + # get arguments that satisfy an http rule for this method + sample_request = { + "resource": "projects/sample1/instances/sample2/databases/sample3" + } + + # get truthy value for each flattened field + mock_args = dict( + resource="resource_value", + permissions=["permissions_value"], + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.test_iam_permissions(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{resource=projects/*/instances/*/databases/*}:testIamPermissions" + % client.transport._host, + args[1], + ) + + +def test_test_iam_permissions_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.test_iam_permissions( + iam_policy_pb2.TestIamPermissionsRequest(), + resource="resource_value", + permissions=["permissions_value"], + ) + + +def test_create_backup_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.create_backup in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.create_backup] = mock_rpc + + request = {} + client.create_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods build a cached wrapper on first rpc call + # subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.create_backup(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_create_backup_rest_required_fields( + request_type=gsad_backup.CreateBackupRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["parent"] = "" + request_init["backup_id"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + assert "backupId" not in jsonified_request + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).create_backup._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + assert "backupId" in jsonified_request + assert jsonified_request["backupId"] == request_init["backup_id"] + + jsonified_request["parent"] = "parent_value" + jsonified_request["backupId"] = "backup_id_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).create_backup._get_unset_required_fields(jsonified_request) + # Check that path parameters and body parameters are not mixing in. + assert not set(unset_fields) - set( + ( + "backup_id", + "encryption_config", + ) + ) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "parent" in jsonified_request + assert jsonified_request["parent"] == "parent_value" + assert "backupId" in jsonified_request + assert jsonified_request["backupId"] == "backup_id_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + # Mock the http request call within the method and fake a response. 
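+    # Patching Session.request intercepts the call at the HTTP layer; the
+    # canned 200 response below carries a JSON-serialized long-running
+    # Operation. Note that backupId is a query parameter in the REST mapping,
+    # which is why it is asserted in expected_params further down.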
+ with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. + pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.create_backup(request) + + expected_params = [ + ( + "backupId", + "", + ), + ("$alt", "json;enum-encoding=int"), + ] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_create_backup_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.create_backup._get_unset_required_fields({}) + assert set(unset_fields) == ( + set( + ( + "backupId", + "encryptionConfig", + ) + ) + & set( + ( + "parent", + "backupId", + "backup", + ) + ) + ) + + +def test_create_backup_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + + # get arguments that satisfy an http rule for this method + sample_request = {"parent": "projects/sample1/instances/sample2"} + + # get truthy value for each flattened field + mock_args = dict( + parent="parent_value", + backup=gsad_backup.Backup(database="database_value"), + backup_id="backup_id_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.create_backup(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{parent=projects/*/instances/*}/backups" % client.transport._host, + args[1], + ) + + +def test_create_backup_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.create_backup( + gsad_backup.CreateBackupRequest(), + parent="parent_value", + backup=gsad_backup.Backup(database="database_value"), + backup_id="backup_id_value", + ) + + +def test_copy_backup_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.copy_backup in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.copy_backup] = mock_rpc + + request = {} + client.copy_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods build a cached wrapper on first rpc call + # subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.copy_backup(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_copy_backup_rest_required_fields(request_type=backup.CopyBackupRequest): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["parent"] = "" + request_init["backup_id"] = "" + request_init["source_backup"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).copy_backup._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["parent"] = "parent_value" + jsonified_request["backupId"] = "backup_id_value" + jsonified_request["sourceBackup"] = "source_backup_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).copy_backup._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "parent" in jsonified_request + assert jsonified_request["parent"] == "parent_value" + assert "backupId" in jsonified_request + assert jsonified_request["backupId"] == "backup_id_value" + assert "sourceBackup" in jsonified_request + assert jsonified_request["sourceBackup"] == "source_backup_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + # Mock the http request call within the method and fake a response. 
+ with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. + pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.copy_backup(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_copy_backup_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.copy_backup._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "parent", + "backupId", + "sourceBackup", + "expireTime", + ) + ) + ) + + +def test_copy_backup_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + + # get arguments that satisfy an http rule for this method + sample_request = {"parent": "projects/sample1/instances/sample2"} + + # get truthy value for each flattened field + mock_args = dict( + parent="parent_value", + backup_id="backup_id_value", + source_backup="source_backup_value", + expire_time=timestamp_pb2.Timestamp(seconds=751), + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.copy_backup(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{parent=projects/*/instances/*}/backups:copy" + % client.transport._host, + args[1], + ) + + +def test_copy_backup_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.copy_backup( + backup.CopyBackupRequest(), + parent="parent_value", + backup_id="backup_id_value", + source_backup="source_backup_value", + expire_time=timestamp_pb2.Timestamp(seconds=751), + ) + + +def test_get_backup_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.get_backup in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.get_backup] = mock_rpc + + request = {} + client.get_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.get_backup(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_get_backup_rest_required_fields(request_type=backup.GetBackupRequest): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["name"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).get_backup._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["name"] = "name_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).get_backup._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "name" in jsonified_request + assert jsonified_request["name"] == "name_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = backup.Backup() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
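+            # Unlike the POST tests above, this GET method carries no request
+            # body, so transcode_result deliberately omits the "body" key.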
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "get", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = backup.Backup.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.get_backup(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_get_backup_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.get_backup._get_unset_required_fields({}) + assert set(unset_fields) == (set(()) & set(("name",))) + + +def test_get_backup_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = backup.Backup() + + # get arguments that satisfy an http rule for this method + sample_request = {"name": "projects/sample1/instances/sample2/backups/sample3"} + + # get truthy value for each flattened field + mock_args = dict( + name="name_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = backup.Backup.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.get_backup(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{name=projects/*/instances/*/backups/*}" % client.transport._host, + args[1], + ) + + +def test_get_backup_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.get_backup( + backup.GetBackupRequest(), + name="name_value", + ) + + +def test_update_backup_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.update_backup in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.update_backup] = mock_rpc + + request = {} + client.update_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.update_backup(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_update_backup_rest_required_fields( + request_type=gsad_backup.UpdateBackupRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).update_backup._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).update_backup._get_unset_required_fields(jsonified_request) + # Check that path parameters and body parameters are not mixing in. + assert not set(unset_fields) - set(("update_mask",)) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = gsad_backup.Backup() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "patch", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = gsad_backup.Backup.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.update_backup(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_update_backup_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.update_backup._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(("updateMask",)) + & set( + ( + "backup", + "updateMask", + ) + ) + ) + + +def test_update_backup_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = gsad_backup.Backup() + + # get arguments that satisfy an http rule for this method + sample_request = { + "backup": {"name": "projects/sample1/instances/sample2/backups/sample3"} + } + + # get truthy value for each flattened field + mock_args = dict( + backup=gsad_backup.Backup(database="database_value"), + update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]), + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = gsad_backup.Backup.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.update_backup(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{backup.name=projects/*/instances/*/backups/*}" + % client.transport._host, + args[1], + ) + + +def test_update_backup_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.update_backup( + gsad_backup.UpdateBackupRequest(), + backup=gsad_backup.Backup(database="database_value"), + update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]), + ) + + +def test_delete_backup_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.delete_backup in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.delete_backup] = mock_rpc + + request = {} + client.delete_backup(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.delete_backup(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_delete_backup_rest_required_fields(request_type=backup.DeleteBackupRequest): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["name"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).delete_backup._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["name"] = "name_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).delete_backup._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "name" in jsonified_request + assert jsonified_request["name"] == "name_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = None + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
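+            # The RPC returns an empty message, so the faked response body
+            # below is an empty string rather than serialized JSON.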
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "delete", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = "" + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.delete_backup(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_delete_backup_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.delete_backup._get_unset_required_fields({}) + assert set(unset_fields) == (set(()) & set(("name",))) + + +def test_delete_backup_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = None + + # get arguments that satisfy an http rule for this method + sample_request = {"name": "projects/sample1/instances/sample2/backups/sample3"} + + # get truthy value for each flattened field + mock_args = dict( + name="name_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = "" + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.delete_backup(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{name=projects/*/instances/*/backups/*}" % client.transport._host, + args[1], + ) + + +def test_delete_backup_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.delete_backup( + backup.DeleteBackupRequest(), + name="name_value", + ) + + +def test_list_backups_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.list_backups in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. 
+ ) + client._transport._wrapped_methods[client._transport.list_backups] = mock_rpc + + request = {} + client.list_backups(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.list_backups(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_list_backups_rest_required_fields(request_type=backup.ListBackupsRequest): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["parent"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_backups._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["parent"] = "parent_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_backups._get_unset_required_fields(jsonified_request) + # Check that path parameters and body parameters are not mixing in. + assert not set(unset_fields) - set( + ( + "filter", + "page_size", + "page_token", + ) + ) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "parent" in jsonified_request + assert jsonified_request["parent"] == "parent_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = backup.ListBackupsResponse() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "get", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = backup.ListBackupsResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.list_backups(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_list_backups_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.list_backups._get_unset_required_fields({}) + assert set(unset_fields) == ( + set( + ( + "filter", + "pageSize", + "pageToken", + ) + ) + & set(("parent",)) + ) + + +def test_list_backups_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = backup.ListBackupsResponse() + + # get arguments that satisfy an http rule for this method + sample_request = {"parent": "projects/sample1/instances/sample2"} + + # get truthy value for each flattened field + mock_args = dict( + parent="parent_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = backup.ListBackupsResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.list_backups(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{parent=projects/*/instances/*}/backups" % client.transport._host, + args[1], + ) + + +def test_list_backups_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.list_backups( + backup.ListBackupsRequest(), + parent="parent_value", + ) + + +def test_list_backups_rest_pager(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # TODO(kbandes): remove this mock unless there's a good reason for it. 
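+        # The transcode mock referenced in the TODO above stays commented out:
+        # sample_request uses a valid parent path, so the real transcode()
+        # succeeds and the Session mock simply replays one page per request.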
+ # with mock.patch.object(path_template, 'transcode') as transcode: + # Set the response as a series of pages + response = ( + backup.ListBackupsResponse( + backups=[ + backup.Backup(), + backup.Backup(), + backup.Backup(), + ], + next_page_token="abc", + ), + backup.ListBackupsResponse( + backups=[], + next_page_token="def", + ), + backup.ListBackupsResponse( + backups=[ + backup.Backup(), + ], + next_page_token="ghi", + ), + backup.ListBackupsResponse( + backups=[ + backup.Backup(), + backup.Backup(), + ], + ), + ) + # Two responses for two calls + response = response + response + + # Wrap the values into proper Response objs + response = tuple(backup.ListBackupsResponse.to_json(x) for x in response) + return_values = tuple(Response() for i in response) + for return_val, response_val in zip(return_values, response): + return_val._content = response_val.encode("UTF-8") + return_val.status_code = 200 + req.side_effect = return_values + + sample_request = {"parent": "projects/sample1/instances/sample2"} + + pager = client.list_backups(request=sample_request) + + results = list(pager) + assert len(results) == 6 + assert all(isinstance(i, backup.Backup) for i in results) + + pages = list(client.list_backups(request=sample_request).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +def test_restore_database_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.restore_database in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.restore_database + ] = mock_rpc + + request = {} + client.restore_database(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + # Operation methods build a cached wrapper on first rpc call + # subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.restore_database(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_restore_database_rest_required_fields( + request_type=spanner_database_admin.RestoreDatabaseRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["parent"] = "" + request_init["database_id"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).restore_database._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["parent"] = "parent_value" + jsonified_request["databaseId"] = "database_id_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).restore_database._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "parent" in jsonified_request + assert jsonified_request["parent"] == "parent_value" + assert "databaseId" in jsonified_request + assert jsonified_request["databaseId"] == "database_id_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.restore_database(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_restore_database_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.restore_database._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "parent", + "databaseId", + ) + ) + ) + + +def test_restore_database_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + + # get arguments that satisfy an http rule for this method + sample_request = {"parent": "projects/sample1/instances/sample2"} + + # get truthy value for each flattened field + mock_args = dict( + parent="parent_value", + database_id="database_id_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.restore_database(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{parent=projects/*/instances/*}/databases:restore" + % client.transport._host, + args[1], + ) + + +def test_restore_database_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.restore_database( + spanner_database_admin.RestoreDatabaseRequest(), + parent="parent_value", + database_id="database_id_value", + backup="backup_value", + ) + + +def test_list_database_operations_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.list_database_operations + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.list_database_operations + ] = mock_rpc + + request = {} + client.list_database_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.list_database_operations(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_list_database_operations_rest_required_fields( + request_type=spanner_database_admin.ListDatabaseOperationsRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["parent"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_database_operations._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["parent"] = "parent_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_database_operations._get_unset_required_fields(jsonified_request) + # Check that path parameters and body parameters are not mixing in. + assert not set(unset_fields) - set( + ( + "filter", + "page_size", + "page_token", + ) + ) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "parent" in jsonified_request + assert jsonified_request["parent"] == "parent_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner_database_admin.ListDatabaseOperationsResponse() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "get", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_database_admin.ListDatabaseOperationsResponse.pb( + return_value + ) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.list_database_operations(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_list_database_operations_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.list_database_operations._get_unset_required_fields({}) + assert set(unset_fields) == ( + set( + ( + "filter", + "pageSize", + "pageToken", + ) + ) + & set(("parent",)) + ) + + +def test_list_database_operations_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_database_admin.ListDatabaseOperationsResponse() + + # get arguments that satisfy an http rule for this method + sample_request = {"parent": "projects/sample1/instances/sample2"} + + # get truthy value for each flattened field + mock_args = dict( + parent="parent_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = spanner_database_admin.ListDatabaseOperationsResponse.pb( + return_value + ) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.list_database_operations(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{parent=projects/*/instances/*}/databaseOperations" + % client.transport._host, + args[1], + ) + + +def test_list_database_operations_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.list_database_operations( + spanner_database_admin.ListDatabaseOperationsRequest(), + parent="parent_value", + ) + + +def test_list_database_operations_rest_pager(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Mock the http request call within the method and fake a response. 
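+    # Pager tests feed the Session mock a sequence of page responses through
+    # side_effect (one per fetch); iterating the pager should yield all 6
+    # operations and walk the page tokens "abc" -> "def" -> "ghi" -> "".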
+ with mock.patch.object(Session, "request") as req: + # TODO(kbandes): remove this mock unless there's a good reason for it. + # with mock.patch.object(path_template, 'transcode') as transcode: + # Set the response as a series of pages + response = ( + spanner_database_admin.ListDatabaseOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + next_page_token="abc", + ), + spanner_database_admin.ListDatabaseOperationsResponse( + operations=[], + next_page_token="def", + ), + spanner_database_admin.ListDatabaseOperationsResponse( + operations=[ + operations_pb2.Operation(), + ], + next_page_token="ghi", + ), + spanner_database_admin.ListDatabaseOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + ), + ) + # Two responses for two calls + response = response + response + + # Wrap the values into proper Response objs + response = tuple( + spanner_database_admin.ListDatabaseOperationsResponse.to_json(x) + for x in response + ) + return_values = tuple(Response() for i in response) + for return_val, response_val in zip(return_values, response): + return_val._content = response_val.encode("UTF-8") + return_val.status_code = 200 + req.side_effect = return_values + + sample_request = {"parent": "projects/sample1/instances/sample2"} + + pager = client.list_database_operations(request=sample_request) + + results = list(pager) + assert len(results) == 6 + assert all(isinstance(i, operations_pb2.Operation) for i in results) + + pages = list(client.list_database_operations(request=sample_request).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +def test_list_backup_operations_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.list_backup_operations + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.list_backup_operations + ] = mock_rpc + + request = {} + client.list_backup_operations(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.list_backup_operations(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_list_backup_operations_rest_required_fields( + request_type=backup.ListBackupOperationsRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["parent"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_backup_operations._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["parent"] = "parent_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_backup_operations._get_unset_required_fields(jsonified_request) + # Check that path parameters and body parameters are not mixing in. + assert not set(unset_fields) - set( + ( + "filter", + "page_size", + "page_token", + ) + ) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "parent" in jsonified_request + assert jsonified_request["parent"] == "parent_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = backup.ListBackupOperationsResponse() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "get", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = backup.ListBackupOperationsResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.list_backup_operations(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_list_backup_operations_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.list_backup_operations._get_unset_required_fields({}) + assert set(unset_fields) == ( + set( + ( + "filter", + "pageSize", + "pageToken", + ) + ) + & set(("parent",)) + ) + + +def test_list_backup_operations_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = backup.ListBackupOperationsResponse() + + # get arguments that satisfy an http rule for this method + sample_request = {"parent": "projects/sample1/instances/sample2"} + + # get truthy value for each flattened field + mock_args = dict( + parent="parent_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = backup.ListBackupOperationsResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.list_backup_operations(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{parent=projects/*/instances/*}/backupOperations" + % client.transport._host, + args[1], + ) + + +def test_list_backup_operations_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.list_backup_operations( + backup.ListBackupOperationsRequest(), + parent="parent_value", + ) + + +def test_list_backup_operations_rest_pager(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # TODO(kbandes): remove this mock unless there's a good reason for it. 
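+        # Each HTTP call below returns one page, and the pager follows
+        # next_page_token until it comes back empty. The four pages are
+        # doubled so that the flat iteration and the `.pages` iteration
+        # further down each consume a full sequence of responses.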
+ # with mock.patch.object(path_template, 'transcode') as transcode: + # Set the response as a series of pages + response = ( + backup.ListBackupOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + next_page_token="abc", + ), + backup.ListBackupOperationsResponse( + operations=[], + next_page_token="def", + ), + backup.ListBackupOperationsResponse( + operations=[ + operations_pb2.Operation(), + ], + next_page_token="ghi", + ), + backup.ListBackupOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + ), + ) + # Two responses for two calls + response = response + response + + # Wrap the values into proper Response objs + response = tuple( + backup.ListBackupOperationsResponse.to_json(x) for x in response + ) + return_values = tuple(Response() for i in response) + for return_val, response_val in zip(return_values, response): + return_val._content = response_val.encode("UTF-8") + return_val.status_code = 200 + req.side_effect = return_values + + sample_request = {"parent": "projects/sample1/instances/sample2"} + + pager = client.list_backup_operations(request=sample_request) + + results = list(pager) + assert len(results) == 6 + assert all(isinstance(i, operations_pb2.Operation) for i in results) + + pages = list(client.list_backup_operations(request=sample_request).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +def test_list_database_roles_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.list_database_roles in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.list_database_roles + ] = mock_rpc + + request = {} + client.list_database_roles(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.list_database_roles(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_list_database_roles_rest_required_fields( + request_type=spanner_database_admin.ListDatabaseRolesRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["parent"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_database_roles._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["parent"] = "parent_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_database_roles._get_unset_required_fields(jsonified_request) + # Check that path parameters and body parameters are not mixing in. + assert not set(unset_fields) - set( + ( + "page_size", + "page_token", + ) + ) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "parent" in jsonified_request + assert jsonified_request["parent"] == "parent_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner_database_admin.ListDatabaseRolesResponse() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "get", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_database_admin.ListDatabaseRolesResponse.pb( + return_value + ) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.list_database_roles(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_list_database_roles_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.list_database_roles._get_unset_required_fields({}) + assert set(unset_fields) == ( + set( + ( + "pageSize", + "pageToken", + ) + ) + & set(("parent",)) + ) + + +def test_list_database_roles_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_database_admin.ListDatabaseRolesResponse() + + # get arguments that satisfy an http rule for this method + sample_request = { + "parent": "projects/sample1/instances/sample2/databases/sample3" + } + + # get truthy value for each flattened field + mock_args = dict( + parent="parent_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = spanner_database_admin.ListDatabaseRolesResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.list_database_roles(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{parent=projects/*/instances/*/databases/*}/databaseRoles" + % client.transport._host, + args[1], + ) + + +def test_list_database_roles_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.list_database_roles( + spanner_database_admin.ListDatabaseRolesRequest(), + parent="parent_value", + ) + + +def test_list_database_roles_rest_pager(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # TODO(kbandes): remove this mock unless there's a good reason for it. 
+ # with mock.patch.object(path_template, 'transcode') as transcode: + # Set the response as a series of pages + response = ( + spanner_database_admin.ListDatabaseRolesResponse( + database_roles=[ + spanner_database_admin.DatabaseRole(), + spanner_database_admin.DatabaseRole(), + spanner_database_admin.DatabaseRole(), + ], + next_page_token="abc", + ), + spanner_database_admin.ListDatabaseRolesResponse( + database_roles=[], + next_page_token="def", + ), + spanner_database_admin.ListDatabaseRolesResponse( + database_roles=[ + spanner_database_admin.DatabaseRole(), + ], + next_page_token="ghi", + ), + spanner_database_admin.ListDatabaseRolesResponse( + database_roles=[ + spanner_database_admin.DatabaseRole(), + spanner_database_admin.DatabaseRole(), + ], + ), + ) + # Two responses for two calls + response = response + response + + # Wrap the values into proper Response objs + response = tuple( + spanner_database_admin.ListDatabaseRolesResponse.to_json(x) + for x in response + ) + return_values = tuple(Response() for i in response) + for return_val, response_val in zip(return_values, response): + return_val._content = response_val.encode("UTF-8") + return_val.status_code = 200 + req.side_effect = return_values + + sample_request = { + "parent": "projects/sample1/instances/sample2/databases/sample3" + } + + pager = client.list_database_roles(request=sample_request) + + results = list(pager) + assert len(results) == 6 + assert all(isinstance(i, spanner_database_admin.DatabaseRole) for i in results) + + pages = list(client.list_database_roles(request=sample_request).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +def test_add_split_points_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.add_split_points in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.add_split_points + ] = mock_rpc + + request = {} + client.add_split_points(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.add_split_points(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_add_split_points_rest_required_fields( + request_type=spanner_database_admin.AddSplitPointsRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["database"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).add_split_points._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["database"] = "database_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).add_split_points._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "database" in jsonified_request + assert jsonified_request["database"] == "database_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner_database_admin.AddSplitPointsResponse() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. + pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_database_admin.AddSplitPointsResponse.pb( + return_value + ) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.add_split_points(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_add_split_points_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.add_split_points._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "database", + "splitPoints", + ) + ) + ) + + +def test_add_split_points_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. 
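+    # (Patching `request` on the session's class, rather than on a single
+    # instance, keeps the mock in effect no matter which Session object the
+    # transport ends up using.)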
+ with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_database_admin.AddSplitPointsResponse() + + # get arguments that satisfy an http rule for this method + sample_request = { + "database": "projects/sample1/instances/sample2/databases/sample3" + } + + # get truthy value for each flattened field + mock_args = dict( + database="database_value", + split_points=[spanner_database_admin.SplitPoints(table="table_value")], + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = spanner_database_admin.AddSplitPointsResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.add_split_points(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{database=projects/*/instances/*/databases/*}:addSplitPoints" + % client.transport._host, + args[1], + ) + + +def test_add_split_points_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.add_split_points( + spanner_database_admin.AddSplitPointsRequest(), + database="database_value", + split_points=[spanner_database_admin.SplitPoints(table="table_value")], + ) + + +def test_create_backup_schedule_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.create_backup_schedule + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.create_backup_schedule + ] = mock_rpc + + request = {} + client.create_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.create_backup_schedule(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_create_backup_schedule_rest_required_fields( + request_type=gsad_backup_schedule.CreateBackupScheduleRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["parent"] = "" + request_init["backup_schedule_id"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + assert "backupScheduleId" not in jsonified_request + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).create_backup_schedule._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + assert "backupScheduleId" in jsonified_request + assert jsonified_request["backupScheduleId"] == request_init["backup_schedule_id"] + + jsonified_request["parent"] = "parent_value" + jsonified_request["backupScheduleId"] = "backup_schedule_id_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).create_backup_schedule._get_unset_required_fields(jsonified_request) + # Check that path parameters and body parameters are not mixing in. + assert not set(unset_fields) - set(("backup_schedule_id",)) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "parent" in jsonified_request + assert jsonified_request["parent"] == "parent_value" + assert "backupScheduleId" in jsonified_request + assert jsonified_request["backupScheduleId"] == "backup_schedule_id_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = gsad_backup_schedule.BackupSchedule() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
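+            # (This method maps to an HTTP POST, so the transcoded result
+            # below also carries the request as the body, in addition to
+            # the query params asserted on at the end.)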
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = gsad_backup_schedule.BackupSchedule.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.create_backup_schedule(request) + + expected_params = [ + ( + "backupScheduleId", + "", + ), + ("$alt", "json;enum-encoding=int"), + ] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_create_backup_schedule_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.create_backup_schedule._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(("backupScheduleId",)) + & set( + ( + "parent", + "backupScheduleId", + "backupSchedule", + ) + ) + ) + + +def test_create_backup_schedule_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = gsad_backup_schedule.BackupSchedule() + + # get arguments that satisfy an http rule for this method + sample_request = { + "parent": "projects/sample1/instances/sample2/databases/sample3" + } + + # get truthy value for each flattened field + mock_args = dict( + parent="parent_value", + backup_schedule=gsad_backup_schedule.BackupSchedule(name="name_value"), + backup_schedule_id="backup_schedule_id_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = gsad_backup_schedule.BackupSchedule.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.create_backup_schedule(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{parent=projects/*/instances/*/databases/*}/backupSchedules" + % client.transport._host, + args[1], + ) + + +def test_create_backup_schedule_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.create_backup_schedule( + gsad_backup_schedule.CreateBackupScheduleRequest(), + parent="parent_value", + backup_schedule=gsad_backup_schedule.BackupSchedule(name="name_value"), + backup_schedule_id="backup_schedule_id_value", + ) + + +def test_get_backup_schedule_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.get_backup_schedule in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.get_backup_schedule + ] = mock_rpc + + request = {} + client.get_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.get_backup_schedule(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_get_backup_schedule_rest_required_fields( + request_type=backup_schedule.GetBackupScheduleRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["name"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).get_backup_schedule._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["name"] = "name_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).get_backup_schedule._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "name" in jsonified_request + assert jsonified_request["name"] == "name_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = backup_schedule.BackupSchedule() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "get", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = backup_schedule.BackupSchedule.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.get_backup_schedule(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_get_backup_schedule_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.get_backup_schedule._get_unset_required_fields({}) + assert set(unset_fields) == (set(()) & set(("name",))) + + +def test_get_backup_schedule_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = backup_schedule.BackupSchedule() + + # get arguments that satisfy an http rule for this method + sample_request = { + "name": "projects/sample1/instances/sample2/databases/sample3/backupSchedules/sample4" + } + + # get truthy value for each flattened field + mock_args = dict( + name="name_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = backup_schedule.BackupSchedule.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.get_backup_schedule(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{name=projects/*/instances/*/databases/*/backupSchedules/*}" + % client.transport._host, + args[1], + ) + + +def test_get_backup_schedule_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.get_backup_schedule( + backup_schedule.GetBackupScheduleRequest(), + name="name_value", + ) + + +def test_update_backup_schedule_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.update_backup_schedule + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.update_backup_schedule + ] = mock_rpc + + request = {} + client.update_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.update_backup_schedule(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_update_backup_schedule_rest_required_fields( + request_type=gsad_backup_schedule.UpdateBackupScheduleRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).update_backup_schedule._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).update_backup_schedule._get_unset_required_fields(jsonified_request) + # Check that path parameters and body parameters are not mixing in. + assert not set(unset_fields) - set(("update_mask",)) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = gsad_backup_schedule.BackupSchedule() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "patch", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = gsad_backup_schedule.BackupSchedule.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.update_backup_schedule(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_update_backup_schedule_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.update_backup_schedule._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(("updateMask",)) + & set( + ( + "backupSchedule", + "updateMask", + ) + ) + ) + + +def test_update_backup_schedule_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = gsad_backup_schedule.BackupSchedule() + + # get arguments that satisfy an http rule for this method + sample_request = { + "backup_schedule": { + "name": "projects/sample1/instances/sample2/databases/sample3/backupSchedules/sample4" + } + } + + # get truthy value for each flattened field + mock_args = dict( + backup_schedule=gsad_backup_schedule.BackupSchedule(name="name_value"), + update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]), + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = gsad_backup_schedule.BackupSchedule.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.update_backup_schedule(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{backup_schedule.name=projects/*/instances/*/databases/*/backupSchedules/*}" + % client.transport._host, + args[1], + ) + + +def test_update_backup_schedule_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.update_backup_schedule( + gsad_backup_schedule.UpdateBackupScheduleRequest(), + backup_schedule=gsad_backup_schedule.BackupSchedule(name="name_value"), + update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]), + ) + + +def test_delete_backup_schedule_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.delete_backup_schedule + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.delete_backup_schedule + ] = mock_rpc + + request = {} + client.delete_backup_schedule(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.delete_backup_schedule(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_delete_backup_schedule_rest_required_fields( + request_type=backup_schedule.DeleteBackupScheduleRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["name"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).delete_backup_schedule._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["name"] = "name_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).delete_backup_schedule._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "name" in jsonified_request + assert jsonified_request["name"] == "name_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = None + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
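+            # (DeleteBackupSchedule has an empty response message, so the
+            # faked HTTP response below carries an empty string as its body
+            # and the designated return value above is None.)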
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "delete", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = "" + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.delete_backup_schedule(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_delete_backup_schedule_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.delete_backup_schedule._get_unset_required_fields({}) + assert set(unset_fields) == (set(()) & set(("name",))) + + +def test_delete_backup_schedule_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = None + + # get arguments that satisfy an http rule for this method + sample_request = { + "name": "projects/sample1/instances/sample2/databases/sample3/backupSchedules/sample4" + } + + # get truthy value for each flattened field + mock_args = dict( + name="name_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = "" + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.delete_backup_schedule(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{name=projects/*/instances/*/databases/*/backupSchedules/*}" + % client.transport._host, + args[1], + ) + + +def test_delete_backup_schedule_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.delete_backup_schedule( + backup_schedule.DeleteBackupScheduleRequest(), + name="name_value", + ) + + +def test_list_backup_schedules_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.list_backup_schedules + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.list_backup_schedules + ] = mock_rpc + + request = {} + client.list_backup_schedules(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.list_backup_schedules(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_list_backup_schedules_rest_required_fields( + request_type=backup_schedule.ListBackupSchedulesRequest, +): + transport_class = transports.DatabaseAdminRestTransport + + request_init = {} + request_init["parent"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_backup_schedules._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["parent"] = "parent_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_backup_schedules._get_unset_required_fields(jsonified_request) + # Check that path parameters and body parameters are not mixing in. + assert not set(unset_fields) - set( + ( + "page_size", + "page_token", + ) + ) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "parent" in jsonified_request + assert jsonified_request["parent"] == "parent_value" + + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = backup_schedule.ListBackupSchedulesResponse() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "get", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = backup_schedule.ListBackupSchedulesResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.list_backup_schedules(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_list_backup_schedules_rest_unset_required_fields(): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.list_backup_schedules._get_unset_required_fields({}) + assert set(unset_fields) == ( + set( + ( + "pageSize", + "pageToken", + ) + ) + & set(("parent",)) + ) + + +def test_list_backup_schedules_rest_flattened(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = backup_schedule.ListBackupSchedulesResponse() + + # get arguments that satisfy an http rule for this method + sample_request = { + "parent": "projects/sample1/instances/sample2/databases/sample3" + } + + # get truthy value for each flattened field + mock_args = dict( + parent="parent_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = backup_schedule.ListBackupSchedulesResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.list_backup_schedules(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{parent=projects/*/instances/*/databases/*}/backupSchedules" + % client.transport._host, + args[1], + ) + + +def test_list_backup_schedules_rest_flattened_error(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.list_backup_schedules( + backup_schedule.ListBackupSchedulesRequest(), + parent="parent_value", + ) + + +def test_list_backup_schedules_rest_pager(transport: str = "rest"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # TODO(kbandes): remove this mock unless there's a good reason for it. 
+ # with mock.patch.object(path_template, 'transcode') as transcode: + # Set the response as a series of pages + response = ( + backup_schedule.ListBackupSchedulesResponse( + backup_schedules=[ + backup_schedule.BackupSchedule(), + backup_schedule.BackupSchedule(), + backup_schedule.BackupSchedule(), + ], + next_page_token="abc", + ), + backup_schedule.ListBackupSchedulesResponse( + backup_schedules=[], + next_page_token="def", + ), + backup_schedule.ListBackupSchedulesResponse( + backup_schedules=[ + backup_schedule.BackupSchedule(), + ], + next_page_token="ghi", + ), + backup_schedule.ListBackupSchedulesResponse( + backup_schedules=[ + backup_schedule.BackupSchedule(), + backup_schedule.BackupSchedule(), + ], + ), + ) + # Two responses for two calls + response = response + response + + # Wrap the values into proper Response objs + response = tuple( + backup_schedule.ListBackupSchedulesResponse.to_json(x) for x in response + ) + return_values = tuple(Response() for i in response) + for return_val, response_val in zip(return_values, response): + return_val._content = response_val.encode("UTF-8") + return_val.status_code = 200 + req.side_effect = return_values + + sample_request = { + "parent": "projects/sample1/instances/sample2/databases/sample3" + } + + pager = client.list_backup_schedules(request=sample_request) + + results = list(pager) + assert len(results) == 6 + assert all(isinstance(i, backup_schedule.BackupSchedule) for i in results) + + pages = list(client.list_backup_schedules(request=sample_request).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +def test_internal_update_graph_operation_rest_no_http_options(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = spanner_database_admin.InternalUpdateGraphOperationRequest() + with pytest.raises(RuntimeError): + client.internal_update_graph_operation(request) + + +def test_internal_update_graph_operation_rest_error(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # Since a `google.api.http` annotation is required for using a rest transport + # method, this should error. + with pytest.raises(NotImplementedError) as not_implemented_error: + client.internal_update_graph_operation({}) + assert ( + "Method InternalUpdateGraphOperation is not available over REST transport" + in str(not_implemented_error.value) + ) + + +def test_credentials_transport_error(): + # It is an error to provide credentials and a transport instance. + transport = transports.DatabaseAdminGrpcTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # It is an error to provide a credentials file and a transport instance. + transport = transports.DatabaseAdminGrpcTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = DatabaseAdminClient( + client_options={"credentials_file": "credentials.json"}, + transport=transport, + ) + + # It is an error to provide an api_key and a transport instance. 
+ transport = transports.DatabaseAdminGrpcTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + options = client_options.ClientOptions() + options.api_key = "api_key" + with pytest.raises(ValueError): + client = DatabaseAdminClient( + client_options=options, + transport=transport, + ) + + # It is an error to provide an api_key and a credential. + options = client_options.ClientOptions() + options.api_key = "api_key" + with pytest.raises(ValueError): + client = DatabaseAdminClient( + client_options=options, credentials=ga_credentials.AnonymousCredentials() + ) + + # It is an error to provide scopes and a transport instance. + transport = transports.DatabaseAdminGrpcTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = DatabaseAdminClient( + client_options={"scopes": ["1", "2"]}, + transport=transport, + ) + + +def test_transport_instance(): + # A client may be instantiated with a custom transport instance. + transport = transports.DatabaseAdminGrpcTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + client = DatabaseAdminClient(transport=transport) + assert client.transport is transport + + +def test_transport_get_channel(): + # A client may be instantiated with a custom transport instance. + transport = transports.DatabaseAdminGrpcTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + channel = transport.grpc_channel + assert channel + + transport = transports.DatabaseAdminGrpcAsyncIOTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + channel = transport.grpc_channel + assert channel + + +@pytest.mark.parametrize( + "transport_class", + [ + transports.DatabaseAdminGrpcTransport, + transports.DatabaseAdminGrpcAsyncIOTransport, + transports.DatabaseAdminRestTransport, + ], +) +def test_transport_adc(transport_class): + # Test default credentials are used if not provided. + with mock.patch.object(google.auth, "default") as adc: + adc.return_value = (ga_credentials.AnonymousCredentials(), None) + transport_class() + adc.assert_called_once() + + +def test_transport_kind_grpc(): + transport = DatabaseAdminClient.get_transport_class("grpc")( + credentials=ga_credentials.AnonymousCredentials() + ) + assert transport.kind == "grpc" + + +def test_initialize_client_w_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="grpc" + ) + assert client is not None + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_databases_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.list_databases), "__call__") as call: + call.return_value = spanner_database_admin.ListDatabasesResponse() + client.list_databases(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.ListDatabasesRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. 
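+# (Passing request=None makes the client construct a default, empty request
+# message, which is what the `args[0] == request_msg` assertion verifies.)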
+def test_create_database_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.create_database), "__call__") as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.create_database(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.CreateDatabaseRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_get_database_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.get_database), "__call__") as call: + call.return_value = spanner_database_admin.Database() + client.get_database(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.GetDatabaseRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_update_database_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.update_database), "__call__") as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.update_database(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.UpdateDatabaseRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_update_database_ddl_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.update_database_ddl), "__call__" + ) as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.update_database_ddl(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.UpdateDatabaseDdlRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_drop_database_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.drop_database), "__call__") as call: + call.return_value = None + client.drop_database(request=None) + + # Establish that the underlying stub method was called. 
+ call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.DropDatabaseRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_get_database_ddl_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.get_database_ddl), "__call__") as call: + call.return_value = spanner_database_admin.GetDatabaseDdlResponse() + client.get_database_ddl(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.GetDatabaseDdlRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_set_iam_policy_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call: + call.return_value = policy_pb2.Policy() + client.set_iam_policy(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = iam_policy_pb2.SetIamPolicyRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_get_iam_policy_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call: + call.return_value = policy_pb2.Policy() + client.get_iam_policy(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = iam_policy_pb2.GetIamPolicyRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_test_iam_permissions_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.test_iam_permissions), "__call__" + ) as call: + call.return_value = iam_policy_pb2.TestIamPermissionsResponse() + client.test_iam_permissions(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = iam_policy_pb2.TestIamPermissionsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_create_backup_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. 
+ with mock.patch.object(type(client.transport.create_backup), "__call__") as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.create_backup(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = gsad_backup.CreateBackupRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_copy_backup_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.copy_backup), "__call__") as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.copy_backup(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup.CopyBackupRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_get_backup_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.get_backup), "__call__") as call: + call.return_value = backup.Backup() + client.get_backup(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup.GetBackupRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_update_backup_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.update_backup), "__call__") as call: + call.return_value = gsad_backup.Backup() + client.update_backup(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = gsad_backup.UpdateBackupRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_delete_backup_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.delete_backup), "__call__") as call: + call.return_value = None + client.delete_backup(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup.DeleteBackupRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_backups_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. 
+ with mock.patch.object(type(client.transport.list_backups), "__call__") as call: + call.return_value = backup.ListBackupsResponse() + client.list_backups(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup.ListBackupsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_restore_database_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.restore_database), "__call__") as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.restore_database(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.RestoreDatabaseRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_database_operations_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_database_operations), "__call__" + ) as call: + call.return_value = spanner_database_admin.ListDatabaseOperationsResponse() + client.list_database_operations(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.ListDatabaseOperationsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_backup_operations_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_operations), "__call__" + ) as call: + call.return_value = backup.ListBackupOperationsResponse() + client.list_backup_operations(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup.ListBackupOperationsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_database_roles_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_database_roles), "__call__" + ) as call: + call.return_value = spanner_database_admin.ListDatabaseRolesResponse() + client.list_database_roles(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.ListDatabaseRolesRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. 
request == None and no flattened fields passed, work. +def test_add_split_points_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.add_split_points), "__call__") as call: + call.return_value = spanner_database_admin.AddSplitPointsResponse() + client.add_split_points(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.AddSplitPointsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_create_backup_schedule_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.create_backup_schedule), "__call__" + ) as call: + call.return_value = gsad_backup_schedule.BackupSchedule() + client.create_backup_schedule(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = gsad_backup_schedule.CreateBackupScheduleRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_get_backup_schedule_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.get_backup_schedule), "__call__" + ) as call: + call.return_value = backup_schedule.BackupSchedule() + client.get_backup_schedule(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup_schedule.GetBackupScheduleRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_update_backup_schedule_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.update_backup_schedule), "__call__" + ) as call: + call.return_value = gsad_backup_schedule.BackupSchedule() + client.update_backup_schedule(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = gsad_backup_schedule.UpdateBackupScheduleRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_delete_backup_schedule_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. 
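+    # Delete RPCs return google.protobuf.Empty, which surfaces here as None.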
+ with mock.patch.object( + type(client.transport.delete_backup_schedule), "__call__" + ) as call: + call.return_value = None + client.delete_backup_schedule(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup_schedule.DeleteBackupScheduleRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_backup_schedules_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_schedules), "__call__" + ) as call: + call.return_value = backup_schedule.ListBackupSchedulesResponse() + client.list_backup_schedules(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup_schedule.ListBackupSchedulesRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_internal_update_graph_operation_empty_call_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.internal_update_graph_operation), "__call__" + ) as call: + call.return_value = ( + spanner_database_admin.InternalUpdateGraphOperationResponse() + ) + client.internal_update_graph_operation(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.InternalUpdateGraphOperationRequest() + + assert args[0] == request_msg + + +def test_transport_kind_grpc_asyncio(): + transport = DatabaseAdminAsyncClient.get_transport_class("grpc_asyncio")( + credentials=async_anonymous_credentials() + ) + assert transport.kind == "grpc_asyncio" + + +def test_initialize_client_w_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), transport="grpc_asyncio" + ) + assert client is not None + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_list_databases_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.list_databases), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_database_admin.ListDatabasesResponse( + next_page_token="next_page_token_value", + ) + ) + await client.list_databases(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.ListDatabasesRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. 
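+# The grpc_asyncio variants wrap each stubbed response in
+# grpc_helpers_async.FakeUnaryUnaryCall so that awaiting the mocked call
+# behaves like a real unary-unary gRPC invocation.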
+@pytest.mark.asyncio +async def test_create_database_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.create_database), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + await client.create_database(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.CreateDatabaseRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_get_database_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.get_database), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_database_admin.Database( + name="name_value", + state=spanner_database_admin.Database.State.CREATING, + version_retention_period="version_retention_period_value", + default_leader="default_leader_value", + database_dialect=common.DatabaseDialect.GOOGLE_STANDARD_SQL, + enable_drop_protection=True, + reconciling=True, + ) + ) + await client.get_database(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.GetDatabaseRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_update_database_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.update_database), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + await client.update_database(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.UpdateDatabaseRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_update_database_ddl_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.update_database_ddl), "__call__" + ) as call: + # Designate an appropriate return value for the call. 
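+        # UpdateDatabaseDdl is long-running as well, hence the faked Operation.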
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + await client.update_database_ddl(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.UpdateDatabaseDdlRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_drop_database_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.drop_database), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + await client.drop_database(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.DropDatabaseRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_get_database_ddl_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.get_database_ddl), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_database_admin.GetDatabaseDdlResponse( + statements=["statements_value"], + proto_descriptors=b"proto_descriptors_blob", + ) + ) + await client.get_database_ddl(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.GetDatabaseDdlRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_set_iam_policy_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + policy_pb2.Policy( + version=774, + etag=b"etag_blob", + ) + ) + await client.set_iam_policy(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = iam_policy_pb2.SetIamPolicyRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_get_iam_policy_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. 
+ with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + policy_pb2.Policy( + version=774, + etag=b"etag_blob", + ) + ) + await client.get_iam_policy(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = iam_policy_pb2.GetIamPolicyRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_test_iam_permissions_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.test_iam_permissions), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + iam_policy_pb2.TestIamPermissionsResponse( + permissions=["permissions_value"], + ) + ) + await client.test_iam_permissions(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = iam_policy_pb2.TestIamPermissionsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_create_backup_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.create_backup), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + await client.create_backup(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = gsad_backup.CreateBackupRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_copy_backup_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.copy_backup), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + await client.copy_backup(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup.CopyBackupRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. 
+@pytest.mark.asyncio +async def test_get_backup_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.get_backup), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + backup.Backup( + database="database_value", + name="name_value", + size_bytes=1089, + freeable_size_bytes=2006, + exclusive_size_bytes=2168, + state=backup.Backup.State.CREATING, + referencing_databases=["referencing_databases_value"], + database_dialect=common.DatabaseDialect.GOOGLE_STANDARD_SQL, + referencing_backups=["referencing_backups_value"], + backup_schedules=["backup_schedules_value"], + incremental_backup_chain_id="incremental_backup_chain_id_value", + ) + ) + await client.get_backup(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup.GetBackupRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_update_backup_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.update_backup), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + gsad_backup.Backup( + database="database_value", + name="name_value", + size_bytes=1089, + freeable_size_bytes=2006, + exclusive_size_bytes=2168, + state=gsad_backup.Backup.State.CREATING, + referencing_databases=["referencing_databases_value"], + database_dialect=common.DatabaseDialect.GOOGLE_STANDARD_SQL, + referencing_backups=["referencing_backups_value"], + backup_schedules=["backup_schedules_value"], + incremental_backup_chain_id="incremental_backup_chain_id_value", + ) + ) + await client.update_backup(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = gsad_backup.UpdateBackupRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_delete_backup_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.delete_backup), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + await client.delete_backup(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup.DeleteBackupRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. 
+@pytest.mark.asyncio +async def test_list_backups_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.list_backups), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + backup.ListBackupsResponse( + next_page_token="next_page_token_value", + ) + ) + await client.list_backups(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup.ListBackupsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_restore_database_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.restore_database), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + await client.restore_database(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.RestoreDatabaseRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_list_database_operations_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_database_operations), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_database_admin.ListDatabaseOperationsResponse( + next_page_token="next_page_token_value", + ) + ) + await client.list_database_operations(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.ListDatabaseOperationsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_list_backup_operations_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_operations), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + backup.ListBackupOperationsResponse( + next_page_token="next_page_token_value", + ) + ) + await client.list_backup_operations(request=None) + + # Establish that the underlying stub method was called. 
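+    # With request=None, the client should forward a default-constructed proto.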
+ call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup.ListBackupOperationsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_list_database_roles_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_database_roles), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_database_admin.ListDatabaseRolesResponse( + next_page_token="next_page_token_value", + ) + ) + await client.list_database_roles(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.ListDatabaseRolesRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_add_split_points_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.add_split_points), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_database_admin.AddSplitPointsResponse() + ) + await client.add_split_points(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.AddSplitPointsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_create_backup_schedule_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.create_backup_schedule), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + gsad_backup_schedule.BackupSchedule( + name="name_value", + ) + ) + await client.create_backup_schedule(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = gsad_backup_schedule.CreateBackupScheduleRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_get_backup_schedule_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.get_backup_schedule), "__call__" + ) as call: + # Designate an appropriate return value for the call. 
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + backup_schedule.BackupSchedule( + name="name_value", + ) + ) + await client.get_backup_schedule(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup_schedule.GetBackupScheduleRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_update_backup_schedule_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.update_backup_schedule), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + gsad_backup_schedule.BackupSchedule( + name="name_value", + ) + ) + await client.update_backup_schedule(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = gsad_backup_schedule.UpdateBackupScheduleRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_delete_backup_schedule_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.delete_backup_schedule), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + await client.delete_backup_schedule(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup_schedule.DeleteBackupScheduleRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_list_backup_schedules_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_schedules), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + backup_schedule.ListBackupSchedulesResponse( + next_page_token="next_page_token_value", + ) + ) + await client.list_backup_schedules(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup_schedule.ListBackupSchedulesRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. 
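+# InternalUpdateGraphOperation is exercised over gRPC only; the REST tests
+# earlier in this file expect NotImplementedError for it.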
+@pytest.mark.asyncio +async def test_internal_update_graph_operation_empty_call_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.internal_update_graph_operation), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_database_admin.InternalUpdateGraphOperationResponse() + ) + await client.internal_update_graph_operation(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.InternalUpdateGraphOperationRequest() + + assert args[0] == request_msg + + +def test_transport_kind_rest(): + transport = DatabaseAdminClient.get_transport_class("rest")( + credentials=ga_credentials.AnonymousCredentials() + ) + assert transport.kind == "rest" + + +def test_list_databases_rest_bad_request( + request_type=spanner_database_admin.ListDatabasesRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.list_databases(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_database_admin.ListDatabasesRequest, + dict, + ], +) +def test_list_databases_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_database_admin.ListDatabasesResponse( + next_page_token="next_page_token_value", + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_database_admin.ListDatabasesResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.list_databases(request) + + # Establish that the response is the type that we expect. 
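+    # Paginated REST methods return a pager wrapper, not the raw response proto.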
+ assert isinstance(response, pagers.ListDatabasesPager) + assert response.next_page_token == "next_page_token_value" + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_list_databases_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_list_databases" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_list_databases_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_list_databases" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner_database_admin.ListDatabasesRequest.pb( + spanner_database_admin.ListDatabasesRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = spanner_database_admin.ListDatabasesResponse.to_json( + spanner_database_admin.ListDatabasesResponse() + ) + req.return_value.content = return_value + + request = spanner_database_admin.ListDatabasesRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = spanner_database_admin.ListDatabasesResponse() + post_with_metadata.return_value = ( + spanner_database_admin.ListDatabasesResponse(), + metadata, + ) + + client.list_databases( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_create_database_rest_bad_request( + request_type=spanner_database_admin.CreateDatabaseRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
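+    # A 400 status on the underlying requests.Session call should surface as
+    # core_exceptions.BadRequest from the client method.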
+    with mock.patch.object(Session, "request") as req, pytest.raises(
+        core_exceptions.BadRequest
+    ):
+        # Wrap the value into a proper Response obj
+        response_value = mock.Mock()
+        json_return_value = ""
+        response_value.json = mock.Mock(return_value={})
+        response_value.status_code = 400
+        response_value.request = mock.Mock()
+        req.return_value = response_value
+        req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+        client.create_database(request)
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner_database_admin.CreateDatabaseRequest,
+        dict,
+    ],
+)
+def test_create_database_rest_call_success(request_type):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(), transport="rest"
+    )
+
+    # send a request that will satisfy transcoding
+    request_init = {"parent": "projects/sample1/instances/sample2"}
+    request = request_type(**request_init)
+
+    # Mock the http request call within the method and fake a response.
+    with mock.patch.object(type(client.transport._session), "request") as req:
+        # Designate an appropriate value for the returned response.
+        return_value = operations_pb2.Operation(name="operations/spam")
+
+        # Wrap the value into a proper Response obj
+        response_value = mock.Mock()
+        response_value.status_code = 200
+        json_return_value = json_format.MessageToJson(return_value)
+        response_value.content = json_return_value.encode("UTF-8")
+        req.return_value = response_value
+        req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+        response = client.create_database(request)
+
+    # Establish that the response is the type that we expect.
+    assert response.operation.name == "operations/spam"
+
+
+@pytest.mark.parametrize("null_interceptor", [True, False])
+def test_create_database_rest_interceptors(null_interceptor):
+    transport = transports.DatabaseAdminRestTransport(
+        credentials=ga_credentials.AnonymousCredentials(),
+        interceptor=None
+        if null_interceptor
+        else transports.DatabaseAdminRestInterceptor(),
+    )
+    client = DatabaseAdminClient(transport=transport)
+
+    with mock.patch.object(
+        type(client.transport._session), "request"
+    ) as req, mock.patch.object(
+        path_template, "transcode"
+    ) as transcode, mock.patch.object(
+        operation.Operation, "_set_result_from_operation"
+    ), mock.patch.object(
+        transports.DatabaseAdminRestInterceptor, "post_create_database"
+    ) as post, mock.patch.object(
+        transports.DatabaseAdminRestInterceptor, "post_create_database_with_metadata"
+    ) as post_with_metadata, mock.patch.object(
+        transports.DatabaseAdminRestInterceptor, "pre_create_database"
+    ) as pre:
+        pre.assert_not_called()
+        post.assert_not_called()
+        post_with_metadata.assert_not_called()
+        pb_message = spanner_database_admin.CreateDatabaseRequest.pb(
+            spanner_database_admin.CreateDatabaseRequest()
+        )
+        transcode.return_value = {
+            "method": "post",
+            "uri": "my_uri",
+            "body": pb_message,
+            "query_params": pb_message,
+        }
+
+        req.return_value = mock.Mock()
+        req.return_value.status_code = 200
+        req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+        return_value = json_format.MessageToJson(operations_pb2.Operation())
+        req.return_value.content = return_value
+
+        request = spanner_database_admin.CreateDatabaseRequest()
+        metadata = [
+            ("key", "val"),
+            ("cephalopod", "squid"),
+        ]
+        pre.return_value = request, metadata
+        post.return_value = operations_pb2.Operation()
+        post_with_metadata.return_value = operations_pb2.Operation(), metadata
+
+        client.create_database(
+
request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_get_database_rest_bad_request( + request_type=spanner_database_admin.GetDatabaseRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"name": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.get_database(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_database_admin.GetDatabaseRequest, + dict, + ], +) +def test_get_database_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"name": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_database_admin.Database( + name="name_value", + state=spanner_database_admin.Database.State.CREATING, + version_retention_period="version_retention_period_value", + default_leader="default_leader_value", + database_dialect=common.DatabaseDialect.GOOGLE_STANDARD_SQL, + enable_drop_protection=True, + reconciling=True, + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_database_admin.Database.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.get_database(request) + + # Establish that the response is the type that we expect. 
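+    # Every field set on return_value above should round-trip through the JSON
+    # payload onto the response object.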
+ assert isinstance(response, spanner_database_admin.Database) + assert response.name == "name_value" + assert response.state == spanner_database_admin.Database.State.CREATING + assert response.version_retention_period == "version_retention_period_value" + assert response.default_leader == "default_leader_value" + assert response.database_dialect == common.DatabaseDialect.GOOGLE_STANDARD_SQL + assert response.enable_drop_protection is True + assert response.reconciling is True + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_get_database_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_get_database" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_get_database_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_get_database" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner_database_admin.GetDatabaseRequest.pb( + spanner_database_admin.GetDatabaseRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = spanner_database_admin.Database.to_json( + spanner_database_admin.Database() + ) + req.return_value.content = return_value + + request = spanner_database_admin.GetDatabaseRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = spanner_database_admin.Database() + post_with_metadata.return_value = spanner_database_admin.Database(), metadata + + client.get_database( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_update_database_rest_bad_request( + request_type=spanner_database_admin.UpdateDatabaseRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = { + "database": {"name": "projects/sample1/instances/sample2/databases/sample3"} + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
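+    # google.api_core maps an HTTP 400 from the REST transport to
+    # core_exceptions.BadRequest, which pytest.raises expects below.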
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.update_database(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_database_admin.UpdateDatabaseRequest, + dict, + ], +) +def test_update_database_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = { + "database": {"name": "projects/sample1/instances/sample2/databases/sample3"} + } + request_init["database"] = { + "name": "projects/sample1/instances/sample2/databases/sample3", + "state": 1, + "create_time": {"seconds": 751, "nanos": 543}, + "restore_info": { + "source_type": 1, + "backup_info": { + "backup": "backup_value", + "version_time": {}, + "create_time": {}, + "source_database": "source_database_value", + }, + }, + "encryption_config": { + "kms_key_name": "kms_key_name_value", + "kms_key_names": ["kms_key_names_value1", "kms_key_names_value2"], + }, + "encryption_info": [ + { + "encryption_type": 1, + "encryption_status": { + "code": 411, + "message": "message_value", + "details": [ + { + "type_url": "type.googleapis.com/google.protobuf.Duration", + "value": b"\x08\x0c\x10\xdb\x07", + } + ], + }, + "kms_key_version": "kms_key_version_value", + } + ], + "version_retention_period": "version_retention_period_value", + "earliest_version_time": {}, + "default_leader": "default_leader_value", + "database_dialect": 1, + "enable_drop_protection": True, + "reconciling": True, + } + # The version of a generated dependency at test runtime may differ from the version used during generation. + # Delete any fields which are not present in the current runtime dependency + # See https://github.com/googleapis/gapic-generator-python/issues/1748 + + # Determine if the message type is proto-plus or protobuf + test_field = spanner_database_admin.UpdateDatabaseRequest.meta.fields["database"] + + def get_message_fields(field): + # Given a field which is a message (composite type), return a list with + # all the fields of the message. + # If the field is not a composite type, return an empty list. 
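+        # A proto-plus message type exposes its fields through
+        # `meta.fields`, while a vanilla protobuf type exposes them through
+        # `DESCRIPTOR.fields`; normalizing over both lets the pruning loop
+        # below treat either kind of nested message uniformly.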
+ message_fields = [] + + if hasattr(field, "message") and field.message: + is_field_type_proto_plus_type = not hasattr(field.message, "DESCRIPTOR") + + if is_field_type_proto_plus_type: + message_fields = field.message.meta.fields.values() + # Add `# pragma: NO COVER` because there may not be any `*_pb2` field types + else: # pragma: NO COVER + message_fields = field.message.DESCRIPTOR.fields + return message_fields + + runtime_nested_fields = [ + (field.name, nested_field.name) + for field in get_message_fields(test_field) + for nested_field in get_message_fields(field) + ] + + subfields_not_in_runtime = [] + + # For each item in the sample request, create a list of sub fields which are not present at runtime + # Add `# pragma: NO COVER` because this test code will not run if all subfields are present at runtime + for field, value in request_init["database"].items(): # pragma: NO COVER + result = None + is_repeated = False + # For repeated fields + if isinstance(value, list) and len(value): + is_repeated = True + result = value[0] + # For fields where the type is another message + if isinstance(value, dict): + result = value + + if result and hasattr(result, "keys"): + for subfield in result.keys(): + if (field, subfield) not in runtime_nested_fields: + subfields_not_in_runtime.append( + { + "field": field, + "subfield": subfield, + "is_repeated": is_repeated, + } + ) + + # Remove fields from the sample request which are not present in the runtime version of the dependency + # Add `# pragma: NO COVER` because this test code will not run if all subfields are present at runtime + for subfield_to_delete in subfields_not_in_runtime: # pragma: NO COVER + field = subfield_to_delete.get("field") + field_repeated = subfield_to_delete.get("is_repeated") + subfield = subfield_to_delete.get("subfield") + if subfield: + if field_repeated: + for i in range(0, len(request_init["database"][field])): + del request_init["database"][field][i][subfield] + else: + del request_init["database"][field][subfield] + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.update_database(request) + + # Establish that the response is the type that we expect. 
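+        # The method returns a google.api_core.operation.Operation future;
+        # assuming it exposes the wrapped longrunning proto via its
+        # `operation` property, the operation name can be checked directly.
+        assert response.operation.name == "operations/spam"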
+ json_return_value = json_format.MessageToJson(return_value) + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_update_database_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + operation.Operation, "_set_result_from_operation" + ), mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_update_database" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_update_database_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_update_database" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner_database_admin.UpdateDatabaseRequest.pb( + spanner_database_admin.UpdateDatabaseRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = json_format.MessageToJson(operations_pb2.Operation()) + req.return_value.content = return_value + + request = spanner_database_admin.UpdateDatabaseRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = operations_pb2.Operation() + post_with_metadata.return_value = operations_pb2.Operation(), metadata + + client.update_database( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_update_database_ddl_rest_bad_request( + request_type=spanner_database_admin.UpdateDatabaseDdlRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"database": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.update_database_ddl(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_database_admin.UpdateDatabaseDdlRequest, + dict, + ], +) +def test_update_database_ddl_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"database": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.update_database_ddl(request) + + # Establish that the response is the type that we expect. + json_return_value = json_format.MessageToJson(return_value) + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_update_database_ddl_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + operation.Operation, "_set_result_from_operation" + ), mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_update_database_ddl" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, + "post_update_database_ddl_with_metadata", + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_update_database_ddl" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner_database_admin.UpdateDatabaseDdlRequest.pb( + spanner_database_admin.UpdateDatabaseDdlRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = json_format.MessageToJson(operations_pb2.Operation()) + req.return_value.content = return_value + + request = spanner_database_admin.UpdateDatabaseDdlRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = operations_pb2.Operation() + post_with_metadata.return_value = 
operations_pb2.Operation(), metadata + + client.update_database_ddl( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_drop_database_rest_bad_request( + request_type=spanner_database_admin.DropDatabaseRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"database": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.drop_database(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_database_admin.DropDatabaseRequest, + dict, + ], +) +def test_drop_database_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"database": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = None + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = "" + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.drop_database(request) + + # Establish that the response is the type that we expect. 
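+        # DropDatabase returns google.protobuf.Empty, which the client
+        # surfaces to the caller as None.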
+ assert response is None + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_drop_database_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_drop_database" + ) as pre: + pre.assert_not_called() + pb_message = spanner_database_admin.DropDatabaseRequest.pb( + spanner_database_admin.DropDatabaseRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + request = spanner_database_admin.DropDatabaseRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + + client.drop_database( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + + +def test_get_database_ddl_rest_bad_request( + request_type=spanner_database_admin.GetDatabaseDdlRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"database": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.get_database_ddl(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_database_admin.GetDatabaseDdlRequest, + dict, + ], +) +def test_get_database_ddl_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"database": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. 
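+        # GetDatabaseDdlResponse is a proto-plus message, so it is converted
+        # with .pb() further below before json_format can serialize it.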
+ return_value = spanner_database_admin.GetDatabaseDdlResponse( + statements=["statements_value"], + proto_descriptors=b"proto_descriptors_blob", + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_database_admin.GetDatabaseDdlResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.get_database_ddl(request) + + # Establish that the response is the type that we expect. + assert isinstance(response, spanner_database_admin.GetDatabaseDdlResponse) + assert response.statements == ["statements_value"] + assert response.proto_descriptors == b"proto_descriptors_blob" + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_get_database_ddl_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_get_database_ddl" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_get_database_ddl_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_get_database_ddl" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner_database_admin.GetDatabaseDdlRequest.pb( + spanner_database_admin.GetDatabaseDdlRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = spanner_database_admin.GetDatabaseDdlResponse.to_json( + spanner_database_admin.GetDatabaseDdlResponse() + ) + req.return_value.content = return_value + + request = spanner_database_admin.GetDatabaseDdlRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = spanner_database_admin.GetDatabaseDdlResponse() + post_with_metadata.return_value = ( + spanner_database_admin.GetDatabaseDdlResponse(), + metadata, + ) + + client.get_database_ddl( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_set_iam_policy_rest_bad_request( + request_type=iam_policy_pb2.SetIamPolicyRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"resource": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.set_iam_policy(request) + + +@pytest.mark.parametrize( + "request_type", + [ + iam_policy_pb2.SetIamPolicyRequest, + dict, + ], +) +def test_set_iam_policy_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"resource": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = policy_pb2.Policy( + version=774, + etag=b"etag_blob", + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.set_iam_policy(request) + + # Establish that the response is the type that we expect. + assert isinstance(response, policy_pb2.Policy) + assert response.version == 774 + assert response.etag == b"etag_blob" + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_set_iam_policy_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_set_iam_policy" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_set_iam_policy_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_set_iam_policy" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = iam_policy_pb2.SetIamPolicyRequest() + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = json_format.MessageToJson(policy_pb2.Policy()) + req.return_value.content = return_value + + request = iam_policy_pb2.SetIamPolicyRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = policy_pb2.Policy() + post_with_metadata.return_value = policy_pb2.Policy(), metadata + + client.set_iam_policy( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() 
+ post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_get_iam_policy_rest_bad_request( + request_type=iam_policy_pb2.GetIamPolicyRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"resource": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.get_iam_policy(request) + + +@pytest.mark.parametrize( + "request_type", + [ + iam_policy_pb2.GetIamPolicyRequest, + dict, + ], +) +def test_get_iam_policy_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"resource": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = policy_pb2.Policy( + version=774, + etag=b"etag_blob", + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.get_iam_policy(request) + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, policy_pb2.Policy) + assert response.version == 774 + assert response.etag == b"etag_blob" + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_get_iam_policy_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_get_iam_policy" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_get_iam_policy_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_get_iam_policy" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = iam_policy_pb2.GetIamPolicyRequest() + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = json_format.MessageToJson(policy_pb2.Policy()) + req.return_value.content = return_value + + request = iam_policy_pb2.GetIamPolicyRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = policy_pb2.Policy() + post_with_metadata.return_value = policy_pb2.Policy(), metadata + + client.get_iam_policy( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_test_iam_permissions_rest_bad_request( + request_type=iam_policy_pb2.TestIamPermissionsRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"resource": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.test_iam_permissions(request) + + +@pytest.mark.parametrize( + "request_type", + [ + iam_policy_pb2.TestIamPermissionsRequest, + dict, + ], +) +def test_test_iam_permissions_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"resource": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. 
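+    # iam_policy_pb2 messages are plain protobuf rather than proto-plus, so
+    # no .pb() conversion is needed before serializing the fake response.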
+ with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = iam_policy_pb2.TestIamPermissionsResponse( + permissions=["permissions_value"], + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.test_iam_permissions(request) + + # Establish that the response is the type that we expect. + assert isinstance(response, iam_policy_pb2.TestIamPermissionsResponse) + assert response.permissions == ["permissions_value"] + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_test_iam_permissions_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_test_iam_permissions" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, + "post_test_iam_permissions_with_metadata", + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_test_iam_permissions" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = iam_policy_pb2.TestIamPermissionsRequest() + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = json_format.MessageToJson( + iam_policy_pb2.TestIamPermissionsResponse() + ) + req.return_value.content = return_value + + request = iam_policy_pb2.TestIamPermissionsRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = iam_policy_pb2.TestIamPermissionsResponse() + post_with_metadata.return_value = ( + iam_policy_pb2.TestIamPermissionsResponse(), + metadata, + ) + + client.test_iam_permissions( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_create_backup_rest_bad_request(request_type=gsad_backup.CreateBackupRequest): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.create_backup(request) + + +@pytest.mark.parametrize( + "request_type", + [ + gsad_backup.CreateBackupRequest, + dict, + ], +) +def test_create_backup_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2"} + request_init["backup"] = { + "database": "database_value", + "version_time": {"seconds": 751, "nanos": 543}, + "expire_time": {}, + "name": "name_value", + "create_time": {}, + "size_bytes": 1089, + "freeable_size_bytes": 2006, + "exclusive_size_bytes": 2168, + "state": 1, + "referencing_databases": [ + "referencing_databases_value1", + "referencing_databases_value2", + ], + "encryption_info": { + "encryption_type": 1, + "encryption_status": { + "code": 411, + "message": "message_value", + "details": [ + { + "type_url": "type.googleapis.com/google.protobuf.Duration", + "value": b"\x08\x0c\x10\xdb\x07", + } + ], + }, + "kms_key_version": "kms_key_version_value", + }, + "encryption_information": {}, + "database_dialect": 1, + "referencing_backups": [ + "referencing_backups_value1", + "referencing_backups_value2", + ], + "max_expire_time": {}, + "backup_schedules": ["backup_schedules_value1", "backup_schedules_value2"], + "incremental_backup_chain_id": "incremental_backup_chain_id_value", + "oldest_version_time": {}, + "instance_partitions": [{"instance_partition": "instance_partition_value"}], + } + # The version of a generated dependency at test runtime may differ from the version used during generation. + # Delete any fields which are not present in the current runtime dependency + # See https://github.com/googleapis/gapic-generator-python/issues/1748 + + # Determine if the message type is proto-plus or protobuf + test_field = gsad_backup.CreateBackupRequest.meta.fields["backup"] + + def get_message_fields(field): + # Given a field which is a message (composite type), return a list with + # all the fields of the message. + # If the field is not a composite type, return an empty list. 
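+        # (Same proto-plus / protobuf normalization as in
+        # test_update_database_rest_call_success above.)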
+ message_fields = [] + + if hasattr(field, "message") and field.message: + is_field_type_proto_plus_type = not hasattr(field.message, "DESCRIPTOR") + + if is_field_type_proto_plus_type: + message_fields = field.message.meta.fields.values() + # Add `# pragma: NO COVER` because there may not be any `*_pb2` field types + else: # pragma: NO COVER + message_fields = field.message.DESCRIPTOR.fields + return message_fields + + runtime_nested_fields = [ + (field.name, nested_field.name) + for field in get_message_fields(test_field) + for nested_field in get_message_fields(field) + ] + + subfields_not_in_runtime = [] + + # For each item in the sample request, create a list of sub fields which are not present at runtime + # Add `# pragma: NO COVER` because this test code will not run if all subfields are present at runtime + for field, value in request_init["backup"].items(): # pragma: NO COVER + result = None + is_repeated = False + # For repeated fields + if isinstance(value, list) and len(value): + is_repeated = True + result = value[0] + # For fields where the type is another message + if isinstance(value, dict): + result = value + + if result and hasattr(result, "keys"): + for subfield in result.keys(): + if (field, subfield) not in runtime_nested_fields: + subfields_not_in_runtime.append( + { + "field": field, + "subfield": subfield, + "is_repeated": is_repeated, + } + ) + + # Remove fields from the sample request which are not present in the runtime version of the dependency + # Add `# pragma: NO COVER` because this test code will not run if all subfields are present at runtime + for subfield_to_delete in subfields_not_in_runtime: # pragma: NO COVER + field = subfield_to_delete.get("field") + field_repeated = subfield_to_delete.get("is_repeated") + subfield = subfield_to_delete.get("subfield") + if subfield: + if field_repeated: + for i in range(0, len(request_init["backup"][field])): + del request_init["backup"][field][i][subfield] + else: + del request_init["backup"][field][subfield] + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.create_backup(request) + + # Establish that the response is the type that we expect. 
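+        # As with update_database above, the wrapped longrunning proto is
+        # reachable through the Operation future's `operation` property.
+        assert response.operation.name == "operations/spam"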
+ json_return_value = json_format.MessageToJson(return_value) + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_create_backup_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + operation.Operation, "_set_result_from_operation" + ), mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_create_backup" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_create_backup_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_create_backup" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = gsad_backup.CreateBackupRequest.pb( + gsad_backup.CreateBackupRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = json_format.MessageToJson(operations_pb2.Operation()) + req.return_value.content = return_value + + request = gsad_backup.CreateBackupRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = operations_pb2.Operation() + post_with_metadata.return_value = operations_pb2.Operation(), metadata + + client.create_backup( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_copy_backup_rest_bad_request(request_type=backup.CopyBackupRequest): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.copy_backup(request) + + +@pytest.mark.parametrize( + "request_type", + [ + backup.CopyBackupRequest, + dict, + ], +) +def test_copy_backup_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. 
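+        # CopyBackup is long-running, so the REST layer hands back a raw
+        # longrunning Operation that the client wraps in a future.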
+ return_value = operations_pb2.Operation(name="operations/spam") + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.copy_backup(request) + + # Establish that the response is the type that we expect. + json_return_value = json_format.MessageToJson(return_value) + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_copy_backup_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + operation.Operation, "_set_result_from_operation" + ), mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_copy_backup" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_copy_backup_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_copy_backup" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = backup.CopyBackupRequest.pb(backup.CopyBackupRequest()) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = json_format.MessageToJson(operations_pb2.Operation()) + req.return_value.content = return_value + + request = backup.CopyBackupRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = operations_pb2.Operation() + post_with_metadata.return_value = operations_pb2.Operation(), metadata + + client.copy_backup( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_get_backup_rest_bad_request(request_type=backup.GetBackupRequest): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"name": "projects/sample1/instances/sample2/backups/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.get_backup(request) + + +@pytest.mark.parametrize( + "request_type", + [ + backup.GetBackupRequest, + dict, + ], +) +def test_get_backup_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"name": "projects/sample1/instances/sample2/backups/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = backup.Backup( + database="database_value", + name="name_value", + size_bytes=1089, + freeable_size_bytes=2006, + exclusive_size_bytes=2168, + state=backup.Backup.State.CREATING, + referencing_databases=["referencing_databases_value"], + database_dialect=common.DatabaseDialect.GOOGLE_STANDARD_SQL, + referencing_backups=["referencing_backups_value"], + backup_schedules=["backup_schedules_value"], + incremental_backup_chain_id="incremental_backup_chain_id_value", + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = backup.Backup.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.get_backup(request) + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, backup.Backup) + assert response.database == "database_value" + assert response.name == "name_value" + assert response.size_bytes == 1089 + assert response.freeable_size_bytes == 2006 + assert response.exclusive_size_bytes == 2168 + assert response.state == backup.Backup.State.CREATING + assert response.referencing_databases == ["referencing_databases_value"] + assert response.database_dialect == common.DatabaseDialect.GOOGLE_STANDARD_SQL + assert response.referencing_backups == ["referencing_backups_value"] + assert response.backup_schedules == ["backup_schedules_value"] + assert response.incremental_backup_chain_id == "incremental_backup_chain_id_value" + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_get_backup_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_get_backup" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_get_backup_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_get_backup" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = backup.GetBackupRequest.pb(backup.GetBackupRequest()) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = backup.Backup.to_json(backup.Backup()) + req.return_value.content = return_value + + request = backup.GetBackupRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = backup.Backup() + post_with_metadata.return_value = backup.Backup(), metadata + + client.get_backup( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_update_backup_rest_bad_request(request_type=gsad_backup.UpdateBackupRequest): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = { + "backup": {"name": "projects/sample1/instances/sample2/backups/sample3"} + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.update_backup(request) + + +@pytest.mark.parametrize( + "request_type", + [ + gsad_backup.UpdateBackupRequest, + dict, + ], +) +def test_update_backup_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = { + "backup": {"name": "projects/sample1/instances/sample2/backups/sample3"} + } + request_init["backup"] = { + "database": "database_value", + "version_time": {"seconds": 751, "nanos": 543}, + "expire_time": {}, + "name": "projects/sample1/instances/sample2/backups/sample3", + "create_time": {}, + "size_bytes": 1089, + "freeable_size_bytes": 2006, + "exclusive_size_bytes": 2168, + "state": 1, + "referencing_databases": [ + "referencing_databases_value1", + "referencing_databases_value2", + ], + "encryption_info": { + "encryption_type": 1, + "encryption_status": { + "code": 411, + "message": "message_value", + "details": [ + { + "type_url": "type.googleapis.com/google.protobuf.Duration", + "value": b"\x08\x0c\x10\xdb\x07", + } + ], + }, + "kms_key_version": "kms_key_version_value", + }, + "encryption_information": {}, + "database_dialect": 1, + "referencing_backups": [ + "referencing_backups_value1", + "referencing_backups_value2", + ], + "max_expire_time": {}, + "backup_schedules": ["backup_schedules_value1", "backup_schedules_value2"], + "incremental_backup_chain_id": "incremental_backup_chain_id_value", + "oldest_version_time": {}, + "instance_partitions": [{"instance_partition": "instance_partition_value"}], + } + # The version of a generated dependency at test runtime may differ from the version used during generation. + # Delete any fields which are not present in the current runtime dependency + # See https://github.com/googleapis/gapic-generator-python/issues/1748 + + # Determine if the message type is proto-plus or protobuf + test_field = gsad_backup.UpdateBackupRequest.meta.fields["backup"] + + def get_message_fields(field): + # Given a field which is a message (composite type), return a list with + # all the fields of the message. + # If the field is not a composite type, return an empty list. 
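+        # (Same proto-plus / protobuf normalization as in the earlier
+        # *_rest_call_success tests.)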
+ message_fields = [] + + if hasattr(field, "message") and field.message: + is_field_type_proto_plus_type = not hasattr(field.message, "DESCRIPTOR") + + if is_field_type_proto_plus_type: + message_fields = field.message.meta.fields.values() + # Add `# pragma: NO COVER` because there may not be any `*_pb2` field types + else: # pragma: NO COVER + message_fields = field.message.DESCRIPTOR.fields + return message_fields + + runtime_nested_fields = [ + (field.name, nested_field.name) + for field in get_message_fields(test_field) + for nested_field in get_message_fields(field) + ] + + subfields_not_in_runtime = [] + + # For each item in the sample request, create a list of sub fields which are not present at runtime + # Add `# pragma: NO COVER` because this test code will not run if all subfields are present at runtime + for field, value in request_init["backup"].items(): # pragma: NO COVER + result = None + is_repeated = False + # For repeated fields + if isinstance(value, list) and len(value): + is_repeated = True + result = value[0] + # For fields where the type is another message + if isinstance(value, dict): + result = value + + if result and hasattr(result, "keys"): + for subfield in result.keys(): + if (field, subfield) not in runtime_nested_fields: + subfields_not_in_runtime.append( + { + "field": field, + "subfield": subfield, + "is_repeated": is_repeated, + } + ) + + # Remove fields from the sample request which are not present in the runtime version of the dependency + # Add `# pragma: NO COVER` because this test code will not run if all subfields are present at runtime + for subfield_to_delete in subfields_not_in_runtime: # pragma: NO COVER + field = subfield_to_delete.get("field") + field_repeated = subfield_to_delete.get("is_repeated") + subfield = subfield_to_delete.get("subfield") + if subfield: + if field_repeated: + for i in range(0, len(request_init["backup"][field])): + del request_init["backup"][field][i][subfield] + else: + del request_init["backup"][field][subfield] + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = gsad_backup.Backup( + database="database_value", + name="name_value", + size_bytes=1089, + freeable_size_bytes=2006, + exclusive_size_bytes=2168, + state=gsad_backup.Backup.State.CREATING, + referencing_databases=["referencing_databases_value"], + database_dialect=common.DatabaseDialect.GOOGLE_STANDARD_SQL, + referencing_backups=["referencing_backups_value"], + backup_schedules=["backup_schedules_value"], + incremental_backup_chain_id="incremental_backup_chain_id_value", + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = gsad_backup.Backup.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.update_backup(request) + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, gsad_backup.Backup) + assert response.database == "database_value" + assert response.name == "name_value" + assert response.size_bytes == 1089 + assert response.freeable_size_bytes == 2006 + assert response.exclusive_size_bytes == 2168 + assert response.state == gsad_backup.Backup.State.CREATING + assert response.referencing_databases == ["referencing_databases_value"] + assert response.database_dialect == common.DatabaseDialect.GOOGLE_STANDARD_SQL + assert response.referencing_backups == ["referencing_backups_value"] + assert response.backup_schedules == ["backup_schedules_value"] + assert response.incremental_backup_chain_id == "incremental_backup_chain_id_value" + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_update_backup_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_update_backup" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_update_backup_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_update_backup" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = gsad_backup.UpdateBackupRequest.pb( + gsad_backup.UpdateBackupRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = gsad_backup.Backup.to_json(gsad_backup.Backup()) + req.return_value.content = return_value + + request = gsad_backup.UpdateBackupRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = gsad_backup.Backup() + post_with_metadata.return_value = gsad_backup.Backup(), metadata + + client.update_backup( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_delete_backup_rest_bad_request(request_type=backup.DeleteBackupRequest): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"name": "projects/sample1/instances/sample2/backups/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
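+    # The mocked Session.request returns HTTP 400 with an empty JSON body, which
+    # api-core surfaces as core_exceptions.BadRequest inside pytest.raises.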
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.delete_backup(request) + + +@pytest.mark.parametrize( + "request_type", + [ + backup.DeleteBackupRequest, + dict, + ], +) +def test_delete_backup_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"name": "projects/sample1/instances/sample2/backups/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = None + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = "" + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.delete_backup(request) + + # Establish that the response is the type that we expect. + assert response is None + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_delete_backup_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_delete_backup" + ) as pre: + pre.assert_not_called() + pb_message = backup.DeleteBackupRequest.pb(backup.DeleteBackupRequest()) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + request = backup.DeleteBackupRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + + client.delete_backup( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + + +def test_list_backups_rest_bad_request(request_type=backup.ListBackupsRequest): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.list_backups(request) + + +@pytest.mark.parametrize( + "request_type", + [ + backup.ListBackupsRequest, + dict, + ], +) +def test_list_backups_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = backup.ListBackupsResponse( + next_page_token="next_page_token_value", + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = backup.ListBackupsResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.list_backups(request) + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListBackupsPager) + assert response.next_page_token == "next_page_token_value" + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_list_backups_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_list_backups" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_list_backups_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_list_backups" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = backup.ListBackupsRequest.pb(backup.ListBackupsRequest()) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = backup.ListBackupsResponse.to_json(backup.ListBackupsResponse()) + req.return_value.content = return_value + + request = backup.ListBackupsRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = backup.ListBackupsResponse() + post_with_metadata.return_value = backup.ListBackupsResponse(), metadata + + 
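+        # Invoking the method should fire each interceptor hook exactly once.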
+        client.list_backups(
+            request,
+            metadata=[
+                ("key", "val"),
+                ("cephalopod", "squid"),
+            ],
+        )
+
+        pre.assert_called_once()
+        post.assert_called_once()
+        post_with_metadata.assert_called_once()
+
+
+def test_restore_database_rest_bad_request(
+    request_type=spanner_database_admin.RestoreDatabaseRequest,
+):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(), transport="rest"
+    )
+    # send a request that will satisfy transcoding
+    request_init = {"parent": "projects/sample1/instances/sample2"}
+    request = request_type(**request_init)
+
+    # Mock the http request call within the method and fake a BadRequest error.
+    with mock.patch.object(Session, "request") as req, pytest.raises(
+        core_exceptions.BadRequest
+    ):
+        # Wrap the value into a proper Response obj
+        response_value = mock.Mock()
+        json_return_value = ""
+        response_value.json = mock.Mock(return_value={})
+        response_value.status_code = 400
+        response_value.request = mock.Mock()
+        req.return_value = response_value
+        req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+        client.restore_database(request)
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner_database_admin.RestoreDatabaseRequest,
+        dict,
+    ],
+)
+def test_restore_database_rest_call_success(request_type):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(), transport="rest"
+    )
+
+    # send a request that will satisfy transcoding
+    request_init = {"parent": "projects/sample1/instances/sample2"}
+    request = request_type(**request_init)
+
+    # Mock the http request call within the method and fake a response.
+    with mock.patch.object(type(client.transport._session), "request") as req:
+        # Designate an appropriate value for the returned response.
+        return_value = operations_pb2.Operation(name="operations/spam")
+
+        # Wrap the value into a proper Response obj
+        response_value = mock.Mock()
+        response_value.status_code = 200
+        json_return_value = json_format.MessageToJson(return_value)
+        response_value.content = json_return_value.encode("UTF-8")
+        req.return_value = response_value
+        req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+        response = client.restore_database(request)
+
+    # Establish that the response is the type that we expect.
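+    # restore_database is a long-running operation; the mocked transport returned
+    # Operation(name="operations/spam"), so the returned future should wrap it.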
+    assert response.operation.name == "operations/spam"
+
+
+@pytest.mark.parametrize("null_interceptor", [True, False])
+def test_restore_database_rest_interceptors(null_interceptor):
+    transport = transports.DatabaseAdminRestTransport(
+        credentials=ga_credentials.AnonymousCredentials(),
+        interceptor=None
+        if null_interceptor
+        else transports.DatabaseAdminRestInterceptor(),
+    )
+    client = DatabaseAdminClient(transport=transport)
+
+    with mock.patch.object(
+        type(client.transport._session), "request"
+    ) as req, mock.patch.object(
+        path_template, "transcode"
+    ) as transcode, mock.patch.object(
+        operation.Operation, "_set_result_from_operation"
+    ), mock.patch.object(
+        transports.DatabaseAdminRestInterceptor, "post_restore_database"
+    ) as post, mock.patch.object(
+        transports.DatabaseAdminRestInterceptor, "post_restore_database_with_metadata"
+    ) as post_with_metadata, mock.patch.object(
+        transports.DatabaseAdminRestInterceptor, "pre_restore_database"
+    ) as pre:
+        pre.assert_not_called()
+        post.assert_not_called()
+        post_with_metadata.assert_not_called()
+        pb_message = spanner_database_admin.RestoreDatabaseRequest.pb(
+            spanner_database_admin.RestoreDatabaseRequest()
+        )
+        transcode.return_value = {
+            "method": "post",
+            "uri": "my_uri",
+            "body": pb_message,
+            "query_params": pb_message,
+        }
+
+        req.return_value = mock.Mock()
+        req.return_value.status_code = 200
+        req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+        return_value = json_format.MessageToJson(operations_pb2.Operation())
+        req.return_value.content = return_value
+
+        request = spanner_database_admin.RestoreDatabaseRequest()
+        metadata = [
+            ("key", "val"),
+            ("cephalopod", "squid"),
+        ]
+        pre.return_value = request, metadata
+        post.return_value = operations_pb2.Operation()
+        post_with_metadata.return_value = operations_pb2.Operation(), metadata
+
+        client.restore_database(
+            request,
+            metadata=[
+                ("key", "val"),
+                ("cephalopod", "squid"),
+            ],
+        )
+
+        pre.assert_called_once()
+        post.assert_called_once()
+        post_with_metadata.assert_called_once()
+
+
+def test_list_database_operations_rest_bad_request(
+    request_type=spanner_database_admin.ListDatabaseOperationsRequest,
+):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(), transport="rest"
+    )
+    # send a request that will satisfy transcoding
+    request_init = {"parent": "projects/sample1/instances/sample2"}
+    request = request_type(**request_init)
+
+    # Mock the http request call within the method and fake a BadRequest error.
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.list_database_operations(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_database_admin.ListDatabaseOperationsRequest, + dict, + ], +) +def test_list_database_operations_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_database_admin.ListDatabaseOperationsResponse( + next_page_token="next_page_token_value", + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_database_admin.ListDatabaseOperationsResponse.pb( + return_value + ) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.list_database_operations(request) + + # Establish that the response is the type that we expect. 
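+    # list_* methods wrap the raw response in a pager, which proxies attributes
+    # such as next_page_token from the underlying response message.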
+ assert isinstance(response, pagers.ListDatabaseOperationsPager) + assert response.next_page_token == "next_page_token_value" + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_list_database_operations_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_list_database_operations" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, + "post_list_database_operations_with_metadata", + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_list_database_operations" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner_database_admin.ListDatabaseOperationsRequest.pb( + spanner_database_admin.ListDatabaseOperationsRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = spanner_database_admin.ListDatabaseOperationsResponse.to_json( + spanner_database_admin.ListDatabaseOperationsResponse() + ) + req.return_value.content = return_value + + request = spanner_database_admin.ListDatabaseOperationsRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = spanner_database_admin.ListDatabaseOperationsResponse() + post_with_metadata.return_value = ( + spanner_database_admin.ListDatabaseOperationsResponse(), + metadata, + ) + + client.list_database_operations( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_list_backup_operations_rest_bad_request( + request_type=backup.ListBackupOperationsRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.list_backup_operations(request) + + +@pytest.mark.parametrize( + "request_type", + [ + backup.ListBackupOperationsRequest, + dict, + ], +) +def test_list_backup_operations_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = backup.ListBackupOperationsResponse( + next_page_token="next_page_token_value", + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = backup.ListBackupOperationsResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.list_backup_operations(request) + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, pagers.ListBackupOperationsPager) + assert response.next_page_token == "next_page_token_value" + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_list_backup_operations_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_list_backup_operations" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, + "post_list_backup_operations_with_metadata", + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_list_backup_operations" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = backup.ListBackupOperationsRequest.pb( + backup.ListBackupOperationsRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = backup.ListBackupOperationsResponse.to_json( + backup.ListBackupOperationsResponse() + ) + req.return_value.content = return_value + + request = backup.ListBackupOperationsRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = backup.ListBackupOperationsResponse() + post_with_metadata.return_value = ( + backup.ListBackupOperationsResponse(), + metadata, + ) + + client.list_backup_operations( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_list_database_roles_rest_bad_request( + request_type=spanner_database_admin.ListDatabaseRolesRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.list_database_roles(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_database_admin.ListDatabaseRolesRequest, + dict, + ], +) +def test_list_database_roles_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_database_admin.ListDatabaseRolesResponse( + next_page_token="next_page_token_value", + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_database_admin.ListDatabaseRolesResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.list_database_roles(request) + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, pagers.ListDatabaseRolesPager) + assert response.next_page_token == "next_page_token_value" + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_list_database_roles_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_list_database_roles" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, + "post_list_database_roles_with_metadata", + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_list_database_roles" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner_database_admin.ListDatabaseRolesRequest.pb( + spanner_database_admin.ListDatabaseRolesRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = spanner_database_admin.ListDatabaseRolesResponse.to_json( + spanner_database_admin.ListDatabaseRolesResponse() + ) + req.return_value.content = return_value + + request = spanner_database_admin.ListDatabaseRolesRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = spanner_database_admin.ListDatabaseRolesResponse() + post_with_metadata.return_value = ( + spanner_database_admin.ListDatabaseRolesResponse(), + metadata, + ) + + client.list_database_roles( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_add_split_points_rest_bad_request( + request_type=spanner_database_admin.AddSplitPointsRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"database": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.add_split_points(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_database_admin.AddSplitPointsRequest, + dict, + ], +) +def test_add_split_points_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"database": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_database_admin.AddSplitPointsResponse() + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_database_admin.AddSplitPointsResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.add_split_points(request) + + # Establish that the response is the type that we expect. 
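+    # The designated AddSplitPointsResponse carries no populated fields, so only
+    # the response type is asserted.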
+ assert isinstance(response, spanner_database_admin.AddSplitPointsResponse) + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_add_split_points_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_add_split_points" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_add_split_points_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_add_split_points" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner_database_admin.AddSplitPointsRequest.pb( + spanner_database_admin.AddSplitPointsRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = spanner_database_admin.AddSplitPointsResponse.to_json( + spanner_database_admin.AddSplitPointsResponse() + ) + req.return_value.content = return_value + + request = spanner_database_admin.AddSplitPointsRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = spanner_database_admin.AddSplitPointsResponse() + post_with_metadata.return_value = ( + spanner_database_admin.AddSplitPointsResponse(), + metadata, + ) + + client.add_split_points( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_create_backup_schedule_rest_bad_request( + request_type=gsad_backup_schedule.CreateBackupScheduleRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.create_backup_schedule(request) + + +@pytest.mark.parametrize( + "request_type", + [ + gsad_backup_schedule.CreateBackupScheduleRequest, + dict, + ], +) +def test_create_backup_schedule_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2/databases/sample3"} + request_init["backup_schedule"] = { + "name": "name_value", + "spec": { + "cron_spec": { + "text": "text_value", + "time_zone": "time_zone_value", + "creation_window": {"seconds": 751, "nanos": 543}, + } + }, + "retention_duration": {}, + "encryption_config": { + "encryption_type": 1, + "kms_key_name": "kms_key_name_value", + "kms_key_names": ["kms_key_names_value1", "kms_key_names_value2"], + }, + "full_backup_spec": {}, + "incremental_backup_spec": {}, + "update_time": {"seconds": 751, "nanos": 543}, + } + # The version of a generated dependency at test runtime may differ from the version used during generation. + # Delete any fields which are not present in the current runtime dependency + # See https://github.com/googleapis/gapic-generator-python/issues/1748 + + # Determine if the message type is proto-plus or protobuf + test_field = gsad_backup_schedule.CreateBackupScheduleRequest.meta.fields[ + "backup_schedule" + ] + + def get_message_fields(field): + # Given a field which is a message (composite type), return a list with + # all the fields of the message. + # If the field is not a composite type, return an empty list. 
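+        # (Same helper as in test_update_backup_rest_call_success; each generated
+        # test redefines it so the test remains self-contained.)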
+ message_fields = [] + + if hasattr(field, "message") and field.message: + is_field_type_proto_plus_type = not hasattr(field.message, "DESCRIPTOR") + + if is_field_type_proto_plus_type: + message_fields = field.message.meta.fields.values() + # Add `# pragma: NO COVER` because there may not be any `*_pb2` field types + else: # pragma: NO COVER + message_fields = field.message.DESCRIPTOR.fields + return message_fields + + runtime_nested_fields = [ + (field.name, nested_field.name) + for field in get_message_fields(test_field) + for nested_field in get_message_fields(field) + ] + + subfields_not_in_runtime = [] + + # For each item in the sample request, create a list of sub fields which are not present at runtime + # Add `# pragma: NO COVER` because this test code will not run if all subfields are present at runtime + for field, value in request_init["backup_schedule"].items(): # pragma: NO COVER + result = None + is_repeated = False + # For repeated fields + if isinstance(value, list) and len(value): + is_repeated = True + result = value[0] + # For fields where the type is another message + if isinstance(value, dict): + result = value + + if result and hasattr(result, "keys"): + for subfield in result.keys(): + if (field, subfield) not in runtime_nested_fields: + subfields_not_in_runtime.append( + { + "field": field, + "subfield": subfield, + "is_repeated": is_repeated, + } + ) + + # Remove fields from the sample request which are not present in the runtime version of the dependency + # Add `# pragma: NO COVER` because this test code will not run if all subfields are present at runtime + for subfield_to_delete in subfields_not_in_runtime: # pragma: NO COVER + field = subfield_to_delete.get("field") + field_repeated = subfield_to_delete.get("is_repeated") + subfield = subfield_to_delete.get("subfield") + if subfield: + if field_repeated: + for i in range(0, len(request_init["backup_schedule"][field])): + del request_init["backup_schedule"][field][i][subfield] + else: + del request_init["backup_schedule"][field][subfield] + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = gsad_backup_schedule.BackupSchedule( + name="name_value", + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = gsad_backup_schedule.BackupSchedule.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.create_backup_schedule(request) + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, gsad_backup_schedule.BackupSchedule) + assert response.name == "name_value" + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_create_backup_schedule_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_create_backup_schedule" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, + "post_create_backup_schedule_with_metadata", + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_create_backup_schedule" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = gsad_backup_schedule.CreateBackupScheduleRequest.pb( + gsad_backup_schedule.CreateBackupScheduleRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = gsad_backup_schedule.BackupSchedule.to_json( + gsad_backup_schedule.BackupSchedule() + ) + req.return_value.content = return_value + + request = gsad_backup_schedule.CreateBackupScheduleRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = gsad_backup_schedule.BackupSchedule() + post_with_metadata.return_value = ( + gsad_backup_schedule.BackupSchedule(), + metadata, + ) + + client.create_backup_schedule( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_get_backup_schedule_rest_bad_request( + request_type=backup_schedule.GetBackupScheduleRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = { + "name": "projects/sample1/instances/sample2/databases/sample3/backupSchedules/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.get_backup_schedule(request) + + +@pytest.mark.parametrize( + "request_type", + [ + backup_schedule.GetBackupScheduleRequest, + dict, + ], +) +def test_get_backup_schedule_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = { + "name": "projects/sample1/instances/sample2/databases/sample3/backupSchedules/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = backup_schedule.BackupSchedule( + name="name_value", + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = backup_schedule.BackupSchedule.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.get_backup_schedule(request) + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, backup_schedule.BackupSchedule) + assert response.name == "name_value" + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_get_backup_schedule_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_get_backup_schedule" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, + "post_get_backup_schedule_with_metadata", + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_get_backup_schedule" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = backup_schedule.GetBackupScheduleRequest.pb( + backup_schedule.GetBackupScheduleRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = backup_schedule.BackupSchedule.to_json( + backup_schedule.BackupSchedule() + ) + req.return_value.content = return_value + + request = backup_schedule.GetBackupScheduleRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = backup_schedule.BackupSchedule() + post_with_metadata.return_value = backup_schedule.BackupSchedule(), metadata + + client.get_backup_schedule( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_update_backup_schedule_rest_bad_request( + request_type=gsad_backup_schedule.UpdateBackupScheduleRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = { + "backup_schedule": { + "name": "projects/sample1/instances/sample2/databases/sample3/backupSchedules/sample4" + } + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.update_backup_schedule(request) + + +@pytest.mark.parametrize( + "request_type", + [ + gsad_backup_schedule.UpdateBackupScheduleRequest, + dict, + ], +) +def test_update_backup_schedule_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = { + "backup_schedule": { + "name": "projects/sample1/instances/sample2/databases/sample3/backupSchedules/sample4" + } + } + request_init["backup_schedule"] = { + "name": "projects/sample1/instances/sample2/databases/sample3/backupSchedules/sample4", + "spec": { + "cron_spec": { + "text": "text_value", + "time_zone": "time_zone_value", + "creation_window": {"seconds": 751, "nanos": 543}, + } + }, + "retention_duration": {}, + "encryption_config": { + "encryption_type": 1, + "kms_key_name": "kms_key_name_value", + "kms_key_names": ["kms_key_names_value1", "kms_key_names_value2"], + }, + "full_backup_spec": {}, + "incremental_backup_spec": {}, + "update_time": {"seconds": 751, "nanos": 543}, + } + # The version of a generated dependency at test runtime may differ from the version used during generation. + # Delete any fields which are not present in the current runtime dependency + # See https://github.com/googleapis/gapic-generator-python/issues/1748 + + # Determine if the message type is proto-plus or protobuf + test_field = gsad_backup_schedule.UpdateBackupScheduleRequest.meta.fields[ + "backup_schedule" + ] + + def get_message_fields(field): + # Given a field which is a message (composite type), return a list with + # all the fields of the message. + # If the field is not a composite type, return an empty list. 
+ message_fields = [] + + if hasattr(field, "message") and field.message: + is_field_type_proto_plus_type = not hasattr(field.message, "DESCRIPTOR") + + if is_field_type_proto_plus_type: + message_fields = field.message.meta.fields.values() + # Add `# pragma: NO COVER` because there may not be any `*_pb2` field types + else: # pragma: NO COVER + message_fields = field.message.DESCRIPTOR.fields + return message_fields + + runtime_nested_fields = [ + (field.name, nested_field.name) + for field in get_message_fields(test_field) + for nested_field in get_message_fields(field) + ] + + subfields_not_in_runtime = [] + + # For each item in the sample request, create a list of sub fields which are not present at runtime + # Add `# pragma: NO COVER` because this test code will not run if all subfields are present at runtime + for field, value in request_init["backup_schedule"].items(): # pragma: NO COVER + result = None + is_repeated = False + # For repeated fields + if isinstance(value, list) and len(value): + is_repeated = True + result = value[0] + # For fields where the type is another message + if isinstance(value, dict): + result = value + + if result and hasattr(result, "keys"): + for subfield in result.keys(): + if (field, subfield) not in runtime_nested_fields: + subfields_not_in_runtime.append( + { + "field": field, + "subfield": subfield, + "is_repeated": is_repeated, + } + ) + + # Remove fields from the sample request which are not present in the runtime version of the dependency + # Add `# pragma: NO COVER` because this test code will not run if all subfields are present at runtime + for subfield_to_delete in subfields_not_in_runtime: # pragma: NO COVER + field = subfield_to_delete.get("field") + field_repeated = subfield_to_delete.get("is_repeated") + subfield = subfield_to_delete.get("subfield") + if subfield: + if field_repeated: + for i in range(0, len(request_init["backup_schedule"][field])): + del request_init["backup_schedule"][field][i][subfield] + else: + del request_init["backup_schedule"][field][subfield] + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = gsad_backup_schedule.BackupSchedule( + name="name_value", + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = gsad_backup_schedule.BackupSchedule.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.update_backup_schedule(request) + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, gsad_backup_schedule.BackupSchedule) + assert response.name == "name_value" + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_update_backup_schedule_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_update_backup_schedule" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, + "post_update_backup_schedule_with_metadata", + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_update_backup_schedule" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = gsad_backup_schedule.UpdateBackupScheduleRequest.pb( + gsad_backup_schedule.UpdateBackupScheduleRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = gsad_backup_schedule.BackupSchedule.to_json( + gsad_backup_schedule.BackupSchedule() + ) + req.return_value.content = return_value + + request = gsad_backup_schedule.UpdateBackupScheduleRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = gsad_backup_schedule.BackupSchedule() + post_with_metadata.return_value = ( + gsad_backup_schedule.BackupSchedule(), + metadata, + ) + + client.update_backup_schedule( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_delete_backup_schedule_rest_bad_request( + request_type=backup_schedule.DeleteBackupScheduleRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = { + "name": "projects/sample1/instances/sample2/databases/sample3/backupSchedules/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.delete_backup_schedule(request) + + +@pytest.mark.parametrize( + "request_type", + [ + backup_schedule.DeleteBackupScheduleRequest, + dict, + ], +) +def test_delete_backup_schedule_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = { + "name": "projects/sample1/instances/sample2/databases/sample3/backupSchedules/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = None + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = "" + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.delete_backup_schedule(request) + + # Establish that the response is the type that we expect. + assert response is None + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_delete_backup_schedule_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_delete_backup_schedule" + ) as pre: + pre.assert_not_called() + pb_message = backup_schedule.DeleteBackupScheduleRequest.pb( + backup_schedule.DeleteBackupScheduleRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + request = backup_schedule.DeleteBackupScheduleRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + + client.delete_backup_schedule( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + + +def test_list_backup_schedules_rest_bad_request( + request_type=backup_schedule.ListBackupSchedulesRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.list_backup_schedules(request) + + +@pytest.mark.parametrize( + "request_type", + [ + backup_schedule.ListBackupSchedulesRequest, + dict, + ], +) +def test_list_backup_schedules_rest_call_success(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = backup_schedule.ListBackupSchedulesResponse( + next_page_token="next_page_token_value", + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = backup_schedule.ListBackupSchedulesResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.list_backup_schedules(request) + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, pagers.ListBackupSchedulesPager) + assert response.next_page_token == "next_page_token_value" + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_list_backup_schedules_rest_interceptors(null_interceptor): + transport = transports.DatabaseAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.DatabaseAdminRestInterceptor(), + ) + client = DatabaseAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "post_list_backup_schedules" + ) as post, mock.patch.object( + transports.DatabaseAdminRestInterceptor, + "post_list_backup_schedules_with_metadata", + ) as post_with_metadata, mock.patch.object( + transports.DatabaseAdminRestInterceptor, "pre_list_backup_schedules" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = backup_schedule.ListBackupSchedulesRequest.pb( + backup_schedule.ListBackupSchedulesRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = backup_schedule.ListBackupSchedulesResponse.to_json( + backup_schedule.ListBackupSchedulesResponse() + ) + req.return_value.content = return_value + + request = backup_schedule.ListBackupSchedulesRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = backup_schedule.ListBackupSchedulesResponse() + post_with_metadata.return_value = ( + backup_schedule.ListBackupSchedulesResponse(), + metadata, + ) + + client.list_backup_schedules( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_internal_update_graph_operation_rest_error(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + with pytest.raises(NotImplementedError) as not_implemented_error: + client.internal_update_graph_operation({}) + assert ( + "Method InternalUpdateGraphOperation is not available over REST transport" + in str(not_implemented_error.value) + ) + + +def test_cancel_operation_rest_bad_request( + request_type=operations_pb2.CancelOperationRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type() + request = json_format.ParseDict( + { + "name": "projects/sample1/instances/sample2/databases/sample3/operations/sample4" + }, + request, + ) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = Response() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = Request() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.cancel_operation(request) + + +@pytest.mark.parametrize( + "request_type", + [ + operations_pb2.CancelOperationRequest, + dict, + ], +) +def test_cancel_operation_rest(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + request_init = { + "name": "projects/sample1/instances/sample2/databases/sample3/operations/sample4" + } + request = request_type(**request_init) + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # Designate an appropriate value for the returned response. + return_value = None + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = "{}" + response_value.content = json_return_value.encode("UTF-8") + + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.cancel_operation(request) + + # Establish that the response is the type that we expect. + assert response is None + + +def test_delete_operation_rest_bad_request( + request_type=operations_pb2.DeleteOperationRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type() + request = json_format.ParseDict( + { + "name": "projects/sample1/instances/sample2/databases/sample3/operations/sample4" + }, + request, + ) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = Response() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = Request() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.delete_operation(request) + + +@pytest.mark.parametrize( + "request_type", + [ + operations_pb2.DeleteOperationRequest, + dict, + ], +) +def test_delete_operation_rest(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + request_init = { + "name": "projects/sample1/instances/sample2/databases/sample3/operations/sample4" + } + request = request_type(**request_init) + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # Designate an appropriate value for the returned response. + return_value = None + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = "{}" + response_value.content = json_return_value.encode("UTF-8") + + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.delete_operation(request) + + # Establish that the response is the type that we expect. 
+ assert response is None + + +def test_get_operation_rest_bad_request( + request_type=operations_pb2.GetOperationRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type() + request = json_format.ParseDict( + { + "name": "projects/sample1/instances/sample2/databases/sample3/operations/sample4" + }, + request, + ) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = Response() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = Request() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.get_operation(request) + + +@pytest.mark.parametrize( + "request_type", + [ + operations_pb2.GetOperationRequest, + dict, + ], +) +def test_get_operation_rest(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + request_init = { + "name": "projects/sample1/instances/sample2/databases/sample3/operations/sample4" + } + request = request_type(**request_init) + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation() + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.get_operation(request) + + # Establish that the response is the type that we expect. + assert isinstance(response, operations_pb2.Operation) + + +def test_list_operations_rest_bad_request( + request_type=operations_pb2.ListOperationsRequest, +): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type() + request = json_format.ParseDict( + {"name": "projects/sample1/instances/sample2/databases/sample3/operations"}, + request, + ) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = Response() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = Request() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.list_operations(request) + + +@pytest.mark.parametrize( + "request_type", + [ + operations_pb2.ListOperationsRequest, + dict, + ], +) +def test_list_operations_rest(request_type): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + request_init = { + "name": "projects/sample1/instances/sample2/databases/sample3/operations" + } + request = request_type(**request_init) + # Mock the http request call within the method and fake a response. 
+ with mock.patch.object(Session, "request") as req: + # Designate an appropriate value for the returned response. + return_value = operations_pb2.ListOperationsResponse() + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.list_operations(request) + + # Establish that the response is the type that we expect. + assert isinstance(response, operations_pb2.ListOperationsResponse) + + +def test_initialize_client_w_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + assert client is not None + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_databases_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.list_databases), "__call__") as call: + client.list_databases(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.ListDatabasesRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_create_database_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.create_database), "__call__") as call: + client.create_database(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.CreateDatabaseRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_get_database_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.get_database), "__call__") as call: + client.get_database(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.GetDatabaseRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_update_database_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.update_database), "__call__") as call: + client.update_database(request=None) + + # Establish that the underlying stub method was called. 
+ call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.UpdateDatabaseRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_update_database_ddl_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.update_database_ddl), "__call__" + ) as call: + client.update_database_ddl(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.UpdateDatabaseDdlRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_drop_database_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.drop_database), "__call__") as call: + client.drop_database(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.DropDatabaseRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_get_database_ddl_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.get_database_ddl), "__call__") as call: + client.get_database_ddl(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.GetDatabaseDdlRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_set_iam_policy_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call: + client.set_iam_policy(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = iam_policy_pb2.SetIamPolicyRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_get_iam_policy_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call: + client.get_iam_policy(request=None) + + # Establish that the underlying stub method was called. 
+ call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = iam_policy_pb2.GetIamPolicyRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_test_iam_permissions_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.test_iam_permissions), "__call__" + ) as call: + client.test_iam_permissions(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = iam_policy_pb2.TestIamPermissionsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_create_backup_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.create_backup), "__call__") as call: + client.create_backup(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = gsad_backup.CreateBackupRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_copy_backup_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.copy_backup), "__call__") as call: + client.copy_backup(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup.CopyBackupRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_get_backup_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.get_backup), "__call__") as call: + client.get_backup(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup.GetBackupRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_update_backup_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.update_backup), "__call__") as call: + client.update_backup(request=None) + + # Establish that the underlying stub method was called. 
+ call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = gsad_backup.UpdateBackupRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_delete_backup_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.delete_backup), "__call__") as call: + client.delete_backup(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup.DeleteBackupRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_backups_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.list_backups), "__call__") as call: + client.list_backups(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup.ListBackupsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_restore_database_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.restore_database), "__call__") as call: + client.restore_database(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.RestoreDatabaseRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_database_operations_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_database_operations), "__call__" + ) as call: + client.list_database_operations(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.ListDatabaseOperationsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_backup_operations_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_operations), "__call__" + ) as call: + client.list_backup_operations(request=None) + + # Establish that the underlying stub method was called. 
+ call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup.ListBackupOperationsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_database_roles_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_database_roles), "__call__" + ) as call: + client.list_database_roles(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.ListDatabaseRolesRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_add_split_points_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.add_split_points), "__call__") as call: + client.add_split_points(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.AddSplitPointsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_create_backup_schedule_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.create_backup_schedule), "__call__" + ) as call: + client.create_backup_schedule(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = gsad_backup_schedule.CreateBackupScheduleRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_get_backup_schedule_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.get_backup_schedule), "__call__" + ) as call: + client.get_backup_schedule(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup_schedule.GetBackupScheduleRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_update_backup_schedule_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.update_backup_schedule), "__call__" + ) as call: + client.update_backup_schedule(request=None) + + # Establish that the underlying stub method was called. 
+ call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = gsad_backup_schedule.UpdateBackupScheduleRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_delete_backup_schedule_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.delete_backup_schedule), "__call__" + ) as call: + client.delete_backup_schedule(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup_schedule.DeleteBackupScheduleRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_backup_schedules_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_backup_schedules), "__call__" + ) as call: + client.list_backup_schedules(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = backup_schedule.ListBackupSchedulesRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_internal_update_graph_operation_empty_call_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.internal_update_graph_operation), "__call__" + ) as call: + client.internal_update_graph_operation(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_database_admin.InternalUpdateGraphOperationRequest() + + assert args[0] == request_msg + + +def test_database_admin_rest_lro_client(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + transport = client.transport + + # Ensure that we have an api-core operations client. + assert isinstance( + transport.operations_client, + operations_v1.AbstractOperationsClient, + ) + + # Ensure that subsequent calls to the property send the exact same object. + assert transport.operations_client is transport.operations_client + + +def test_transport_grpc_default(): + # A client should use the gRPC transport by default. + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + assert isinstance( + client.transport, + transports.DatabaseAdminGrpcTransport, + ) + + +def test_database_admin_base_transport_error(): + # Passing both a credentials object and credentials_file should raise an error + with pytest.raises(core_exceptions.DuplicateCredentialArgs): + transport = transports.DatabaseAdminTransport( + credentials=ga_credentials.AnonymousCredentials(), + credentials_file="credentials.json", + ) + + +def test_database_admin_base_transport(): + # Instantiate the base transport. 
+ with mock.patch( + "google.cloud.spanner_admin_database_v1.services.database_admin.transports.DatabaseAdminTransport.__init__" + ) as Transport: + Transport.return_value = None + transport = transports.DatabaseAdminTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Every method on the transport should just blindly + # raise NotImplementedError. + methods = ( + "list_databases", + "create_database", + "get_database", + "update_database", + "update_database_ddl", + "drop_database", + "get_database_ddl", + "set_iam_policy", + "get_iam_policy", + "test_iam_permissions", + "create_backup", + "copy_backup", + "get_backup", + "update_backup", + "delete_backup", + "list_backups", + "restore_database", + "list_database_operations", + "list_backup_operations", + "list_database_roles", + "add_split_points", + "create_backup_schedule", + "get_backup_schedule", + "update_backup_schedule", + "delete_backup_schedule", + "list_backup_schedules", + "internal_update_graph_operation", + "get_operation", + "cancel_operation", + "delete_operation", + "list_operations", + ) + for method in methods: + with pytest.raises(NotImplementedError): + getattr(transport, method)(request=object()) + + with pytest.raises(NotImplementedError): + transport.close() + + # Additionally, the LRO client (a property) should + # also raise NotImplementedError + with pytest.raises(NotImplementedError): + transport.operations_client + + # Catch all for all remaining methods and properties + remainder = [ + "kind", + ] + for r in remainder: + with pytest.raises(NotImplementedError): + getattr(transport, r)() + + +def test_database_admin_base_transport_with_credentials_file(): + # Instantiate the base transport with a credentials file + with mock.patch.object( + google.auth, "load_credentials_from_file", autospec=True + ) as load_creds, mock.patch( + "google.cloud.spanner_admin_database_v1.services.database_admin.transports.DatabaseAdminTransport._prep_wrapped_messages" + ) as Transport: + Transport.return_value = None + load_creds.return_value = (ga_credentials.AnonymousCredentials(), None) + transport = transports.DatabaseAdminTransport( + credentials_file="credentials.json", + quota_project_id="octopus", + ) + load_creds.assert_called_once_with( + "credentials.json", + scopes=None, + default_scopes=( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.admin", + ), + quota_project_id="octopus", + ) + + +def test_database_admin_base_transport_with_adc(): + # Test the default credentials are used if credentials and credentials_file are None. + with mock.patch.object(google.auth, "default", autospec=True) as adc, mock.patch( + "google.cloud.spanner_admin_database_v1.services.database_admin.transports.DatabaseAdminTransport._prep_wrapped_messages" + ) as Transport: + Transport.return_value = None + adc.return_value = (ga_credentials.AnonymousCredentials(), None) + transport = transports.DatabaseAdminTransport() + adc.assert_called_once() + + +def test_database_admin_auth_adc(): + # If no credentials are provided, we should use ADC credentials. 
+ with mock.patch.object(google.auth, "default", autospec=True) as adc: + adc.return_value = (ga_credentials.AnonymousCredentials(), None) + DatabaseAdminClient() + adc.assert_called_once_with( + scopes=None, + default_scopes=( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.admin", + ), + quota_project_id=None, + ) + + +@pytest.mark.parametrize( + "transport_class", + [ + transports.DatabaseAdminGrpcTransport, + transports.DatabaseAdminGrpcAsyncIOTransport, + ], +) +def test_database_admin_transport_auth_adc(transport_class): + # If credentials and host are not provided, the transport class should use + # ADC credentials. + with mock.patch.object(google.auth, "default", autospec=True) as adc: + adc.return_value = (ga_credentials.AnonymousCredentials(), None) + transport_class(quota_project_id="octopus", scopes=["1", "2"]) + adc.assert_called_once_with( + scopes=["1", "2"], + default_scopes=( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.admin", + ), + quota_project_id="octopus", + ) + + +@pytest.mark.parametrize( + "transport_class", + [ + transports.DatabaseAdminGrpcTransport, + transports.DatabaseAdminGrpcAsyncIOTransport, + transports.DatabaseAdminRestTransport, + ], +) +def test_database_admin_transport_auth_gdch_credentials(transport_class): + host = "https://language.com" + api_audience_tests = [None, "https://language2.com"] + api_audience_expect = [host, "https://language2.com"] + for t, e in zip(api_audience_tests, api_audience_expect): + with mock.patch.object(google.auth, "default", autospec=True) as adc: + gdch_mock = mock.MagicMock() + type(gdch_mock).with_gdch_audience = mock.PropertyMock( + return_value=gdch_mock + ) + adc.return_value = (gdch_mock, None) + transport_class(host=host, api_audience=t) + gdch_mock.with_gdch_audience.assert_called_once_with(e) + + +@pytest.mark.parametrize( + "transport_class,grpc_helpers", + [ + (transports.DatabaseAdminGrpcTransport, grpc_helpers), + (transports.DatabaseAdminGrpcAsyncIOTransport, grpc_helpers_async), + ], +) +def test_database_admin_transport_create_channel(transport_class, grpc_helpers): + # If credentials and host are not provided, the transport class should use + # ADC credentials. + with mock.patch.object( + google.auth, "default", autospec=True + ) as adc, mock.patch.object( + grpc_helpers, "create_channel", autospec=True + ) as create_channel: + creds = ga_credentials.AnonymousCredentials() + adc.return_value = (creds, None) + transport_class(quota_project_id="octopus", scopes=["1", "2"]) + + create_channel.assert_called_with( + "spanner.googleapis.com:443", + credentials=creds, + credentials_file=None, + quota_project_id="octopus", + default_scopes=( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.admin", + ), + scopes=["1", "2"], + default_host="spanner.googleapis.com", + ssl_credentials=None, + options=[ + ("grpc.max_send_message_length", -1), + ("grpc.max_receive_message_length", -1), + ], + ) + + +@pytest.mark.parametrize( + "transport_class", + [ + transports.DatabaseAdminGrpcTransport, + transports.DatabaseAdminGrpcAsyncIOTransport, + ], +) +def test_database_admin_grpc_transport_client_cert_source_for_mtls(transport_class): + cred = ga_credentials.AnonymousCredentials() + + # Check ssl_channel_credentials is used if provided. 
+    with mock.patch.object(transport_class, "create_channel") as mock_create_channel:
+        mock_ssl_channel_creds = mock.Mock()
+        transport_class(
+            host="squid.clam.whelk",
+            credentials=cred,
+            ssl_channel_credentials=mock_ssl_channel_creds,
+        )
+        mock_create_channel.assert_called_once_with(
+            "squid.clam.whelk:443",
+            credentials=cred,
+            credentials_file=None,
+            scopes=None,
+            ssl_credentials=mock_ssl_channel_creds,
+            quota_project_id=None,
+            options=[
+                ("grpc.max_send_message_length", -1),
+                ("grpc.max_receive_message_length", -1),
+            ],
+        )
+
+    # Check if ssl_channel_credentials is not provided, then client_cert_source_for_mtls
+    # is used.
+    with mock.patch.object(transport_class, "create_channel", return_value=mock.Mock()):
+        with mock.patch("grpc.ssl_channel_credentials") as mock_ssl_cred:
+            transport_class(
+                credentials=cred,
+                client_cert_source_for_mtls=client_cert_source_callback,
+            )
+            expected_cert, expected_key = client_cert_source_callback()
+            mock_ssl_cred.assert_called_once_with(
+                certificate_chain=expected_cert, private_key=expected_key
+            )
+
+
+def test_database_admin_http_transport_client_cert_source_for_mtls():
+    cred = ga_credentials.AnonymousCredentials()
+    with mock.patch(
+        "google.auth.transport.requests.AuthorizedSession.configure_mtls_channel"
+    ) as mock_configure_mtls_channel:
+        transports.DatabaseAdminRestTransport(
+            credentials=cred, client_cert_source_for_mtls=client_cert_source_callback
+        )
+        mock_configure_mtls_channel.assert_called_once_with(client_cert_source_callback)
+
+
+@pytest.mark.parametrize(
+    "transport_name",
+    [
+        "grpc",
+        "grpc_asyncio",
+        "rest",
+    ],
+)
+def test_database_admin_host_no_port(transport_name):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        client_options=client_options.ClientOptions(
+            api_endpoint="spanner.googleapis.com"
+        ),
+        transport=transport_name,
+    )
+    assert client.transport._host == (
+        "spanner.googleapis.com:443"
+        if transport_name in ["grpc", "grpc_asyncio"]
+        else "https://spanner.googleapis.com"
+    )
+
+
+@pytest.mark.parametrize(
+    "transport_name",
+    [
+        "grpc",
+        "grpc_asyncio",
+        "rest",
+    ],
+)
+def test_database_admin_host_with_port(transport_name):
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        client_options=client_options.ClientOptions(
+            api_endpoint="spanner.googleapis.com:8000"
+        ),
+        transport=transport_name,
+    )
+    assert client.transport._host == (
+        "spanner.googleapis.com:8000"
+        if transport_name in ["grpc", "grpc_asyncio"]
+        else "https://spanner.googleapis.com:8000"
+    )
+
+
+@pytest.mark.parametrize(
+    "transport_name",
+    [
+        "rest",
+    ],
+)
+def test_database_admin_client_transport_session_collision(transport_name):
+    creds1 = ga_credentials.AnonymousCredentials()
+    creds2 = ga_credentials.AnonymousCredentials()
+    client1 = DatabaseAdminClient(
+        credentials=creds1,
+        transport=transport_name,
+    )
+    client2 = DatabaseAdminClient(
+        credentials=creds2,
+        transport=transport_name,
+    )
+    session1 = client1.transport.list_databases._session
+    session2 = client2.transport.list_databases._session
+    assert session1 != session2
+    session1 = client1.transport.create_database._session
+    session2 = client2.transport.create_database._session
+    assert session1 != session2
+    session1 = client1.transport.get_database._session
+    session2 = client2.transport.get_database._session
+    assert session1 != session2
+    session1 = client1.transport.update_database._session
+    session2 = client2.transport.update_database._session
+    assert session1 != session2
+    session1 = client1.transport.update_database_ddl._session
+    session2 = client2.transport.update_database_ddl._session
+    assert session1 != session2
+    session1 = client1.transport.drop_database._session
+    session2 = client2.transport.drop_database._session
+    assert session1 != session2
+    session1 = client1.transport.get_database_ddl._session
+    session2 = client2.transport.get_database_ddl._session
+    assert session1 != session2
+    session1 = client1.transport.set_iam_policy._session
+    session2 = client2.transport.set_iam_policy._session
+    assert session1 != session2
+    session1 = client1.transport.get_iam_policy._session
+    session2 = client2.transport.get_iam_policy._session
+    assert session1 != session2
+    session1 = client1.transport.test_iam_permissions._session
+    session2 = client2.transport.test_iam_permissions._session
+    assert session1 != session2
+    session1 = client1.transport.create_backup._session
+    session2 = client2.transport.create_backup._session
+    assert session1 != session2
+    session1 = client1.transport.copy_backup._session
+    session2 = client2.transport.copy_backup._session
+    assert session1 != session2
+    session1 = client1.transport.get_backup._session
+    session2 = client2.transport.get_backup._session
+    assert session1 != session2
+    session1 = client1.transport.update_backup._session
+    session2 = client2.transport.update_backup._session
+    assert session1 != session2
+    session1 = client1.transport.delete_backup._session
+    session2 = client2.transport.delete_backup._session
+    assert session1 != session2
+    session1 = client1.transport.list_backups._session
+    session2 = client2.transport.list_backups._session
+    assert session1 != session2
+    session1 = client1.transport.restore_database._session
+    session2 = client2.transport.restore_database._session
+    assert session1 != session2
+    session1 = client1.transport.list_database_operations._session
+    session2 = client2.transport.list_database_operations._session
+    assert session1 != session2
+    session1 = client1.transport.list_backup_operations._session
+    session2 = client2.transport.list_backup_operations._session
+    assert session1 != session2
+    session1 = client1.transport.list_database_roles._session
+    session2 = client2.transport.list_database_roles._session
+    assert session1 != session2
+    session1 = client1.transport.add_split_points._session
+    session2 = client2.transport.add_split_points._session
+    assert session1 != session2
+    session1 = client1.transport.create_backup_schedule._session
+    session2 = client2.transport.create_backup_schedule._session
+    assert session1 != session2
+    session1 = client1.transport.get_backup_schedule._session
+    session2 = client2.transport.get_backup_schedule._session
+    assert session1 != session2
+    session1 = client1.transport.update_backup_schedule._session
+    session2 = client2.transport.update_backup_schedule._session
+    assert session1 != session2
+    session1 = client1.transport.delete_backup_schedule._session
+    session2 = client2.transport.delete_backup_schedule._session
+    assert session1 != session2
+    session1 = client1.transport.list_backup_schedules._session
+    session2 = client2.transport.list_backup_schedules._session
+    assert session1 != session2
+    session1 = client1.transport.internal_update_graph_operation._session
+    session2 = client2.transport.internal_update_graph_operation._session
+    assert session1 != session2
+
+
+def test_database_admin_grpc_transport_channel():
+    channel = grpc.secure_channel("http://localhost/", grpc.local_channel_credentials())
+
+    # Check that channel is used if provided.
+    transport = transports.DatabaseAdminGrpcTransport(
+        host="squid.clam.whelk",
+        channel=channel,
+    )
+    assert transport.grpc_channel == channel
+    assert transport._host == "squid.clam.whelk:443"
+    assert transport._ssl_channel_credentials is None
+
+
+def test_database_admin_grpc_asyncio_transport_channel():
+    channel = aio.secure_channel("http://localhost/", grpc.local_channel_credentials())
+
+    # Check that channel is used if provided.
+    transport = transports.DatabaseAdminGrpcAsyncIOTransport(
+        host="squid.clam.whelk",
+        channel=channel,
+    )
+    assert transport.grpc_channel == channel
+    assert transport._host == "squid.clam.whelk:443"
+    assert transport._ssl_channel_credentials is None
+
+
+# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are
+# removed from grpc/grpc_asyncio transport constructor.
+@pytest.mark.parametrize(
+    "transport_class",
+    [
+        transports.DatabaseAdminGrpcTransport,
+        transports.DatabaseAdminGrpcAsyncIOTransport,
+    ],
+)
+def test_database_admin_transport_channel_mtls_with_client_cert_source(transport_class):
+    with mock.patch(
+        "grpc.ssl_channel_credentials", autospec=True
+    ) as grpc_ssl_channel_cred:
+        with mock.patch.object(
+            transport_class, "create_channel"
+        ) as grpc_create_channel:
+            mock_ssl_cred = mock.Mock()
+            grpc_ssl_channel_cred.return_value = mock_ssl_cred
+
+            mock_grpc_channel = mock.Mock()
+            grpc_create_channel.return_value = mock_grpc_channel
+
+            cred = ga_credentials.AnonymousCredentials()
+            with pytest.warns(DeprecationWarning):
+                with mock.patch.object(google.auth, "default") as adc:
+                    adc.return_value = (cred, None)
+                    transport = transport_class(
+                        host="squid.clam.whelk",
+                        api_mtls_endpoint="mtls.squid.clam.whelk",
+                        client_cert_source=client_cert_source_callback,
+                    )
+                    adc.assert_called_once()
+
+            grpc_ssl_channel_cred.assert_called_once_with(
+                certificate_chain=b"cert bytes", private_key=b"key bytes"
+            )
+            grpc_create_channel.assert_called_once_with(
+                "mtls.squid.clam.whelk:443",
+                credentials=cred,
+                credentials_file=None,
+                scopes=None,
+                ssl_credentials=mock_ssl_cred,
+                quota_project_id=None,
+                options=[
+                    ("grpc.max_send_message_length", -1),
+                    ("grpc.max_receive_message_length", -1),
+                ],
+            )
+            assert transport.grpc_channel == mock_grpc_channel
+            assert transport._ssl_channel_credentials == mock_ssl_cred
+
+
+# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are
+# removed from grpc/grpc_asyncio transport constructor.
+@pytest.mark.parametrize(
+    "transport_class",
+    [
+        transports.DatabaseAdminGrpcTransport,
+        transports.DatabaseAdminGrpcAsyncIOTransport,
+    ],
+)
+def test_database_admin_transport_channel_mtls_with_adc(transport_class):
+    mock_ssl_cred = mock.Mock()
+    with mock.patch.multiple(
+        "google.auth.transport.grpc.SslCredentials",
+        __init__=mock.Mock(return_value=None),
+        ssl_credentials=mock.PropertyMock(return_value=mock_ssl_cred),
+    ):
+        with mock.patch.object(
+            transport_class, "create_channel"
+        ) as grpc_create_channel:
+            mock_grpc_channel = mock.Mock()
+            grpc_create_channel.return_value = mock_grpc_channel
+            mock_cred = mock.Mock()
+
+            with pytest.warns(DeprecationWarning):
+                transport = transport_class(
+                    host="squid.clam.whelk",
+                    credentials=mock_cred,
+                    api_mtls_endpoint="mtls.squid.clam.whelk",
+                    client_cert_source=None,
+                )
+
+            grpc_create_channel.assert_called_once_with(
+                "mtls.squid.clam.whelk:443",
+                credentials=mock_cred,
+                credentials_file=None,
+                scopes=None,
+                ssl_credentials=mock_ssl_cred,
+                quota_project_id=None,
+                options=[
+                    ("grpc.max_send_message_length", -1),
+                    ("grpc.max_receive_message_length", -1),
+                ],
+            )
+            assert transport.grpc_channel == mock_grpc_channel
+
+
+def test_database_admin_grpc_lro_client():
+    client = DatabaseAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+    transport = client.transport
+
+    # Ensure that we have an api-core operations client.
+    assert isinstance(
+        transport.operations_client,
+        operations_v1.OperationsClient,
+    )
+
+    # Ensure that subsequent calls to the property send the exact same object.
+    assert transport.operations_client is transport.operations_client
+
+
+def test_database_admin_grpc_lro_async_client():
+    client = DatabaseAdminAsyncClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc_asyncio",
+    )
+    transport = client.transport
+
+    # Ensure that we have an api-core operations client.
+    assert isinstance(
+        transport.operations_client,
+        operations_v1.OperationsAsyncClient,
+    )
+
+    # Ensure that subsequent calls to the property send the exact same object.
+    assert transport.operations_client is transport.operations_client
+
+
+def test_backup_path():
+    project = "squid"
+    instance = "clam"
+    backup = "whelk"
+    expected = "projects/{project}/instances/{instance}/backups/{backup}".format(
+        project=project,
+        instance=instance,
+        backup=backup,
+    )
+    actual = DatabaseAdminClient.backup_path(project, instance, backup)
+    assert expected == actual
+
+
+def test_parse_backup_path():
+    expected = {
+        "project": "octopus",
+        "instance": "oyster",
+        "backup": "nudibranch",
+    }
+    path = DatabaseAdminClient.backup_path(**expected)
+
+    # Check that the path construction is reversible.
+ actual = DatabaseAdminClient.parse_backup_path(path) + assert expected == actual + + +def test_backup_schedule_path(): + project = "cuttlefish" + instance = "mussel" + database = "winkle" + schedule = "nautilus" + expected = "projects/{project}/instances/{instance}/databases/{database}/backupSchedules/{schedule}".format( + project=project, + instance=instance, + database=database, + schedule=schedule, + ) + actual = DatabaseAdminClient.backup_schedule_path( + project, instance, database, schedule + ) + assert expected == actual + + +def test_parse_backup_schedule_path(): + expected = { + "project": "scallop", + "instance": "abalone", + "database": "squid", + "schedule": "clam", + } + path = DatabaseAdminClient.backup_schedule_path(**expected) + + # Check that the path construction is reversible. + actual = DatabaseAdminClient.parse_backup_schedule_path(path) + assert expected == actual + + +def test_crypto_key_path(): + project = "whelk" + location = "octopus" + key_ring = "oyster" + crypto_key = "nudibranch" + expected = "projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}".format( + project=project, + location=location, + key_ring=key_ring, + crypto_key=crypto_key, + ) + actual = DatabaseAdminClient.crypto_key_path( + project, location, key_ring, crypto_key + ) + assert expected == actual + + +def test_parse_crypto_key_path(): + expected = { + "project": "cuttlefish", + "location": "mussel", + "key_ring": "winkle", + "crypto_key": "nautilus", + } + path = DatabaseAdminClient.crypto_key_path(**expected) + + # Check that the path construction is reversible. + actual = DatabaseAdminClient.parse_crypto_key_path(path) + assert expected == actual + + +def test_crypto_key_version_path(): + project = "scallop" + location = "abalone" + key_ring = "squid" + crypto_key = "clam" + crypto_key_version = "whelk" + expected = "projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}/cryptoKeyVersions/{crypto_key_version}".format( + project=project, + location=location, + key_ring=key_ring, + crypto_key=crypto_key, + crypto_key_version=crypto_key_version, + ) + actual = DatabaseAdminClient.crypto_key_version_path( + project, location, key_ring, crypto_key, crypto_key_version + ) + assert expected == actual + + +def test_parse_crypto_key_version_path(): + expected = { + "project": "octopus", + "location": "oyster", + "key_ring": "nudibranch", + "crypto_key": "cuttlefish", + "crypto_key_version": "mussel", + } + path = DatabaseAdminClient.crypto_key_version_path(**expected) + + # Check that the path construction is reversible. + actual = DatabaseAdminClient.parse_crypto_key_version_path(path) + assert expected == actual + + +def test_database_path(): + project = "winkle" + instance = "nautilus" + database = "scallop" + expected = "projects/{project}/instances/{instance}/databases/{database}".format( + project=project, + instance=instance, + database=database, + ) + actual = DatabaseAdminClient.database_path(project, instance, database) + assert expected == actual + + +def test_parse_database_path(): + expected = { + "project": "abalone", + "instance": "squid", + "database": "clam", + } + path = DatabaseAdminClient.database_path(**expected) + + # Check that the path construction is reversible. 
+ actual = DatabaseAdminClient.parse_database_path(path) + assert expected == actual + + +def test_database_role_path(): + project = "whelk" + instance = "octopus" + database = "oyster" + role = "nudibranch" + expected = "projects/{project}/instances/{instance}/databases/{database}/databaseRoles/{role}".format( + project=project, + instance=instance, + database=database, + role=role, + ) + actual = DatabaseAdminClient.database_role_path(project, instance, database, role) + assert expected == actual + + +def test_parse_database_role_path(): + expected = { + "project": "cuttlefish", + "instance": "mussel", + "database": "winkle", + "role": "nautilus", + } + path = DatabaseAdminClient.database_role_path(**expected) + + # Check that the path construction is reversible. + actual = DatabaseAdminClient.parse_database_role_path(path) + assert expected == actual + + +def test_instance_path(): + project = "scallop" + instance = "abalone" + expected = "projects/{project}/instances/{instance}".format( + project=project, + instance=instance, + ) + actual = DatabaseAdminClient.instance_path(project, instance) + assert expected == actual + + +def test_parse_instance_path(): + expected = { + "project": "squid", + "instance": "clam", + } + path = DatabaseAdminClient.instance_path(**expected) + + # Check that the path construction is reversible. + actual = DatabaseAdminClient.parse_instance_path(path) + assert expected == actual + + +def test_instance_partition_path(): + project = "whelk" + instance = "octopus" + instance_partition = "oyster" + expected = "projects/{project}/instances/{instance}/instancePartitions/{instance_partition}".format( + project=project, + instance=instance, + instance_partition=instance_partition, + ) + actual = DatabaseAdminClient.instance_partition_path( + project, instance, instance_partition + ) + assert expected == actual + + +def test_parse_instance_partition_path(): + expected = { + "project": "nudibranch", + "instance": "cuttlefish", + "instance_partition": "mussel", + } + path = DatabaseAdminClient.instance_partition_path(**expected) + + # Check that the path construction is reversible. + actual = DatabaseAdminClient.parse_instance_partition_path(path) + assert expected == actual + + +def test_common_billing_account_path(): + billing_account = "winkle" + expected = "billingAccounts/{billing_account}".format( + billing_account=billing_account, + ) + actual = DatabaseAdminClient.common_billing_account_path(billing_account) + assert expected == actual + + +def test_parse_common_billing_account_path(): + expected = { + "billing_account": "nautilus", + } + path = DatabaseAdminClient.common_billing_account_path(**expected) + + # Check that the path construction is reversible. + actual = DatabaseAdminClient.parse_common_billing_account_path(path) + assert expected == actual + + +def test_common_folder_path(): + folder = "scallop" + expected = "folders/{folder}".format( + folder=folder, + ) + actual = DatabaseAdminClient.common_folder_path(folder) + assert expected == actual + + +def test_parse_common_folder_path(): + expected = { + "folder": "abalone", + } + path = DatabaseAdminClient.common_folder_path(**expected) + + # Check that the path construction is reversible. 
+ actual = DatabaseAdminClient.parse_common_folder_path(path) + assert expected == actual + + +def test_common_organization_path(): + organization = "squid" + expected = "organizations/{organization}".format( + organization=organization, + ) + actual = DatabaseAdminClient.common_organization_path(organization) + assert expected == actual + + +def test_parse_common_organization_path(): + expected = { + "organization": "clam", + } + path = DatabaseAdminClient.common_organization_path(**expected) + + # Check that the path construction is reversible. + actual = DatabaseAdminClient.parse_common_organization_path(path) + assert expected == actual + + +def test_common_project_path(): + project = "whelk" + expected = "projects/{project}".format( + project=project, + ) + actual = DatabaseAdminClient.common_project_path(project) + assert expected == actual + + +def test_parse_common_project_path(): + expected = { + "project": "octopus", + } + path = DatabaseAdminClient.common_project_path(**expected) + + # Check that the path construction is reversible. + actual = DatabaseAdminClient.parse_common_project_path(path) + assert expected == actual + + +def test_common_location_path(): + project = "oyster" + location = "nudibranch" + expected = "projects/{project}/locations/{location}".format( + project=project, + location=location, + ) + actual = DatabaseAdminClient.common_location_path(project, location) + assert expected == actual + + +def test_parse_common_location_path(): + expected = { + "project": "cuttlefish", + "location": "mussel", + } + path = DatabaseAdminClient.common_location_path(**expected) + + # Check that the path construction is reversible. + actual = DatabaseAdminClient.parse_common_location_path(path) + assert expected == actual + + +def test_client_with_default_client_info(): + client_info = gapic_v1.client_info.ClientInfo() + + with mock.patch.object( + transports.DatabaseAdminTransport, "_prep_wrapped_messages" + ) as prep: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + client_info=client_info, + ) + prep.assert_called_once_with(client_info) + + with mock.patch.object( + transports.DatabaseAdminTransport, "_prep_wrapped_messages" + ) as prep: + transport_class = DatabaseAdminClient.get_transport_class() + transport = transport_class( + credentials=ga_credentials.AnonymousCredentials(), + client_info=client_info, + ) + prep.assert_called_once_with(client_info) + + +def test_delete_operation(transport: str = "grpc"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = operations_pb2.DeleteOperationRequest() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_operation), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = None + response = client.delete_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the response is the type that we expect. 
+ assert response is None + + +@pytest.mark.asyncio +async def test_delete_operation_async(transport: str = "grpc_asyncio"): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = operations_pb2.DeleteOperationRequest() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_operation), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + response = await client.delete_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the response is the type that we expect. + assert response is None + + +def test_delete_operation_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = operations_pb2.DeleteOperationRequest() + request.name = "locations" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_operation), "__call__") as call: + call.return_value = None + + client.delete_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=locations", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_delete_operation_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = operations_pb2.DeleteOperationRequest() + request.name = "locations" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_operation), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + await client.delete_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=locations", + ) in kw["metadata"] + + +def test_delete_operation_from_dict(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_operation), "__call__") as call: + # Designate an appropriate return value for the call. 
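+        # (The request below is a plain dict; the client coerces dict requests
+        # into the corresponding protobuf request type before calling the stub.)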
+ call.return_value = None + + response = client.delete_operation( + request={ + "name": "locations", + } + ) + call.assert_called() + + +@pytest.mark.asyncio +async def test_delete_operation_from_dict_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_operation), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + response = await client.delete_operation( + request={ + "name": "locations", + } + ) + call.assert_called() + + +def test_cancel_operation(transport: str = "grpc"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = operations_pb2.CancelOperationRequest() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.cancel_operation), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = None + response = client.cancel_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the response is the type that we expect. + assert response is None + + +@pytest.mark.asyncio +async def test_cancel_operation_async(transport: str = "grpc_asyncio"): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = operations_pb2.CancelOperationRequest() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.cancel_operation), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + response = await client.cancel_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the response is the type that we expect. + assert response is None + + +def test_cancel_operation_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = operations_pb2.CancelOperationRequest() + request.name = "locations" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.cancel_operation), "__call__") as call: + call.return_value = None + + client.cancel_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. 
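+    # The x-goog-request-params metadata entry is generated implicitly from
+    # the request's `name` field so the backend can route the request.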
+ _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=locations", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_cancel_operation_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = operations_pb2.CancelOperationRequest() + request.name = "locations" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.cancel_operation), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + await client.cancel_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=locations", + ) in kw["metadata"] + + +def test_cancel_operation_from_dict(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.cancel_operation), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = None + + response = client.cancel_operation( + request={ + "name": "locations", + } + ) + call.assert_called() + + +@pytest.mark.asyncio +async def test_cancel_operation_from_dict_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.cancel_operation), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + response = await client.cancel_operation( + request={ + "name": "locations", + } + ) + call.assert_called() + + +def test_get_operation(transport: str = "grpc"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = operations_pb2.GetOperationRequest() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_operation), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation() + response = client.get_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, operations_pb2.Operation) + + +@pytest.mark.asyncio +async def test_get_operation_async(transport: str = "grpc_asyncio"): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = operations_pb2.GetOperationRequest() + + # Mock the actual call within the gRPC stub, and fake the request. 
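+    # Patching the transport's bound stub method intercepts the call at the
+    # gRPC multicallable layer, so no real channel traffic occurs.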
+ with mock.patch.object(type(client.transport.get_operation), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation() + ) + response = await client.get_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, operations_pb2.Operation) + + +def test_get_operation_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = operations_pb2.GetOperationRequest() + request.name = "locations" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_operation), "__call__") as call: + call.return_value = operations_pb2.Operation() + + client.get_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=locations", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_get_operation_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = operations_pb2.GetOperationRequest() + request.name = "locations" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_operation), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation() + ) + await client.get_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=locations", + ) in kw["metadata"] + + +def test_get_operation_from_dict(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_operation), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation() + + response = client.get_operation( + request={ + "name": "locations", + } + ) + call.assert_called() + + +@pytest.mark.asyncio +async def test_get_operation_from_dict_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_operation), "__call__") as call: + # Designate an appropriate return value for the call. 
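+        # The async transport expects an awaitable, so the fake response is
+        # wrapped in FakeUnaryUnaryCall instead of being returned directly.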
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation() + ) + response = await client.get_operation( + request={ + "name": "locations", + } + ) + call.assert_called() + + +def test_list_operations(transport: str = "grpc"): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = operations_pb2.ListOperationsRequest() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_operations), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.ListOperationsResponse() + response = client.list_operations(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, operations_pb2.ListOperationsResponse) + + +@pytest.mark.asyncio +async def test_list_operations_async(transport: str = "grpc_asyncio"): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = operations_pb2.ListOperationsRequest() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_operations), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.ListOperationsResponse() + ) + response = await client.list_operations(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, operations_pb2.ListOperationsResponse) + + +def test_list_operations_field_headers(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = operations_pb2.ListOperationsRequest() + request.name = "locations" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_operations), "__call__") as call: + call.return_value = operations_pb2.ListOperationsResponse() + + client.list_operations(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=locations", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_list_operations_field_headers_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. 
+ request = operations_pb2.ListOperationsRequest() + request.name = "locations" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_operations), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.ListOperationsResponse() + ) + await client.list_operations(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=locations", + ) in kw["metadata"] + + +def test_list_operations_from_dict(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_operations), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.ListOperationsResponse() + + response = client.list_operations( + request={ + "name": "locations", + } + ) + call.assert_called() + + +@pytest.mark.asyncio +async def test_list_operations_from_dict_async(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_operations), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.ListOperationsResponse() + ) + response = await client.list_operations( + request={ + "name": "locations", + } + ) + call.assert_called() + + +def test_transport_close_grpc(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="grpc" + ) + with mock.patch.object( + type(getattr(client.transport, "_grpc_channel")), "close" + ) as close: + with client: + close.assert_not_called() + close.assert_called_once() + + +@pytest.mark.asyncio +async def test_transport_close_grpc_asyncio(): + client = DatabaseAdminAsyncClient( + credentials=async_anonymous_credentials(), transport="grpc_asyncio" + ) + with mock.patch.object( + type(getattr(client.transport, "_grpc_channel")), "close" + ) as close: + async with client: + close.assert_not_called() + close.assert_called_once() + + +def test_transport_close_rest(): + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + with mock.patch.object( + type(getattr(client.transport, "_session")), "close" + ) as close: + with client: + close.assert_not_called() + close.assert_called_once() + + +def test_client_ctx(): + transports = [ + "rest", + "grpc", + ] + for transport in transports: + client = DatabaseAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport=transport + ) + # Test client calls underlying transport. 
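+        # The client doubles as a context manager; exiting the `with` block
+        # must close the underlying transport exactly once.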
+ with mock.patch.object(type(client.transport), "close") as close: + close.assert_not_called() + with client: + pass + close.assert_called() + + +@pytest.mark.parametrize( + "client_class,transport_class", + [ + (DatabaseAdminClient, transports.DatabaseAdminGrpcTransport), + (DatabaseAdminAsyncClient, transports.DatabaseAdminGrpcAsyncIOTransport), + ], +) +def test_api_key_credentials(client_class, transport_class): + with mock.patch.object( + google.auth._default, "get_api_key_credentials", create=True + ) as get_api_key_credentials: + mock_cred = mock.Mock() + get_api_key_credentials.return_value = mock_cred + options = client_options.ClientOptions() + options.api_key = "api_key" + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=mock_cred, + credentials_file=None, + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) diff --git a/tests/unit/gapic/spanner_admin_instance_v1/__init__.py b/tests/unit/gapic/spanner_admin_instance_v1/__init__.py new file mode 100644 index 0000000000..cbf94b283c --- /dev/null +++ b/tests/unit/gapic/spanner_admin_instance_v1/__init__.py @@ -0,0 +1,15 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# diff --git a/tests/unit/gapic/spanner_admin_instance_v1/test_instance_admin.py b/tests/unit/gapic/spanner_admin_instance_v1/test_instance_admin.py new file mode 100644 index 0000000000..52424e65d3 --- /dev/null +++ b/tests/unit/gapic/spanner_admin_instance_v1/test_instance_admin.py @@ -0,0 +1,19822 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+#
+import os
+import re
+
+# try/except added for compatibility with python < 3.8
+try:
+    from unittest import mock
+    from unittest.mock import AsyncMock  # pragma: NO COVER
+except ImportError:  # pragma: NO COVER
+    import mock
+
+import grpc
+from grpc.experimental import aio
+from collections.abc import Iterable, AsyncIterable
+from google.protobuf import json_format
+import json
+import math
+import pytest
+from google.api_core import api_core_version
+from proto.marshal.rules.dates import DurationRule, TimestampRule
+from proto.marshal.rules import wrappers
+from requests import Response
+from requests import Request, PreparedRequest
+from requests.sessions import Session
+
+try:
+    from google.auth.aio import credentials as ga_credentials_async
+
+    HAS_GOOGLE_AUTH_AIO = True
+except ImportError:  # pragma: NO COVER
+    HAS_GOOGLE_AUTH_AIO = False
+
+from google.api_core import client_options
+from google.api_core import exceptions as core_exceptions
+from google.api_core import future
+from google.api_core import gapic_v1
+from google.api_core import grpc_helpers
+from google.api_core import grpc_helpers_async
+from google.api_core import operation
+from google.api_core import operation_async  # type: ignore
+from google.api_core import operations_v1
+from google.api_core import path_template
+from google.api_core import retry as retries
+from google.auth import credentials as ga_credentials
+from google.auth.exceptions import MutualTLSChannelError
+from google.cloud.spanner_admin_instance_v1.services.instance_admin import (
+    InstanceAdminAsyncClient,
+)
+from google.cloud.spanner_admin_instance_v1.services.instance_admin import (
+    InstanceAdminClient,
+)
+from google.cloud.spanner_admin_instance_v1.services.instance_admin import pagers
+from google.cloud.spanner_admin_instance_v1.services.instance_admin import transports
+from google.cloud.spanner_admin_instance_v1.types import common
+from google.cloud.spanner_admin_instance_v1.types import spanner_instance_admin
+from google.iam.v1 import iam_policy_pb2  # type: ignore
+from google.iam.v1 import options_pb2  # type: ignore
+from google.iam.v1 import policy_pb2  # type: ignore
+from google.longrunning import operations_pb2  # type: ignore
+from google.oauth2 import service_account
+from google.protobuf import field_mask_pb2  # type: ignore
+from google.protobuf import timestamp_pb2  # type: ignore
+from google.type import expr_pb2  # type: ignore
+import google.auth
+
+
+CRED_INFO_JSON = {
+    "credential_source": "/path/to/file",
+    "credential_type": "service account credentials",
+    "principal": "service-account@example.com",
+}
+CRED_INFO_STRING = json.dumps(CRED_INFO_JSON)
+
+
+# Yield `data` in successive chunk_size-sized pieces, utf-8 encoded.
+async def mock_async_gen(data, chunk_size=1):
+    for i in range(0, len(data), chunk_size):  # pragma: NO COVER
+        chunk = data[i : i + chunk_size]
+        yield chunk.encode("utf-8")
+
+
+def client_cert_source_callback():
+    return b"cert bytes", b"key bytes"
+
+
+# TODO: use async auth anon credentials by default once the minimum version of google-auth is upgraded.
+# See related issue: https://github.com/googleapis/gapic-generator-python/issues/2107.
+def async_anonymous_credentials():
+    if HAS_GOOGLE_AUTH_AIO:
+        return ga_credentials_async.AnonymousCredentials()
+    return ga_credentials.AnonymousCredentials()
+
+
+# If default endpoint is localhost, then default mtls endpoint will be the same.
+# This method modifies the default endpoint so the client can produce a different +# mtls endpoint for endpoint testing purposes. +def modify_default_endpoint(client): + return ( + "foo.googleapis.com" + if ("localhost" in client.DEFAULT_ENDPOINT) + else client.DEFAULT_ENDPOINT + ) + + +# If default endpoint template is localhost, then default mtls endpoint will be the same. +# This method modifies the default endpoint template so the client can produce a different +# mtls endpoint for endpoint testing purposes. +def modify_default_endpoint_template(client): + return ( + "test.{UNIVERSE_DOMAIN}" + if ("localhost" in client._DEFAULT_ENDPOINT_TEMPLATE) + else client._DEFAULT_ENDPOINT_TEMPLATE + ) + + +def test__get_default_mtls_endpoint(): + api_endpoint = "example.googleapis.com" + api_mtls_endpoint = "example.mtls.googleapis.com" + sandbox_endpoint = "example.sandbox.googleapis.com" + sandbox_mtls_endpoint = "example.mtls.sandbox.googleapis.com" + non_googleapi = "api.example.com" + + assert InstanceAdminClient._get_default_mtls_endpoint(None) is None + assert ( + InstanceAdminClient._get_default_mtls_endpoint(api_endpoint) + == api_mtls_endpoint + ) + assert ( + InstanceAdminClient._get_default_mtls_endpoint(api_mtls_endpoint) + == api_mtls_endpoint + ) + assert ( + InstanceAdminClient._get_default_mtls_endpoint(sandbox_endpoint) + == sandbox_mtls_endpoint + ) + assert ( + InstanceAdminClient._get_default_mtls_endpoint(sandbox_mtls_endpoint) + == sandbox_mtls_endpoint + ) + assert ( + InstanceAdminClient._get_default_mtls_endpoint(non_googleapi) == non_googleapi + ) + + +def test__read_environment_variables(): + assert InstanceAdminClient._read_environment_variables() == (False, "auto", None) + + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "true"}): + assert InstanceAdminClient._read_environment_variables() == (True, "auto", None) + + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "false"}): + assert InstanceAdminClient._read_environment_variables() == ( + False, + "auto", + None, + ) + + with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"} + ): + with pytest.raises(ValueError) as excinfo: + InstanceAdminClient._read_environment_variables() + assert ( + str(excinfo.value) + == "Environment variable `GOOGLE_API_USE_CLIENT_CERTIFICATE` must be either `true` or `false`" + ) + + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}): + assert InstanceAdminClient._read_environment_variables() == ( + False, + "never", + None, + ) + + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}): + assert InstanceAdminClient._read_environment_variables() == ( + False, + "always", + None, + ) + + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "auto"}): + assert InstanceAdminClient._read_environment_variables() == ( + False, + "auto", + None, + ) + + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}): + with pytest.raises(MutualTLSChannelError) as excinfo: + InstanceAdminClient._read_environment_variables() + assert ( + str(excinfo.value) + == "Environment variable `GOOGLE_API_USE_MTLS_ENDPOINT` must be `never`, `auto` or `always`" + ) + + with mock.patch.dict(os.environ, {"GOOGLE_CLOUD_UNIVERSE_DOMAIN": "foo.com"}): + assert InstanceAdminClient._read_environment_variables() == ( + False, + "auto", + "foo.com", + ) + + +def test__get_client_cert_source(): + mock_provided_cert_source = mock.Mock() + mock_default_cert_source = 
mock.Mock() + + assert InstanceAdminClient._get_client_cert_source(None, False) is None + assert ( + InstanceAdminClient._get_client_cert_source(mock_provided_cert_source, False) + is None + ) + assert ( + InstanceAdminClient._get_client_cert_source(mock_provided_cert_source, True) + == mock_provided_cert_source + ) + + with mock.patch( + "google.auth.transport.mtls.has_default_client_cert_source", return_value=True + ): + with mock.patch( + "google.auth.transport.mtls.default_client_cert_source", + return_value=mock_default_cert_source, + ): + assert ( + InstanceAdminClient._get_client_cert_source(None, True) + is mock_default_cert_source + ) + assert ( + InstanceAdminClient._get_client_cert_source( + mock_provided_cert_source, "true" + ) + is mock_provided_cert_source + ) + + +@mock.patch.object( + InstanceAdminClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(InstanceAdminClient), +) +@mock.patch.object( + InstanceAdminAsyncClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(InstanceAdminAsyncClient), +) +def test__get_api_endpoint(): + api_override = "foo.com" + mock_client_cert_source = mock.Mock() + default_universe = InstanceAdminClient._DEFAULT_UNIVERSE + default_endpoint = InstanceAdminClient._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=default_universe + ) + mock_universe = "bar.com" + mock_endpoint = InstanceAdminClient._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=mock_universe + ) + + assert ( + InstanceAdminClient._get_api_endpoint( + api_override, mock_client_cert_source, default_universe, "always" + ) + == api_override + ) + assert ( + InstanceAdminClient._get_api_endpoint( + None, mock_client_cert_source, default_universe, "auto" + ) + == InstanceAdminClient.DEFAULT_MTLS_ENDPOINT + ) + assert ( + InstanceAdminClient._get_api_endpoint(None, None, default_universe, "auto") + == default_endpoint + ) + assert ( + InstanceAdminClient._get_api_endpoint(None, None, default_universe, "always") + == InstanceAdminClient.DEFAULT_MTLS_ENDPOINT + ) + assert ( + InstanceAdminClient._get_api_endpoint( + None, mock_client_cert_source, default_universe, "always" + ) + == InstanceAdminClient.DEFAULT_MTLS_ENDPOINT + ) + assert ( + InstanceAdminClient._get_api_endpoint(None, None, mock_universe, "never") + == mock_endpoint + ) + assert ( + InstanceAdminClient._get_api_endpoint(None, None, default_universe, "never") + == default_endpoint + ) + + with pytest.raises(MutualTLSChannelError) as excinfo: + InstanceAdminClient._get_api_endpoint( + None, mock_client_cert_source, mock_universe, "auto" + ) + assert ( + str(excinfo.value) + == "mTLS is not supported in any universe other than googleapis.com." + ) + + +def test__get_universe_domain(): + client_universe_domain = "foo.com" + universe_domain_env = "bar.com" + + assert ( + InstanceAdminClient._get_universe_domain( + client_universe_domain, universe_domain_env + ) + == client_universe_domain + ) + assert ( + InstanceAdminClient._get_universe_domain(None, universe_domain_env) + == universe_domain_env + ) + assert ( + InstanceAdminClient._get_universe_domain(None, None) + == InstanceAdminClient._DEFAULT_UNIVERSE + ) + + with pytest.raises(ValueError) as excinfo: + InstanceAdminClient._get_universe_domain("", None) + assert str(excinfo.value) == "Universe Domain cannot be an empty string." 
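+
+
+# The cred-info tests below verify that, for auth-related error codes
+# (401/403/404), the client appends the credential's get_cred_info() payload
+# to the error details to aid debugging, and leaves other errors unchanged.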
+ + +@pytest.mark.parametrize( + "error_code,cred_info_json,show_cred_info", + [ + (401, CRED_INFO_JSON, True), + (403, CRED_INFO_JSON, True), + (404, CRED_INFO_JSON, True), + (500, CRED_INFO_JSON, False), + (401, None, False), + (403, None, False), + (404, None, False), + (500, None, False), + ], +) +def test__add_cred_info_for_auth_errors(error_code, cred_info_json, show_cred_info): + cred = mock.Mock(["get_cred_info"]) + cred.get_cred_info = mock.Mock(return_value=cred_info_json) + client = InstanceAdminClient(credentials=cred) + client._transport._credentials = cred + + error = core_exceptions.GoogleAPICallError("message", details=["foo"]) + error.code = error_code + + client._add_cred_info_for_auth_errors(error) + if show_cred_info: + assert error.details == ["foo", CRED_INFO_STRING] + else: + assert error.details == ["foo"] + + +@pytest.mark.parametrize("error_code", [401, 403, 404, 500]) +def test__add_cred_info_for_auth_errors_no_get_cred_info(error_code): + cred = mock.Mock([]) + assert not hasattr(cred, "get_cred_info") + client = InstanceAdminClient(credentials=cred) + client._transport._credentials = cred + + error = core_exceptions.GoogleAPICallError("message", details=[]) + error.code = error_code + + client._add_cred_info_for_auth_errors(error) + assert error.details == [] + + +@pytest.mark.parametrize( + "client_class,transport_name", + [ + (InstanceAdminClient, "grpc"), + (InstanceAdminAsyncClient, "grpc_asyncio"), + (InstanceAdminClient, "rest"), + ], +) +def test_instance_admin_client_from_service_account_info(client_class, transport_name): + creds = ga_credentials.AnonymousCredentials() + with mock.patch.object( + service_account.Credentials, "from_service_account_info" + ) as factory: + factory.return_value = creds + info = {"valid": True} + client = client_class.from_service_account_info(info, transport=transport_name) + assert client.transport._credentials == creds + assert isinstance(client, client_class) + + assert client.transport._host == ( + "spanner.googleapis.com:443" + if transport_name in ["grpc", "grpc_asyncio"] + else "https://spanner.googleapis.com" + ) + + +@pytest.mark.parametrize( + "transport_class,transport_name", + [ + (transports.InstanceAdminGrpcTransport, "grpc"), + (transports.InstanceAdminGrpcAsyncIOTransport, "grpc_asyncio"), + (transports.InstanceAdminRestTransport, "rest"), + ], +) +def test_instance_admin_client_service_account_always_use_jwt( + transport_class, transport_name +): + with mock.patch.object( + service_account.Credentials, "with_always_use_jwt_access", create=True + ) as use_jwt: + creds = service_account.Credentials(None, None, None) + transport = transport_class(credentials=creds, always_use_jwt_access=True) + use_jwt.assert_called_once_with(True) + + with mock.patch.object( + service_account.Credentials, "with_always_use_jwt_access", create=True + ) as use_jwt: + creds = service_account.Credentials(None, None, None) + transport = transport_class(credentials=creds, always_use_jwt_access=False) + use_jwt.assert_not_called() + + +@pytest.mark.parametrize( + "client_class,transport_name", + [ + (InstanceAdminClient, "grpc"), + (InstanceAdminAsyncClient, "grpc_asyncio"), + (InstanceAdminClient, "rest"), + ], +) +def test_instance_admin_client_from_service_account_file(client_class, transport_name): + creds = ga_credentials.AnonymousCredentials() + with mock.patch.object( + service_account.Credentials, "from_service_account_file" + ) as factory: + factory.return_value = creds + client = client_class.from_service_account_file( + 
"dummy/file/path.json", transport=transport_name + ) + assert client.transport._credentials == creds + assert isinstance(client, client_class) + + client = client_class.from_service_account_json( + "dummy/file/path.json", transport=transport_name + ) + assert client.transport._credentials == creds + assert isinstance(client, client_class) + + assert client.transport._host == ( + "spanner.googleapis.com:443" + if transport_name in ["grpc", "grpc_asyncio"] + else "https://spanner.googleapis.com" + ) + + +def test_instance_admin_client_get_transport_class(): + transport = InstanceAdminClient.get_transport_class() + available_transports = [ + transports.InstanceAdminGrpcTransport, + transports.InstanceAdminRestTransport, + ] + assert transport in available_transports + + transport = InstanceAdminClient.get_transport_class("grpc") + assert transport == transports.InstanceAdminGrpcTransport + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + (InstanceAdminClient, transports.InstanceAdminGrpcTransport, "grpc"), + ( + InstanceAdminAsyncClient, + transports.InstanceAdminGrpcAsyncIOTransport, + "grpc_asyncio", + ), + (InstanceAdminClient, transports.InstanceAdminRestTransport, "rest"), + ], +) +@mock.patch.object( + InstanceAdminClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(InstanceAdminClient), +) +@mock.patch.object( + InstanceAdminAsyncClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(InstanceAdminAsyncClient), +) +def test_instance_admin_client_client_options( + client_class, transport_class, transport_name +): + # Check that if channel is provided we won't create a new one. + with mock.patch.object(InstanceAdminClient, "get_transport_class") as gtc: + transport = transport_class(credentials=ga_credentials.AnonymousCredentials()) + client = client_class(transport=transport) + gtc.assert_not_called() + + # Check that if channel is provided via str we will create a new one. + with mock.patch.object(InstanceAdminClient, "get_transport_class") as gtc: + client = client_class(transport=transport_name) + gtc.assert_called() + + # Check the case api_endpoint is provided. + options = client_options.ClientOptions(api_endpoint="squid.clam.whelk") + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(transport=transport_name, client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host="squid.clam.whelk", + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is + # "never". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}): + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is + # "always". 
+    with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}):
+        with mock.patch.object(transport_class, "__init__") as patched:
+            patched.return_value = None
+            client = client_class(transport=transport_name)
+            patched.assert_called_once_with(
+                credentials=None,
+                credentials_file=None,
+                host=client.DEFAULT_MTLS_ENDPOINT,
+                scopes=None,
+                client_cert_source_for_mtls=None,
+                quota_project_id=None,
+                client_info=transports.base.DEFAULT_CLIENT_INFO,
+                always_use_jwt_access=True,
+                api_audience=None,
+            )
+
+    # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT has
+    # unsupported value.
+    with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}):
+        with pytest.raises(MutualTLSChannelError) as excinfo:
+            client = client_class(transport=transport_name)
+        assert (
+            str(excinfo.value)
+            == "Environment variable `GOOGLE_API_USE_MTLS_ENDPOINT` must be `never`, `auto` or `always`"
+        )
+
+    # Check the case GOOGLE_API_USE_CLIENT_CERTIFICATE has unsupported value.
+    with mock.patch.dict(
+        os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"}
+    ):
+        with pytest.raises(ValueError) as excinfo:
+            client = client_class(transport=transport_name)
+        assert (
+            str(excinfo.value)
+            == "Environment variable `GOOGLE_API_USE_CLIENT_CERTIFICATE` must be either `true` or `false`"
+        )
+
+    # Check the case quota_project_id is provided
+    options = client_options.ClientOptions(quota_project_id="octopus")
+    with mock.patch.object(transport_class, "__init__") as patched:
+        patched.return_value = None
+        client = client_class(client_options=options, transport=transport_name)
+        patched.assert_called_once_with(
+            credentials=None,
+            credentials_file=None,
+            host=client._DEFAULT_ENDPOINT_TEMPLATE.format(
+                UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE
+            ),
+            scopes=None,
+            client_cert_source_for_mtls=None,
+            quota_project_id="octopus",
+            client_info=transports.base.DEFAULT_CLIENT_INFO,
+            always_use_jwt_access=True,
+            api_audience=None,
+        )
+    # Check the case api_audience is provided
+    options = client_options.ClientOptions(
+        api_audience="https://language.googleapis.com"
+    )
+    with mock.patch.object(transport_class, "__init__") as patched:
+        patched.return_value = None
+        client = client_class(client_options=options, transport=transport_name)
+        patched.assert_called_once_with(
+            credentials=None,
+            credentials_file=None,
+            host=client._DEFAULT_ENDPOINT_TEMPLATE.format(
+                UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE
+            ),
+            scopes=None,
+            client_cert_source_for_mtls=None,
+            quota_project_id=None,
+            client_info=transports.base.DEFAULT_CLIENT_INFO,
+            always_use_jwt_access=True,
+            api_audience="https://language.googleapis.com",
+        )
+
+
+@pytest.mark.parametrize(
+    "client_class,transport_class,transport_name,use_client_cert_env",
+    [
+        (InstanceAdminClient, transports.InstanceAdminGrpcTransport, "grpc", "true"),
+        (
+            InstanceAdminAsyncClient,
+            transports.InstanceAdminGrpcAsyncIOTransport,
+            "grpc_asyncio",
+            "true",
+        ),
+        (InstanceAdminClient, transports.InstanceAdminGrpcTransport, "grpc", "false"),
+        (
+            InstanceAdminAsyncClient,
+            transports.InstanceAdminGrpcAsyncIOTransport,
+            "grpc_asyncio",
+            "false",
+        ),
+        (InstanceAdminClient, transports.InstanceAdminRestTransport, "rest", "true"),
+        (InstanceAdminClient, transports.InstanceAdminRestTransport, "rest", "false"),
+    ],
+)
+@mock.patch.object(
+    InstanceAdminClient,
+    "_DEFAULT_ENDPOINT_TEMPLATE",
+    modify_default_endpoint_template(InstanceAdminClient),
+)
+@mock.patch.object(
+    InstanceAdminAsyncClient,
+ "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(InstanceAdminAsyncClient), +) +@mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "auto"}) +def test_instance_admin_client_mtls_env_auto( + client_class, transport_class, transport_name, use_client_cert_env +): + # This tests the endpoint autoswitch behavior. Endpoint is autoswitched to the default + # mtls endpoint, if GOOGLE_API_USE_CLIENT_CERTIFICATE is "true" and client cert exists. + + # Check the case client_cert_source is provided. Whether client cert is used depends on + # GOOGLE_API_USE_CLIENT_CERTIFICATE value. + with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + options = client_options.ClientOptions( + client_cert_source=client_cert_source_callback + ) + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options, transport=transport_name) + + if use_client_cert_env == "false": + expected_client_cert_source = None + expected_host = client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ) + else: + expected_client_cert_source = client_cert_source_callback + expected_host = client.DEFAULT_MTLS_ENDPOINT + + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=expected_host, + scopes=None, + client_cert_source_for_mtls=expected_client_cert_source, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) + + # Check the case ADC client cert is provided. Whether client cert is used depends on + # GOOGLE_API_USE_CLIENT_CERTIFICATE value. + with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + with mock.patch.object(transport_class, "__init__") as patched: + with mock.patch( + "google.auth.transport.mtls.has_default_client_cert_source", + return_value=True, + ): + with mock.patch( + "google.auth.transport.mtls.default_client_cert_source", + return_value=client_cert_source_callback, + ): + if use_client_cert_env == "false": + expected_host = client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ) + expected_client_cert_source = None + else: + expected_host = client.DEFAULT_MTLS_ENDPOINT + expected_client_cert_source = client_cert_source_callback + + patched.return_value = None + client = client_class(transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=expected_host, + scopes=None, + client_cert_source_for_mtls=expected_client_cert_source, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) + + # Check the case client_cert_source and ADC client cert are not provided. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + with mock.patch.object(transport_class, "__init__") as patched: + with mock.patch( + "google.auth.transport.mtls.has_default_client_cert_source", + return_value=False, + ): + patched.return_value = None + client = client_class(transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) + + +@pytest.mark.parametrize( + "client_class", [InstanceAdminClient, InstanceAdminAsyncClient] +) +@mock.patch.object( + InstanceAdminClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(InstanceAdminClient), +) +@mock.patch.object( + InstanceAdminAsyncClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(InstanceAdminAsyncClient), +) +def test_instance_admin_client_get_mtls_endpoint_and_cert_source(client_class): + mock_client_cert_source = mock.Mock() + + # Test the case GOOGLE_API_USE_CLIENT_CERTIFICATE is "true". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "true"}): + mock_api_endpoint = "foo" + options = client_options.ClientOptions( + client_cert_source=mock_client_cert_source, api_endpoint=mock_api_endpoint + ) + api_endpoint, cert_source = client_class.get_mtls_endpoint_and_cert_source( + options + ) + assert api_endpoint == mock_api_endpoint + assert cert_source == mock_client_cert_source + + # Test the case GOOGLE_API_USE_CLIENT_CERTIFICATE is "false". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "false"}): + mock_client_cert_source = mock.Mock() + mock_api_endpoint = "foo" + options = client_options.ClientOptions( + client_cert_source=mock_client_cert_source, api_endpoint=mock_api_endpoint + ) + api_endpoint, cert_source = client_class.get_mtls_endpoint_and_cert_source( + options + ) + assert api_endpoint == mock_api_endpoint + assert cert_source is None + + # Test the case GOOGLE_API_USE_MTLS_ENDPOINT is "never". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}): + api_endpoint, cert_source = client_class.get_mtls_endpoint_and_cert_source() + assert api_endpoint == client_class.DEFAULT_ENDPOINT + assert cert_source is None + + # Test the case GOOGLE_API_USE_MTLS_ENDPOINT is "always". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}): + api_endpoint, cert_source = client_class.get_mtls_endpoint_and_cert_source() + assert api_endpoint == client_class.DEFAULT_MTLS_ENDPOINT + assert cert_source is None + + # Test the case GOOGLE_API_USE_MTLS_ENDPOINT is "auto" and default cert doesn't exist. + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "true"}): + with mock.patch( + "google.auth.transport.mtls.has_default_client_cert_source", + return_value=False, + ): + api_endpoint, cert_source = client_class.get_mtls_endpoint_and_cert_source() + assert api_endpoint == client_class.DEFAULT_ENDPOINT + assert cert_source is None + + # Test the case GOOGLE_API_USE_MTLS_ENDPOINT is "auto" and default cert exists. 
+ with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "true"}): + with mock.patch( + "google.auth.transport.mtls.has_default_client_cert_source", + return_value=True, + ): + with mock.patch( + "google.auth.transport.mtls.default_client_cert_source", + return_value=mock_client_cert_source, + ): + ( + api_endpoint, + cert_source, + ) = client_class.get_mtls_endpoint_and_cert_source() + assert api_endpoint == client_class.DEFAULT_MTLS_ENDPOINT + assert cert_source == mock_client_cert_source + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT has + # unsupported value. + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}): + with pytest.raises(MutualTLSChannelError) as excinfo: + client_class.get_mtls_endpoint_and_cert_source() + + assert ( + str(excinfo.value) + == "Environment variable `GOOGLE_API_USE_MTLS_ENDPOINT` must be `never`, `auto` or `always`" + ) + + # Check the case GOOGLE_API_USE_CLIENT_CERTIFICATE has unsupported value. + with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"} + ): + with pytest.raises(ValueError) as excinfo: + client_class.get_mtls_endpoint_and_cert_source() + + assert ( + str(excinfo.value) + == "Environment variable `GOOGLE_API_USE_CLIENT_CERTIFICATE` must be either `true` or `false`" + ) + + +@pytest.mark.parametrize( + "client_class", [InstanceAdminClient, InstanceAdminAsyncClient] +) +@mock.patch.object( + InstanceAdminClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(InstanceAdminClient), +) +@mock.patch.object( + InstanceAdminAsyncClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(InstanceAdminAsyncClient), +) +def test_instance_admin_client_client_api_endpoint(client_class): + mock_client_cert_source = client_cert_source_callback + api_override = "foo.com" + default_universe = InstanceAdminClient._DEFAULT_UNIVERSE + default_endpoint = InstanceAdminClient._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=default_universe + ) + mock_universe = "bar.com" + mock_endpoint = InstanceAdminClient._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=mock_universe + ) + + # If ClientOptions.api_endpoint is set and GOOGLE_API_USE_CLIENT_CERTIFICATE="true", + # use ClientOptions.api_endpoint as the api endpoint regardless. + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "true"}): + with mock.patch( + "google.auth.transport.requests.AuthorizedSession.configure_mtls_channel" + ): + options = client_options.ClientOptions( + client_cert_source=mock_client_cert_source, api_endpoint=api_override + ) + client = client_class( + client_options=options, + credentials=ga_credentials.AnonymousCredentials(), + ) + assert client.api_endpoint == api_override + + # If ClientOptions.api_endpoint is not set and GOOGLE_API_USE_MTLS_ENDPOINT="never", + # use the _DEFAULT_ENDPOINT_TEMPLATE populated with GDU as the api endpoint. + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}): + client = client_class(credentials=ga_credentials.AnonymousCredentials()) + assert client.api_endpoint == default_endpoint + + # If ClientOptions.api_endpoint is not set and GOOGLE_API_USE_MTLS_ENDPOINT="always", + # use the DEFAULT_MTLS_ENDPOINT as the api endpoint. 
+ with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}): + client = client_class(credentials=ga_credentials.AnonymousCredentials()) + assert client.api_endpoint == client_class.DEFAULT_MTLS_ENDPOINT + + # If ClientOptions.api_endpoint is not set, GOOGLE_API_USE_MTLS_ENDPOINT="auto" (default), + # GOOGLE_API_USE_CLIENT_CERTIFICATE="false" (default), default cert source doesn't exist, + # and ClientOptions.universe_domain="bar.com", + # use the _DEFAULT_ENDPOINT_TEMPLATE populated with universe domain as the api endpoint. + options = client_options.ClientOptions() + universe_exists = hasattr(options, "universe_domain") + if universe_exists: + options = client_options.ClientOptions(universe_domain=mock_universe) + client = client_class( + client_options=options, credentials=ga_credentials.AnonymousCredentials() + ) + else: + client = client_class( + client_options=options, credentials=ga_credentials.AnonymousCredentials() + ) + assert client.api_endpoint == ( + mock_endpoint if universe_exists else default_endpoint + ) + assert client.universe_domain == ( + mock_universe if universe_exists else default_universe + ) + + # If ClientOptions does not have a universe domain attribute and GOOGLE_API_USE_MTLS_ENDPOINT="never", + # use the _DEFAULT_ENDPOINT_TEMPLATE populated with GDU as the api endpoint. + options = client_options.ClientOptions() + if hasattr(options, "universe_domain"): + delattr(options, "universe_domain") + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}): + client = client_class( + client_options=options, credentials=ga_credentials.AnonymousCredentials() + ) + assert client.api_endpoint == default_endpoint + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + (InstanceAdminClient, transports.InstanceAdminGrpcTransport, "grpc"), + ( + InstanceAdminAsyncClient, + transports.InstanceAdminGrpcAsyncIOTransport, + "grpc_asyncio", + ), + (InstanceAdminClient, transports.InstanceAdminRestTransport, "rest"), + ], +) +def test_instance_admin_client_client_options_scopes( + client_class, transport_class, transport_name +): + # Check the case scopes are provided. + options = client_options.ClientOptions( + scopes=["1", "2"], + ) + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options, transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=["1", "2"], + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name,grpc_helpers", + [ + ( + InstanceAdminClient, + transports.InstanceAdminGrpcTransport, + "grpc", + grpc_helpers, + ), + ( + InstanceAdminAsyncClient, + transports.InstanceAdminGrpcAsyncIOTransport, + "grpc_asyncio", + grpc_helpers_async, + ), + (InstanceAdminClient, transports.InstanceAdminRestTransport, "rest", None), + ], +) +def test_instance_admin_client_client_options_credentials_file( + client_class, transport_class, transport_name, grpc_helpers +): + # Check the case credentials file is provided. 
+ options = client_options.ClientOptions(credentials_file="credentials.json") + + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options, transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file="credentials.json", + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) + + +def test_instance_admin_client_client_options_from_dict(): + with mock.patch( + "google.cloud.spanner_admin_instance_v1.services.instance_admin.transports.InstanceAdminGrpcTransport.__init__" + ) as grpc_transport: + grpc_transport.return_value = None + client = InstanceAdminClient( + client_options={"api_endpoint": "squid.clam.whelk"} + ) + grpc_transport.assert_called_once_with( + credentials=None, + credentials_file=None, + host="squid.clam.whelk", + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name,grpc_helpers", + [ + ( + InstanceAdminClient, + transports.InstanceAdminGrpcTransport, + "grpc", + grpc_helpers, + ), + ( + InstanceAdminAsyncClient, + transports.InstanceAdminGrpcAsyncIOTransport, + "grpc_asyncio", + grpc_helpers_async, + ), + ], +) +def test_instance_admin_client_create_channel_credentials_file( + client_class, transport_class, transport_name, grpc_helpers +): + # Check the case credentials file is provided. + options = client_options.ClientOptions(credentials_file="credentials.json") + + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options, transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file="credentials.json", + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) + + # test that the credentials from file are saved and used as the credentials. 
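+    # load_credentials_from_file is mocked to return a distinct credentials
+    # object; the channel must be created with those file credentials rather
+    # than the ADC default.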
+ with mock.patch.object( + google.auth, "load_credentials_from_file", autospec=True + ) as load_creds, mock.patch.object( + google.auth, "default", autospec=True + ) as adc, mock.patch.object( + grpc_helpers, "create_channel" + ) as create_channel: + creds = ga_credentials.AnonymousCredentials() + file_creds = ga_credentials.AnonymousCredentials() + load_creds.return_value = (file_creds, None) + adc.return_value = (creds, None) + client = client_class(client_options=options, transport=transport_name) + create_channel.assert_called_with( + "spanner.googleapis.com:443", + credentials=file_creds, + credentials_file=None, + quota_project_id=None, + default_scopes=( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.admin", + ), + scopes=None, + default_host="spanner.googleapis.com", + ssl_credentials=None, + options=[ + ("grpc.max_send_message_length", -1), + ("grpc.max_receive_message_length", -1), + ], + ) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.ListInstanceConfigsRequest, + dict, + ], +) +def test_list_instance_configs(request_type, transport: str = "grpc"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_configs), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = spanner_instance_admin.ListInstanceConfigsResponse( + next_page_token="next_page_token_value", + ) + response = client.list_instance_configs(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.ListInstanceConfigsRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListInstanceConfigsPager) + assert response.next_page_token == "next_page_token_value" + + +def test_list_instance_configs_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner_instance_admin.ListInstanceConfigsRequest( + parent="parent_value", + page_token="page_token_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_configs), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. 
+ ) + client.list_instance_configs(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_instance_admin.ListInstanceConfigsRequest( + parent="parent_value", + page_token="page_token_value", + ) + + +def test_list_instance_configs_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.list_instance_configs + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.list_instance_configs + ] = mock_rpc + request = {} + client.list_instance_configs(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.list_instance_configs(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_instance_configs_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.list_instance_configs + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.list_instance_configs + ] = mock_rpc + + request = {} + await client.list_instance_configs(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.list_instance_configs(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_instance_configs_async( + transport: str = "grpc_asyncio", + request_type=spanner_instance_admin.ListInstanceConfigsRequest, +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_configs), "__call__" + ) as call: + # Designate an appropriate return value for the call. 
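+        # FakeUnaryUnaryCall wraps the response in an awaitable, standing in
+        # for a real unary-unary gRPC call on the asyncio transport.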
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.ListInstanceConfigsResponse( + next_page_token="next_page_token_value", + ) + ) + response = await client.list_instance_configs(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.ListInstanceConfigsRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListInstanceConfigsAsyncPager) + assert response.next_page_token == "next_page_token_value" + + +@pytest.mark.asyncio +async def test_list_instance_configs_async_from_dict(): + await test_list_instance_configs_async(request_type=dict) + + +def test_list_instance_configs_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.ListInstanceConfigsRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_configs), "__call__" + ) as call: + call.return_value = spanner_instance_admin.ListInstanceConfigsResponse() + client.list_instance_configs(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_list_instance_configs_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.ListInstanceConfigsRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_configs), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.ListInstanceConfigsResponse() + ) + await client.list_instance_configs(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +def test_list_instance_configs_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_configs), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = spanner_instance_admin.ListInstanceConfigsResponse() + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.list_instance_configs( + parent="parent_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. 
+    assert len(call.mock_calls) == 1
+    _, args, _ = call.mock_calls[0]
+    arg = args[0].parent
+    mock_val = "parent_value"
+    assert arg == mock_val
+
+
+def test_list_instance_configs_flattened_error():
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.list_instance_configs(
+            spanner_instance_admin.ListInstanceConfigsRequest(),
+            parent="parent_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_list_instance_configs_flattened_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.list_instance_configs), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            spanner_instance_admin.ListInstanceConfigsResponse()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.list_instance_configs(
+            parent="parent_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_list_instance_configs_flattened_error_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.list_instance_configs(
+            spanner_instance_admin.ListInstanceConfigsRequest(),
+            parent="parent_value",
+        )
+
+
+def test_list_instance_configs_pager(transport_name: str = "grpc"):
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport_name,
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.list_instance_configs), "__call__"
+    ) as call:
+        # Set the response to a series of pages.
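+        # Three tokened pages plus a final page yield six items in total; the
+        # trailing RuntimeError fails the test if the pager over-fetches.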
+ call.side_effect = ( + spanner_instance_admin.ListInstanceConfigsResponse( + instance_configs=[ + spanner_instance_admin.InstanceConfig(), + spanner_instance_admin.InstanceConfig(), + spanner_instance_admin.InstanceConfig(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstanceConfigsResponse( + instance_configs=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstanceConfigsResponse( + instance_configs=[ + spanner_instance_admin.InstanceConfig(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstanceConfigsResponse( + instance_configs=[ + spanner_instance_admin.InstanceConfig(), + spanner_instance_admin.InstanceConfig(), + ], + ), + RuntimeError, + ) + + expected_metadata = () + retry = retries.Retry() + timeout = 5 + expected_metadata = tuple(expected_metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)), + ) + pager = client.list_instance_configs(request={}, retry=retry, timeout=timeout) + + assert pager._metadata == expected_metadata + assert pager._retry == retry + assert pager._timeout == timeout + + results = list(pager) + assert len(results) == 6 + assert all( + isinstance(i, spanner_instance_admin.InstanceConfig) for i in results + ) + + +def test_list_instance_configs_pages(transport_name: str = "grpc"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport_name, + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_configs), "__call__" + ) as call: + # Set the response to a series of pages. + call.side_effect = ( + spanner_instance_admin.ListInstanceConfigsResponse( + instance_configs=[ + spanner_instance_admin.InstanceConfig(), + spanner_instance_admin.InstanceConfig(), + spanner_instance_admin.InstanceConfig(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstanceConfigsResponse( + instance_configs=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstanceConfigsResponse( + instance_configs=[ + spanner_instance_admin.InstanceConfig(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstanceConfigsResponse( + instance_configs=[ + spanner_instance_admin.InstanceConfig(), + spanner_instance_admin.InstanceConfig(), + ], + ), + RuntimeError, + ) + pages = list(client.list_instance_configs(request={}).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.asyncio +async def test_list_instance_configs_async_pager(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_configs), + "__call__", + new_callable=mock.AsyncMock, + ) as call: + # Set the response to a series of pages. 
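+        # Same paged fixture as the synchronous pager test; the async pager
+        # must surface the identical six InstanceConfig items.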
+ call.side_effect = ( + spanner_instance_admin.ListInstanceConfigsResponse( + instance_configs=[ + spanner_instance_admin.InstanceConfig(), + spanner_instance_admin.InstanceConfig(), + spanner_instance_admin.InstanceConfig(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstanceConfigsResponse( + instance_configs=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstanceConfigsResponse( + instance_configs=[ + spanner_instance_admin.InstanceConfig(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstanceConfigsResponse( + instance_configs=[ + spanner_instance_admin.InstanceConfig(), + spanner_instance_admin.InstanceConfig(), + ], + ), + RuntimeError, + ) + async_pager = await client.list_instance_configs( + request={}, + ) + assert async_pager.next_page_token == "abc" + responses = [] + async for response in async_pager: # pragma: no branch + responses.append(response) + + assert len(responses) == 6 + assert all( + isinstance(i, spanner_instance_admin.InstanceConfig) for i in responses + ) + + +@pytest.mark.asyncio +async def test_list_instance_configs_async_pages(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_configs), + "__call__", + new_callable=mock.AsyncMock, + ) as call: + # Set the response to a series of pages. + call.side_effect = ( + spanner_instance_admin.ListInstanceConfigsResponse( + instance_configs=[ + spanner_instance_admin.InstanceConfig(), + spanner_instance_admin.InstanceConfig(), + spanner_instance_admin.InstanceConfig(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstanceConfigsResponse( + instance_configs=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstanceConfigsResponse( + instance_configs=[ + spanner_instance_admin.InstanceConfig(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstanceConfigsResponse( + instance_configs=[ + spanner_instance_admin.InstanceConfig(), + spanner_instance_admin.InstanceConfig(), + ], + ), + RuntimeError, + ) + pages = [] + # Workaround issue in python 3.9 related to code coverage by adding `# pragma: no branch` + # See https://github.com/googleapis/gapic-generator-python/pull/1174#issuecomment-1025132372 + async for page_ in ( # pragma: no branch + await client.list_instance_configs(request={}) + ).pages: + pages.append(page_) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.GetInstanceConfigRequest, + dict, + ], +) +def test_get_instance_config(request_type, transport: str = "grpc"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.get_instance_config), "__call__" + ) as call: + # Designate an appropriate return value for the call. 
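+        # Populate every scalar field of the proto so the assertions below
+        # can check that each value survives the client layer unchanged.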
+ call.return_value = spanner_instance_admin.InstanceConfig( + name="name_value", + display_name="display_name_value", + config_type=spanner_instance_admin.InstanceConfig.Type.GOOGLE_MANAGED, + base_config="base_config_value", + etag="etag_value", + leader_options=["leader_options_value"], + reconciling=True, + state=spanner_instance_admin.InstanceConfig.State.CREATING, + free_instance_availability=spanner_instance_admin.InstanceConfig.FreeInstanceAvailability.AVAILABLE, + quorum_type=spanner_instance_admin.InstanceConfig.QuorumType.REGION, + storage_limit_per_processing_unit=3540, + ) + response = client.get_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.GetInstanceConfigRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, spanner_instance_admin.InstanceConfig) + assert response.name == "name_value" + assert response.display_name == "display_name_value" + assert ( + response.config_type + == spanner_instance_admin.InstanceConfig.Type.GOOGLE_MANAGED + ) + assert response.base_config == "base_config_value" + assert response.etag == "etag_value" + assert response.leader_options == ["leader_options_value"] + assert response.reconciling is True + assert response.state == spanner_instance_admin.InstanceConfig.State.CREATING + assert ( + response.free_instance_availability + == spanner_instance_admin.InstanceConfig.FreeInstanceAvailability.AVAILABLE + ) + assert ( + response.quorum_type == spanner_instance_admin.InstanceConfig.QuorumType.REGION + ) + assert response.storage_limit_per_processing_unit == 3540 + + +def test_get_instance_config_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner_instance_admin.GetInstanceConfigRequest( + name="name_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.get_instance_config), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. 
+ ) + client.get_instance_config(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_instance_admin.GetInstanceConfigRequest( + name="name_value", + ) + + +def test_get_instance_config_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.get_instance_config in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.get_instance_config + ] = mock_rpc + request = {} + client.get_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.get_instance_config(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_get_instance_config_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.get_instance_config + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.get_instance_config + ] = mock_rpc + + request = {} + await client.get_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.get_instance_config(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_get_instance_config_async( + transport: str = "grpc_asyncio", + request_type=spanner_instance_admin.GetInstanceConfigRequest, +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.get_instance_config), "__call__" + ) as call: + # Designate an appropriate return value for the call. 
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.InstanceConfig( + name="name_value", + display_name="display_name_value", + config_type=spanner_instance_admin.InstanceConfig.Type.GOOGLE_MANAGED, + base_config="base_config_value", + etag="etag_value", + leader_options=["leader_options_value"], + reconciling=True, + state=spanner_instance_admin.InstanceConfig.State.CREATING, + free_instance_availability=spanner_instance_admin.InstanceConfig.FreeInstanceAvailability.AVAILABLE, + quorum_type=spanner_instance_admin.InstanceConfig.QuorumType.REGION, + storage_limit_per_processing_unit=3540, + ) + ) + response = await client.get_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.GetInstanceConfigRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, spanner_instance_admin.InstanceConfig) + assert response.name == "name_value" + assert response.display_name == "display_name_value" + assert ( + response.config_type + == spanner_instance_admin.InstanceConfig.Type.GOOGLE_MANAGED + ) + assert response.base_config == "base_config_value" + assert response.etag == "etag_value" + assert response.leader_options == ["leader_options_value"] + assert response.reconciling is True + assert response.state == spanner_instance_admin.InstanceConfig.State.CREATING + assert ( + response.free_instance_availability + == spanner_instance_admin.InstanceConfig.FreeInstanceAvailability.AVAILABLE + ) + assert ( + response.quorum_type == spanner_instance_admin.InstanceConfig.QuorumType.REGION + ) + assert response.storage_limit_per_processing_unit == 3540 + + +@pytest.mark.asyncio +async def test_get_instance_config_async_from_dict(): + await test_get_instance_config_async(request_type=dict) + + +def test_get_instance_config_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.GetInstanceConfigRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.get_instance_config), "__call__" + ) as call: + call.return_value = spanner_instance_admin.InstanceConfig() + client.get_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_get_instance_config_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.GetInstanceConfigRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. 
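+    # The assertions below check that request.name is echoed into the
+    # x-goog-request-params metadata entry used for request routing.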
+    with mock.patch.object(
+        type(client.transport.get_instance_config), "__call__"
+    ) as call:
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            spanner_instance_admin.InstanceConfig()
+        )
+        await client.get_instance_config(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        assert args[0] == request
+
+        # Establish that the field header was sent.
+        _, _, kw = call.mock_calls[0]
+        assert (
+            "x-goog-request-params",
+            "name=name_value",
+        ) in kw["metadata"]
+
+
+def test_get_instance_config_flattened():
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.get_instance_config), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = spanner_instance_admin.InstanceConfig()
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        client.get_instance_config(
+            name="name_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].name
+        mock_val = "name_value"
+        assert arg == mock_val
+
+
+def test_get_instance_config_flattened_error():
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.get_instance_config(
+            spanner_instance_admin.GetInstanceConfigRequest(),
+            name="name_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_get_instance_config_flattened_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.get_instance_config), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            spanner_instance_admin.InstanceConfig()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.get_instance_config(
+            name="name_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].name
+        mock_val = "name_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_get_instance_config_flattened_error_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+ with pytest.raises(ValueError): + await client.get_instance_config( + spanner_instance_admin.GetInstanceConfigRequest(), + name="name_value", + ) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.CreateInstanceConfigRequest, + dict, + ], +) +def test_create_instance_config(request_type, transport: str = "grpc"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.create_instance_config), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/spam") + response = client.create_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.CreateInstanceConfigRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +def test_create_instance_config_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner_instance_admin.CreateInstanceConfigRequest( + parent="parent_value", + instance_config_id="instance_config_id_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.create_instance_config), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.create_instance_config(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_instance_admin.CreateInstanceConfigRequest( + parent="parent_value", + instance_config_id="instance_config_id_value", + ) + + +def test_create_instance_config_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.create_instance_config + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.create_instance_config + ] = mock_rpc + request = {} + client.create_instance_config(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. + # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.create_instance_config(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_create_instance_config_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.create_instance_config + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.create_instance_config + ] = mock_rpc + + request = {} + await client.create_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. + # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + await client.create_instance_config(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_create_instance_config_async( + transport: str = "grpc_asyncio", + request_type=spanner_instance_admin.CreateInstanceConfigRequest, +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.create_instance_config), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + response = await client.create_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.CreateInstanceConfigRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +@pytest.mark.asyncio +async def test_create_instance_config_async_from_dict(): + await test_create_instance_config_async(request_type=dict) + + +def test_create_instance_config_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. 
+ request = spanner_instance_admin.CreateInstanceConfigRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.create_instance_config), "__call__" + ) as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.create_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_create_instance_config_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.CreateInstanceConfigRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.create_instance_config), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/op") + ) + await client.create_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +def test_create_instance_config_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.create_instance_config), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/op") + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.create_instance_config( + parent="parent_value", + instance_config=spanner_instance_admin.InstanceConfig(name="name_value"), + instance_config_id="instance_config_id_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + arg = args[0].parent + mock_val = "parent_value" + assert arg == mock_val + arg = args[0].instance_config + mock_val = spanner_instance_admin.InstanceConfig(name="name_value") + assert arg == mock_val + arg = args[0].instance_config_id + mock_val = "instance_config_id_value" + assert arg == mock_val + + +def test_create_instance_config_flattened_error(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+    with pytest.raises(ValueError):
+        client.create_instance_config(
+            spanner_instance_admin.CreateInstanceConfigRequest(),
+            parent="parent_value",
+            instance_config=spanner_instance_admin.InstanceConfig(name="name_value"),
+            instance_config_id="instance_config_id_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_create_instance_config_flattened_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.create_instance_config), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            operations_pb2.Operation(name="operations/spam")
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.create_instance_config(
+            parent="parent_value",
+            instance_config=spanner_instance_admin.InstanceConfig(name="name_value"),
+            instance_config_id="instance_config_id_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+        arg = args[0].instance_config
+        mock_val = spanner_instance_admin.InstanceConfig(name="name_value")
+        assert arg == mock_val
+        arg = args[0].instance_config_id
+        mock_val = "instance_config_id_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_create_instance_config_flattened_error_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.create_instance_config(
+            spanner_instance_admin.CreateInstanceConfigRequest(),
+            parent="parent_value",
+            instance_config=spanner_instance_admin.InstanceConfig(name="name_value"),
+            instance_config_id="instance_config_id_value",
+        )
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner_instance_admin.UpdateInstanceConfigRequest,
+        dict,
+    ],
+)
+def test_update_instance_config(request_type, transport: str = "grpc"):
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.update_instance_config), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = operations_pb2.Operation(name="operations/spam")
+        response = client.update_instance_config(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        request = spanner_instance_admin.UpdateInstanceConfigRequest()
+        assert args[0] == request
+
+        # Establish that the response is the type that we expect.
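+    # Long-running RPCs surface as a google.api_core future rather than the
+    # raw Operation proto.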
+ assert isinstance(response, future.Future) + + +def test_update_instance_config_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner_instance_admin.UpdateInstanceConfigRequest() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.update_instance_config), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.update_instance_config(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_instance_admin.UpdateInstanceConfigRequest() + + +def test_update_instance_config_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.update_instance_config + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.update_instance_config + ] = mock_rpc + request = {} + client.update_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. 
+ # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.update_instance_config(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_update_instance_config_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.update_instance_config + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.update_instance_config + ] = mock_rpc + + request = {} + await client.update_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. + # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + await client.update_instance_config(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_update_instance_config_async( + transport: str = "grpc_asyncio", + request_type=spanner_instance_admin.UpdateInstanceConfigRequest, +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.update_instance_config), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + response = await client.update_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.UpdateInstanceConfigRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +@pytest.mark.asyncio +async def test_update_instance_config_async_from_dict(): + await test_update_instance_config_async(request_type=dict) + + +def test_update_instance_config_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.UpdateInstanceConfigRequest() + + request.instance_config.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. 
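+    # For update RPCs the routing key is nested: instance_config.name must be
+    # flattened into the x-goog-request-params header, as asserted below.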
+ with mock.patch.object( + type(client.transport.update_instance_config), "__call__" + ) as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.update_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "instance_config.name=name_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_update_instance_config_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.UpdateInstanceConfigRequest() + + request.instance_config.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.update_instance_config), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/op") + ) + await client.update_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "instance_config.name=name_value", + ) in kw["metadata"] + + +def test_update_instance_config_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.update_instance_config), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/op") + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.update_instance_config( + instance_config=spanner_instance_admin.InstanceConfig(name="name_value"), + update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]), + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + arg = args[0].instance_config + mock_val = spanner_instance_admin.InstanceConfig(name="name_value") + assert arg == mock_val + arg = args[0].update_mask + mock_val = field_mask_pb2.FieldMask(paths=["paths_value"]) + assert arg == mock_val + + +def test_update_instance_config_flattened_error(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.update_instance_config( + spanner_instance_admin.UpdateInstanceConfigRequest(), + instance_config=spanner_instance_admin.InstanceConfig(name="name_value"), + update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]), + ) + + +@pytest.mark.asyncio +async def test_update_instance_config_flattened_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. 
+    with mock.patch.object(
+        type(client.transport.update_instance_config), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            operations_pb2.Operation(name="operations/spam")
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.update_instance_config(
+            instance_config=spanner_instance_admin.InstanceConfig(name="name_value"),
+            update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].instance_config
+        mock_val = spanner_instance_admin.InstanceConfig(name="name_value")
+        assert arg == mock_val
+        arg = args[0].update_mask
+        mock_val = field_mask_pb2.FieldMask(paths=["paths_value"])
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_update_instance_config_flattened_error_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.update_instance_config(
+            spanner_instance_admin.UpdateInstanceConfigRequest(),
+            instance_config=spanner_instance_admin.InstanceConfig(name="name_value"),
+            update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
+        )
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner_instance_admin.DeleteInstanceConfigRequest,
+        dict,
+    ],
+)
+def test_delete_instance_config(request_type, transport: str = "grpc"):
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.delete_instance_config), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = None
+        response = client.delete_instance_config(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        request = spanner_instance_admin.DeleteInstanceConfigRequest()
+        assert args[0] == request
+
+        # Establish that the response is the type that we expect.
+        assert response is None
+
+
+def test_delete_instance_config_non_empty_request_with_auto_populated_field():
+    # This test is a coverage failsafe to make sure that UUID4 fields are
+    # automatically populated, according to AIP-4235, with non-empty requests.
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Populate all string fields in the request which are not UUID4
+    # since we want to check that UUID4 are populated automatically
+    # if they meet the requirements of AIP 4235.
+    request = spanner_instance_admin.DeleteInstanceConfigRequest(
+        name="name_value",
+        etag="etag_value",
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object( + type(client.transport.delete_instance_config), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.delete_instance_config(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_instance_admin.DeleteInstanceConfigRequest( + name="name_value", + etag="etag_value", + ) + + +def test_delete_instance_config_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.delete_instance_config + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.delete_instance_config + ] = mock_rpc + request = {} + client.delete_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.delete_instance_config(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_delete_instance_config_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.delete_instance_config + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.delete_instance_config + ] = mock_rpc + + request = {} + await client.delete_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.delete_instance_config(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_delete_instance_config_async( + transport: str = "grpc_asyncio", + request_type=spanner_instance_admin.DeleteInstanceConfigRequest, +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. 
+ with mock.patch.object( + type(client.transport.delete_instance_config), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + response = await client.delete_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.DeleteInstanceConfigRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert response is None + + +@pytest.mark.asyncio +async def test_delete_instance_config_async_from_dict(): + await test_delete_instance_config_async(request_type=dict) + + +def test_delete_instance_config_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.DeleteInstanceConfigRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.delete_instance_config), "__call__" + ) as call: + call.return_value = None + client.delete_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_delete_instance_config_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.DeleteInstanceConfigRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.delete_instance_config), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + await client.delete_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +def test_delete_instance_config_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.delete_instance_config), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = None + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.delete_instance_config( + name="name_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. 
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].name
+        mock_val = "name_value"
+        assert arg == mock_val
+
+
+def test_delete_instance_config_flattened_error():
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.delete_instance_config(
+            spanner_instance_admin.DeleteInstanceConfigRequest(),
+            name="name_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_delete_instance_config_flattened_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.delete_instance_config), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.delete_instance_config(
+            name="name_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].name
+        mock_val = "name_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_delete_instance_config_flattened_error_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.delete_instance_config(
+            spanner_instance_admin.DeleteInstanceConfigRequest(),
+            name="name_value",
+        )
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner_instance_admin.ListInstanceConfigOperationsRequest,
+        dict,
+    ],
+)
+def test_list_instance_config_operations(request_type, transport: str = "grpc"):
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.list_instance_config_operations), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = spanner_instance_admin.ListInstanceConfigOperationsResponse(
+            next_page_token="next_page_token_value",
+        )
+        response = client.list_instance_config_operations(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        request = spanner_instance_admin.ListInstanceConfigOperationsRequest()
+        assert args[0] == request
+
+    # Establish that the response is the type that we expect.
+    assert isinstance(response, pagers.ListInstanceConfigOperationsPager)
+    assert response.next_page_token == "next_page_token_value"
+
+
+def test_list_instance_config_operations_non_empty_request_with_auto_populated_field():
+    # This test is a coverage failsafe to make sure that UUID4 fields are
+    # automatically populated, according to AIP-4235, with non-empty requests.
+ client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner_instance_admin.ListInstanceConfigOperationsRequest( + parent="parent_value", + filter="filter_value", + page_token="page_token_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_config_operations), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.list_instance_config_operations(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_instance_admin.ListInstanceConfigOperationsRequest( + parent="parent_value", + filter="filter_value", + page_token="page_token_value", + ) + + +def test_list_instance_config_operations_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.list_instance_config_operations + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.list_instance_config_operations + ] = mock_rpc + request = {} + client.list_instance_config_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.list_instance_config_operations(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_instance_config_operations_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.list_instance_config_operations + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.list_instance_config_operations + ] = mock_rpc + + request = {} + await client.list_instance_config_operations(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + await client.list_instance_config_operations(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_instance_config_operations_async( + transport: str = "grpc_asyncio", + request_type=spanner_instance_admin.ListInstanceConfigOperationsRequest, +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_config_operations), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.ListInstanceConfigOperationsResponse( + next_page_token="next_page_token_value", + ) + ) + response = await client.list_instance_config_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.ListInstanceConfigOperationsRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListInstanceConfigOperationsAsyncPager) + assert response.next_page_token == "next_page_token_value" + + +@pytest.mark.asyncio +async def test_list_instance_config_operations_async_from_dict(): + await test_list_instance_config_operations_async(request_type=dict) + + +def test_list_instance_config_operations_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.ListInstanceConfigOperationsRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_config_operations), "__call__" + ) as call: + call.return_value = ( + spanner_instance_admin.ListInstanceConfigOperationsResponse() + ) + client.list_instance_config_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_list_instance_config_operations_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.ListInstanceConfigOperationsRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. 
+    with mock.patch.object(
+        type(client.transport.list_instance_config_operations), "__call__"
+    ) as call:
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            spanner_instance_admin.ListInstanceConfigOperationsResponse()
+        )
+        await client.list_instance_config_operations(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        assert args[0] == request
+
+    # Establish that the field header was sent.
+    _, _, kw = call.mock_calls[0]
+    assert (
+        "x-goog-request-params",
+        "parent=parent_value",
+    ) in kw["metadata"]
+
+
+def test_list_instance_config_operations_flattened():
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.list_instance_config_operations), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = (
+            spanner_instance_admin.ListInstanceConfigOperationsResponse()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        client.list_instance_config_operations(
+            parent="parent_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+
+
+def test_list_instance_config_operations_flattened_error():
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.list_instance_config_operations(
+            spanner_instance_admin.ListInstanceConfigOperationsRequest(),
+            parent="parent_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_list_instance_config_operations_flattened_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.list_instance_config_operations), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            spanner_instance_admin.ListInstanceConfigOperationsResponse()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.list_instance_config_operations(
+            parent="parent_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_list_instance_config_operations_flattened_error_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+ with pytest.raises(ValueError): + await client.list_instance_config_operations( + spanner_instance_admin.ListInstanceConfigOperationsRequest(), + parent="parent_value", + ) + + +def test_list_instance_config_operations_pager(transport_name: str = "grpc"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport_name, + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_config_operations), "__call__" + ) as call: + # Set the response to a series of pages. + call.side_effect = ( + spanner_instance_admin.ListInstanceConfigOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstanceConfigOperationsResponse( + operations=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstanceConfigOperationsResponse( + operations=[ + operations_pb2.Operation(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstanceConfigOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + ), + RuntimeError, + ) + + expected_metadata = () + retry = retries.Retry() + timeout = 5 + expected_metadata = tuple(expected_metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)), + ) + pager = client.list_instance_config_operations( + request={}, retry=retry, timeout=timeout + ) + + assert pager._metadata == expected_metadata + assert pager._retry == retry + assert pager._timeout == timeout + + results = list(pager) + assert len(results) == 6 + assert all(isinstance(i, operations_pb2.Operation) for i in results) + + +def test_list_instance_config_operations_pages(transport_name: str = "grpc"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport_name, + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_config_operations), "__call__" + ) as call: + # Set the response to a series of pages. + call.side_effect = ( + spanner_instance_admin.ListInstanceConfigOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstanceConfigOperationsResponse( + operations=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstanceConfigOperationsResponse( + operations=[ + operations_pb2.Operation(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstanceConfigOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + ), + RuntimeError, + ) + pages = list(client.list_instance_config_operations(request={}).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.asyncio +async def test_list_instance_config_operations_async_pager(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_config_operations), + "__call__", + new_callable=mock.AsyncMock, + ) as call: + # Set the response to a series of pages. 
+ call.side_effect = ( + spanner_instance_admin.ListInstanceConfigOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstanceConfigOperationsResponse( + operations=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstanceConfigOperationsResponse( + operations=[ + operations_pb2.Operation(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstanceConfigOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + ), + RuntimeError, + ) + async_pager = await client.list_instance_config_operations( + request={}, + ) + assert async_pager.next_page_token == "abc" + responses = [] + async for response in async_pager: # pragma: no branch + responses.append(response) + + assert len(responses) == 6 + assert all(isinstance(i, operations_pb2.Operation) for i in responses) + + +@pytest.mark.asyncio +async def test_list_instance_config_operations_async_pages(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_config_operations), + "__call__", + new_callable=mock.AsyncMock, + ) as call: + # Set the response to a series of pages. + call.side_effect = ( + spanner_instance_admin.ListInstanceConfigOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstanceConfigOperationsResponse( + operations=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstanceConfigOperationsResponse( + operations=[ + operations_pb2.Operation(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstanceConfigOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + ), + RuntimeError, + ) + pages = [] + # Workaround issue in python 3.9 related to code coverage by adding `# pragma: no branch` + # See https://github.com/googleapis/gapic-generator-python/pull/1174#issuecomment-1025132372 + async for page_ in ( # pragma: no branch + await client.list_instance_config_operations(request={}) + ).pages: + pages.append(page_) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.ListInstancesRequest, + dict, + ], +) +def test_list_instances(request_type, transport: str = "grpc"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_instances), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = spanner_instance_admin.ListInstancesResponse( + next_page_token="next_page_token_value", + unreachable=["unreachable_value"], + ) + response = client.list_instances(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.ListInstancesRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListInstancesPager) + assert response.next_page_token == "next_page_token_value" + assert response.unreachable == ["unreachable_value"] + + +def test_list_instances_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner_instance_admin.ListInstancesRequest( + parent="parent_value", + page_token="page_token_value", + filter="filter_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_instances), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.list_instances(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_instance_admin.ListInstancesRequest( + parent="parent_value", + page_token="page_token_value", + filter="filter_value", + ) + + +def test_list_instances_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.list_instances in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.list_instances] = mock_rpc + request = {} + client.list_instances(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.list_instances(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_instances_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.list_instances + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.list_instances + ] = mock_rpc + + request = {} + await client.list_instances(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.list_instances(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_instances_async( + transport: str = "grpc_asyncio", + request_type=spanner_instance_admin.ListInstancesRequest, +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_instances), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.ListInstancesResponse( + next_page_token="next_page_token_value", + unreachable=["unreachable_value"], + ) + ) + response = await client.list_instances(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.ListInstancesRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListInstancesAsyncPager) + assert response.next_page_token == "next_page_token_value" + assert response.unreachable == ["unreachable_value"] + + +@pytest.mark.asyncio +async def test_list_instances_async_from_dict(): + await test_list_instances_async(request_type=dict) + + +def test_list_instances_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.ListInstancesRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. 
+    with mock.patch.object(type(client.transport.list_instances), "__call__") as call:
+        call.return_value = spanner_instance_admin.ListInstancesResponse()
+        client.list_instances(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        assert args[0] == request
+
+    # Establish that the field header was sent.
+    _, _, kw = call.mock_calls[0]
+    assert (
+        "x-goog-request-params",
+        "parent=parent_value",
+    ) in kw["metadata"]
+
+
+@pytest.mark.asyncio
+async def test_list_instances_field_headers_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Any value that is part of the HTTP/1.1 URI should be sent as
+    # a field header. Set these to a non-empty value.
+    request = spanner_instance_admin.ListInstancesRequest()
+
+    request.parent = "parent_value"
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.list_instances), "__call__") as call:
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            spanner_instance_admin.ListInstancesResponse()
+        )
+        await client.list_instances(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        assert args[0] == request
+
+    # Establish that the field header was sent.
+    _, _, kw = call.mock_calls[0]
+    assert (
+        "x-goog-request-params",
+        "parent=parent_value",
+    ) in kw["metadata"]
+
+
+def test_list_instances_flattened():
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.list_instances), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = spanner_instance_admin.ListInstancesResponse()
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        client.list_instances(
+            parent="parent_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+
+
+def test_list_instances_flattened_error():
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.list_instances(
+            spanner_instance_admin.ListInstancesRequest(),
+            parent="parent_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_list_instances_flattened_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.list_instances), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            spanner_instance_admin.ListInstancesResponse()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.list_instances(
+            parent="parent_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+ assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + arg = args[0].parent + mock_val = "parent_value" + assert arg == mock_val + + +@pytest.mark.asyncio +async def test_list_instances_flattened_error_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + await client.list_instances( + spanner_instance_admin.ListInstancesRequest(), + parent="parent_value", + ) + + +def test_list_instances_pager(transport_name: str = "grpc"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport_name, + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_instances), "__call__") as call: + # Set the response to a series of pages. + call.side_effect = ( + spanner_instance_admin.ListInstancesResponse( + instances=[ + spanner_instance_admin.Instance(), + spanner_instance_admin.Instance(), + spanner_instance_admin.Instance(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstancesResponse( + instances=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstancesResponse( + instances=[ + spanner_instance_admin.Instance(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstancesResponse( + instances=[ + spanner_instance_admin.Instance(), + spanner_instance_admin.Instance(), + ], + ), + RuntimeError, + ) + + expected_metadata = () + retry = retries.Retry() + timeout = 5 + expected_metadata = tuple(expected_metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)), + ) + pager = client.list_instances(request={}, retry=retry, timeout=timeout) + + assert pager._metadata == expected_metadata + assert pager._retry == retry + assert pager._timeout == timeout + + results = list(pager) + assert len(results) == 6 + assert all(isinstance(i, spanner_instance_admin.Instance) for i in results) + + +def test_list_instances_pages(transport_name: str = "grpc"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport_name, + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_instances), "__call__") as call: + # Set the response to a series of pages. + call.side_effect = ( + spanner_instance_admin.ListInstancesResponse( + instances=[ + spanner_instance_admin.Instance(), + spanner_instance_admin.Instance(), + spanner_instance_admin.Instance(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstancesResponse( + instances=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstancesResponse( + instances=[ + spanner_instance_admin.Instance(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstancesResponse( + instances=[ + spanner_instance_admin.Instance(), + spanner_instance_admin.Instance(), + ], + ), + RuntimeError, + ) + pages = list(client.list_instances(request={}).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.asyncio +async def test_list_instances_async_pager(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. 
+ with mock.patch.object( + type(client.transport.list_instances), "__call__", new_callable=mock.AsyncMock + ) as call: + # Set the response to a series of pages. + call.side_effect = ( + spanner_instance_admin.ListInstancesResponse( + instances=[ + spanner_instance_admin.Instance(), + spanner_instance_admin.Instance(), + spanner_instance_admin.Instance(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstancesResponse( + instances=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstancesResponse( + instances=[ + spanner_instance_admin.Instance(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstancesResponse( + instances=[ + spanner_instance_admin.Instance(), + spanner_instance_admin.Instance(), + ], + ), + RuntimeError, + ) + async_pager = await client.list_instances( + request={}, + ) + assert async_pager.next_page_token == "abc" + responses = [] + async for response in async_pager: # pragma: no branch + responses.append(response) + + assert len(responses) == 6 + assert all(isinstance(i, spanner_instance_admin.Instance) for i in responses) + + +@pytest.mark.asyncio +async def test_list_instances_async_pages(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instances), "__call__", new_callable=mock.AsyncMock + ) as call: + # Set the response to a series of pages. + call.side_effect = ( + spanner_instance_admin.ListInstancesResponse( + instances=[ + spanner_instance_admin.Instance(), + spanner_instance_admin.Instance(), + spanner_instance_admin.Instance(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstancesResponse( + instances=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstancesResponse( + instances=[ + spanner_instance_admin.Instance(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstancesResponse( + instances=[ + spanner_instance_admin.Instance(), + spanner_instance_admin.Instance(), + ], + ), + RuntimeError, + ) + pages = [] + # Workaround issue in python 3.9 related to code coverage by adding `# pragma: no branch` + # See https://github.com/googleapis/gapic-generator-python/pull/1174#issuecomment-1025132372 + async for page_ in ( # pragma: no branch + await client.list_instances(request={}) + ).pages: + pages.append(page_) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.ListInstancePartitionsRequest, + dict, + ], +) +def test_list_instance_partitions(request_type, transport: str = "grpc"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partitions), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = spanner_instance_admin.ListInstancePartitionsResponse( + next_page_token="next_page_token_value", + unreachable=["unreachable_value"], + ) + response = client.list_instance_partitions(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.ListInstancePartitionsRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListInstancePartitionsPager) + assert response.next_page_token == "next_page_token_value" + assert response.unreachable == ["unreachable_value"] + + +def test_list_instance_partitions_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner_instance_admin.ListInstancePartitionsRequest( + parent="parent_value", + page_token="page_token_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partitions), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.list_instance_partitions(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_instance_admin.ListInstancePartitionsRequest( + parent="parent_value", + page_token="page_token_value", + ) + + +def test_list_instance_partitions_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.list_instance_partitions + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.list_instance_partitions + ] = mock_rpc + request = {} + client.list_instance_partitions(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.list_instance_partitions(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_instance_partitions_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.list_instance_partitions + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.list_instance_partitions + ] = mock_rpc + + request = {} + await client.list_instance_partitions(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.list_instance_partitions(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_instance_partitions_async( + transport: str = "grpc_asyncio", + request_type=spanner_instance_admin.ListInstancePartitionsRequest, +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partitions), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.ListInstancePartitionsResponse( + next_page_token="next_page_token_value", + unreachable=["unreachable_value"], + ) + ) + response = await client.list_instance_partitions(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.ListInstancePartitionsRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListInstancePartitionsAsyncPager) + assert response.next_page_token == "next_page_token_value" + assert response.unreachable == ["unreachable_value"] + + +@pytest.mark.asyncio +async def test_list_instance_partitions_async_from_dict(): + await test_list_instance_partitions_async(request_type=dict) + + +def test_list_instance_partitions_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.ListInstancePartitionsRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. 
+ with mock.patch.object( + type(client.transport.list_instance_partitions), "__call__" + ) as call: + call.return_value = spanner_instance_admin.ListInstancePartitionsResponse() + client.list_instance_partitions(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_list_instance_partitions_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.ListInstancePartitionsRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partitions), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.ListInstancePartitionsResponse() + ) + await client.list_instance_partitions(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +def test_list_instance_partitions_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partitions), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = spanner_instance_admin.ListInstancePartitionsResponse() + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.list_instance_partitions( + parent="parent_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + arg = args[0].parent + mock_val = "parent_value" + assert arg == mock_val + + +def test_list_instance_partitions_flattened_error(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.list_instance_partitions( + spanner_instance_admin.ListInstancePartitionsRequest(), + parent="parent_value", + ) + + +@pytest.mark.asyncio +async def test_list_instance_partitions_flattened_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partitions), "__call__" + ) as call: + # Designate an appropriate return value for the call. 
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            spanner_instance_admin.ListInstancePartitionsResponse()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.list_instance_partitions(
+            parent="parent_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_list_instance_partitions_flattened_error_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.list_instance_partitions(
+            spanner_instance_admin.ListInstancePartitionsRequest(),
+            parent="parent_value",
+        )
+
+
+def test_list_instance_partitions_pager(transport_name: str = "grpc"):
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport_name,
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.list_instance_partitions), "__call__"
+    ) as call:
+        # Set the response to a series of pages.
+        call.side_effect = (
+            spanner_instance_admin.ListInstancePartitionsResponse(
+                instance_partitions=[
+                    spanner_instance_admin.InstancePartition(),
+                    spanner_instance_admin.InstancePartition(),
+                    spanner_instance_admin.InstancePartition(),
+                ],
+                next_page_token="abc",
+            ),
+            spanner_instance_admin.ListInstancePartitionsResponse(
+                instance_partitions=[],
+                next_page_token="def",
+            ),
+            spanner_instance_admin.ListInstancePartitionsResponse(
+                instance_partitions=[
+                    spanner_instance_admin.InstancePartition(),
+                ],
+                next_page_token="ghi",
+            ),
+            spanner_instance_admin.ListInstancePartitionsResponse(
+                instance_partitions=[
+                    spanner_instance_admin.InstancePartition(),
+                    spanner_instance_admin.InstancePartition(),
+                ],
+            ),
+            RuntimeError,
+        )
+
+        expected_metadata = ()
+        retry = retries.Retry()
+        timeout = 5
+        expected_metadata = tuple(expected_metadata) + (
+            gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)),
+        )
+        pager = client.list_instance_partitions(
+            request={}, retry=retry, timeout=timeout
+        )
+
+        assert pager._metadata == expected_metadata
+        assert pager._retry == retry
+        assert pager._timeout == timeout
+
+        results = list(pager)
+        assert len(results) == 6
+        assert all(
+            isinstance(i, spanner_instance_admin.InstancePartition) for i in results
+        )
+
+
+def test_list_instance_partitions_pages(transport_name: str = "grpc"):
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport_name,
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.list_instance_partitions), "__call__"
+    ) as call:
+        # Set the response to a series of pages.
+ call.side_effect = ( + spanner_instance_admin.ListInstancePartitionsResponse( + instance_partitions=[ + spanner_instance_admin.InstancePartition(), + spanner_instance_admin.InstancePartition(), + spanner_instance_admin.InstancePartition(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstancePartitionsResponse( + instance_partitions=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstancePartitionsResponse( + instance_partitions=[ + spanner_instance_admin.InstancePartition(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstancePartitionsResponse( + instance_partitions=[ + spanner_instance_admin.InstancePartition(), + spanner_instance_admin.InstancePartition(), + ], + ), + RuntimeError, + ) + pages = list(client.list_instance_partitions(request={}).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.asyncio +async def test_list_instance_partitions_async_pager(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partitions), + "__call__", + new_callable=mock.AsyncMock, + ) as call: + # Set the response to a series of pages. + call.side_effect = ( + spanner_instance_admin.ListInstancePartitionsResponse( + instance_partitions=[ + spanner_instance_admin.InstancePartition(), + spanner_instance_admin.InstancePartition(), + spanner_instance_admin.InstancePartition(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstancePartitionsResponse( + instance_partitions=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstancePartitionsResponse( + instance_partitions=[ + spanner_instance_admin.InstancePartition(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstancePartitionsResponse( + instance_partitions=[ + spanner_instance_admin.InstancePartition(), + spanner_instance_admin.InstancePartition(), + ], + ), + RuntimeError, + ) + async_pager = await client.list_instance_partitions( + request={}, + ) + assert async_pager.next_page_token == "abc" + responses = [] + async for response in async_pager: # pragma: no branch + responses.append(response) + + assert len(responses) == 6 + assert all( + isinstance(i, spanner_instance_admin.InstancePartition) for i in responses + ) + + +@pytest.mark.asyncio +async def test_list_instance_partitions_async_pages(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partitions), + "__call__", + new_callable=mock.AsyncMock, + ) as call: + # Set the response to a series of pages. 
+ call.side_effect = ( + spanner_instance_admin.ListInstancePartitionsResponse( + instance_partitions=[ + spanner_instance_admin.InstancePartition(), + spanner_instance_admin.InstancePartition(), + spanner_instance_admin.InstancePartition(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstancePartitionsResponse( + instance_partitions=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstancePartitionsResponse( + instance_partitions=[ + spanner_instance_admin.InstancePartition(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstancePartitionsResponse( + instance_partitions=[ + spanner_instance_admin.InstancePartition(), + spanner_instance_admin.InstancePartition(), + ], + ), + RuntimeError, + ) + pages = [] + # Workaround issue in python 3.9 related to code coverage by adding `# pragma: no branch` + # See https://github.com/googleapis/gapic-generator-python/pull/1174#issuecomment-1025132372 + async for page_ in ( # pragma: no branch + await client.list_instance_partitions(request={}) + ).pages: + pages.append(page_) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.GetInstanceRequest, + dict, + ], +) +def test_get_instance(request_type, transport: str = "grpc"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_instance), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = spanner_instance_admin.Instance( + name="name_value", + config="config_value", + display_name="display_name_value", + node_count=1070, + processing_units=1743, + state=spanner_instance_admin.Instance.State.CREATING, + instance_type=spanner_instance_admin.Instance.InstanceType.PROVISIONED, + endpoint_uris=["endpoint_uris_value"], + edition=spanner_instance_admin.Instance.Edition.STANDARD, + default_backup_schedule_type=spanner_instance_admin.Instance.DefaultBackupScheduleType.NONE, + ) + response = client.get_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.GetInstanceRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, spanner_instance_admin.Instance) + assert response.name == "name_value" + assert response.config == "config_value" + assert response.display_name == "display_name_value" + assert response.node_count == 1070 + assert response.processing_units == 1743 + assert response.state == spanner_instance_admin.Instance.State.CREATING + assert ( + response.instance_type + == spanner_instance_admin.Instance.InstanceType.PROVISIONED + ) + assert response.endpoint_uris == ["endpoint_uris_value"] + assert response.edition == spanner_instance_admin.Instance.Edition.STANDARD + assert ( + response.default_backup_schedule_type + == spanner_instance_admin.Instance.DefaultBackupScheduleType.NONE + ) + + +def test_get_instance_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner_instance_admin.GetInstanceRequest( + name="name_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_instance), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.get_instance(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_instance_admin.GetInstanceRequest( + name="name_value", + ) + + +def test_get_instance_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.get_instance in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.get_instance] = mock_rpc + request = {} + client.get_instance(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.get_instance(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_get_instance_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.get_instance + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.get_instance + ] = mock_rpc + + request = {} + await client.get_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.get_instance(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_get_instance_async( + transport: str = "grpc_asyncio", + request_type=spanner_instance_admin.GetInstanceRequest, +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_instance), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.Instance( + name="name_value", + config="config_value", + display_name="display_name_value", + node_count=1070, + processing_units=1743, + state=spanner_instance_admin.Instance.State.CREATING, + instance_type=spanner_instance_admin.Instance.InstanceType.PROVISIONED, + endpoint_uris=["endpoint_uris_value"], + edition=spanner_instance_admin.Instance.Edition.STANDARD, + default_backup_schedule_type=spanner_instance_admin.Instance.DefaultBackupScheduleType.NONE, + ) + ) + response = await client.get_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.GetInstanceRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, spanner_instance_admin.Instance) + assert response.name == "name_value" + assert response.config == "config_value" + assert response.display_name == "display_name_value" + assert response.node_count == 1070 + assert response.processing_units == 1743 + assert response.state == spanner_instance_admin.Instance.State.CREATING + assert ( + response.instance_type + == spanner_instance_admin.Instance.InstanceType.PROVISIONED + ) + assert response.endpoint_uris == ["endpoint_uris_value"] + assert response.edition == spanner_instance_admin.Instance.Edition.STANDARD + assert ( + response.default_backup_schedule_type + == spanner_instance_admin.Instance.DefaultBackupScheduleType.NONE + ) + + +@pytest.mark.asyncio +async def test_get_instance_async_from_dict(): + await test_get_instance_async(request_type=dict) + + +def test_get_instance_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.GetInstanceRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_instance), "__call__") as call: + call.return_value = spanner_instance_admin.Instance() + client.get_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_get_instance_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.GetInstanceRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_instance), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.Instance() + ) + await client.get_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +def test_get_instance_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_instance), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = spanner_instance_admin.Instance() + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.get_instance( + name="name_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. 
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].name
+ mock_val = "name_value"
+ assert arg == mock_val
+
+
+def test_get_instance_flattened_error():
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ client.get_instance(
+ spanner_instance_admin.GetInstanceRequest(),
+ name="name_value",
+ )
+
+
+@pytest.mark.asyncio
+async def test_get_instance_flattened_async():
+ client = InstanceAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.get_instance), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+ spanner_instance_admin.Instance()
+ )
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ response = await client.get_instance(
+ name="name_value",
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls)
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].name
+ mock_val = "name_value"
+ assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_get_instance_flattened_error_async():
+ client = InstanceAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ await client.get_instance(
+ spanner_instance_admin.GetInstanceRequest(),
+ name="name_value",
+ )
+
+
+@pytest.mark.parametrize(
+ "request_type",
+ [
+ spanner_instance_admin.CreateInstanceRequest,
+ dict,
+ ],
+)
+def test_create_instance(request_type, transport: str = "grpc"):
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport=transport,
+ )
+
+ # Everything is optional in proto3 as far as the runtime is concerned,
+ # and we are mocking out the actual API, so just send an empty request.
+ request = request_type()
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.create_instance), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = operations_pb2.Operation(name="operations/spam")
+ response = client.create_instance(request)
+
+ # Establish that the underlying gRPC stub method was called.
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ request = spanner_instance_admin.CreateInstanceRequest()
+ assert args[0] == request
+
+ # Establish that the response is the type that we expect.
+ assert isinstance(response, future.Future)
+
+
+def test_create_instance_non_empty_request_with_auto_populated_field():
+ # This test is a coverage failsafe to make sure that UUID4 fields are
+ # automatically populated, according to AIP-4235, with non-empty requests.
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport="grpc",
+ )
+
+ # Populate all string fields in the request which are not UUID4
+ # since we want to check that UUID4 are populated automatically
+ # if they meet the requirements of AIP 4235.
+ request = spanner_instance_admin.CreateInstanceRequest( + parent="parent_value", + instance_id="instance_id_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_instance), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.create_instance(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_instance_admin.CreateInstanceRequest( + parent="parent_value", + instance_id="instance_id_value", + ) + + +def test_create_instance_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.create_instance in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.create_instance] = mock_rpc + request = {} + client.create_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. + # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.create_instance(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_create_instance_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.create_instance + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.create_instance + ] = mock_rpc + + request = {} + await client.create_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. 
+ # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + await client.create_instance(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_create_instance_async( + transport: str = "grpc_asyncio", + request_type=spanner_instance_admin.CreateInstanceRequest, +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_instance), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + response = await client.create_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.CreateInstanceRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +@pytest.mark.asyncio +async def test_create_instance_async_from_dict(): + await test_create_instance_async(request_type=dict) + + +def test_create_instance_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.CreateInstanceRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_instance), "__call__") as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.create_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_create_instance_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.CreateInstanceRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_instance), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/op") + ) + await client.create_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. 
+ _, _, kw = call.mock_calls[0]
+ assert (
+ "x-goog-request-params",
+ "parent=parent_value",
+ ) in kw["metadata"]
+
+
+def test_create_instance_flattened():
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.create_instance), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = operations_pb2.Operation(name="operations/op")
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ client.create_instance(
+ parent="parent_value",
+ instance_id="instance_id_value",
+ instance=spanner_instance_admin.Instance(name="name_value"),
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].parent
+ mock_val = "parent_value"
+ assert arg == mock_val
+ arg = args[0].instance_id
+ mock_val = "instance_id_value"
+ assert arg == mock_val
+ arg = args[0].instance
+ mock_val = spanner_instance_admin.Instance(name="name_value")
+ assert arg == mock_val
+
+
+def test_create_instance_flattened_error():
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ client.create_instance(
+ spanner_instance_admin.CreateInstanceRequest(),
+ parent="parent_value",
+ instance_id="instance_id_value",
+ instance=spanner_instance_admin.Instance(name="name_value"),
+ )
+
+
+@pytest.mark.asyncio
+async def test_create_instance_flattened_async():
+ client = InstanceAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.create_instance), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+ operations_pb2.Operation(name="operations/spam")
+ )
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ response = await client.create_instance(
+ parent="parent_value",
+ instance_id="instance_id_value",
+ instance=spanner_instance_admin.Instance(name="name_value"),
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls)
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].parent
+ mock_val = "parent_value"
+ assert arg == mock_val
+ arg = args[0].instance_id
+ mock_val = "instance_id_value"
+ assert arg == mock_val
+ arg = args[0].instance
+ mock_val = spanner_instance_admin.Instance(name="name_value")
+ assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_create_instance_flattened_error_async():
+ client = InstanceAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError): + await client.create_instance( + spanner_instance_admin.CreateInstanceRequest(), + parent="parent_value", + instance_id="instance_id_value", + instance=spanner_instance_admin.Instance(name="name_value"), + ) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.UpdateInstanceRequest, + dict, + ], +) +def test_update_instance(request_type, transport: str = "grpc"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.update_instance), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/spam") + response = client.update_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.UpdateInstanceRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +def test_update_instance_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner_instance_admin.UpdateInstanceRequest() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.update_instance), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.update_instance(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_instance_admin.UpdateInstanceRequest() + + +def test_update_instance_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.update_instance in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.update_instance] = mock_rpc + request = {} + client.update_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. 
+ # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.update_instance(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_update_instance_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.update_instance + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.update_instance + ] = mock_rpc + + request = {} + await client.update_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. + # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + await client.update_instance(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_update_instance_async( + transport: str = "grpc_asyncio", + request_type=spanner_instance_admin.UpdateInstanceRequest, +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.update_instance), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + response = await client.update_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.UpdateInstanceRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +@pytest.mark.asyncio +async def test_update_instance_async_from_dict(): + await test_update_instance_async(request_type=dict) + + +def test_update_instance_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.UpdateInstanceRequest() + + request.instance.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. 
+ with mock.patch.object(type(client.transport.update_instance), "__call__") as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.update_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "instance.name=name_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_update_instance_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.UpdateInstanceRequest() + + request.instance.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.update_instance), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/op") + ) + await client.update_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "instance.name=name_value", + ) in kw["metadata"] + + +def test_update_instance_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.update_instance), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/op") + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.update_instance( + instance=spanner_instance_admin.Instance(name="name_value"), + field_mask=field_mask_pb2.FieldMask(paths=["paths_value"]), + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + arg = args[0].instance + mock_val = spanner_instance_admin.Instance(name="name_value") + assert arg == mock_val + arg = args[0].field_mask + mock_val = field_mask_pb2.FieldMask(paths=["paths_value"]) + assert arg == mock_val + + +def test_update_instance_flattened_error(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.update_instance( + spanner_instance_admin.UpdateInstanceRequest(), + instance=spanner_instance_admin.Instance(name="name_value"), + field_mask=field_mask_pb2.FieldMask(paths=["paths_value"]), + ) + + +@pytest.mark.asyncio +async def test_update_instance_flattened_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.update_instance), "__call__") as call: + # Designate an appropriate return value for the call. 
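+ # FakeUnaryUnaryCall wraps the response so that awaiting the mocked
+ # stub yields it, standing in for a real async gRPC call.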
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+ operations_pb2.Operation(name="operations/spam")
+ )
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ response = await client.update_instance(
+ instance=spanner_instance_admin.Instance(name="name_value"),
+ field_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls)
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].instance
+ mock_val = spanner_instance_admin.Instance(name="name_value")
+ assert arg == mock_val
+ arg = args[0].field_mask
+ mock_val = field_mask_pb2.FieldMask(paths=["paths_value"])
+ assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_update_instance_flattened_error_async():
+ client = InstanceAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ await client.update_instance(
+ spanner_instance_admin.UpdateInstanceRequest(),
+ instance=spanner_instance_admin.Instance(name="name_value"),
+ field_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
+ )
+
+
+@pytest.mark.parametrize(
+ "request_type",
+ [
+ spanner_instance_admin.DeleteInstanceRequest,
+ dict,
+ ],
+)
+def test_delete_instance(request_type, transport: str = "grpc"):
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport=transport,
+ )
+
+ # Everything is optional in proto3 as far as the runtime is concerned,
+ # and we are mocking out the actual API, so just send an empty request.
+ request = request_type()
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.delete_instance), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = None
+ response = client.delete_instance(request)
+
+ # Establish that the underlying gRPC stub method was called.
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ request = spanner_instance_admin.DeleteInstanceRequest()
+ assert args[0] == request
+
+ # Establish that the response is the type that we expect.
+ assert response is None
+
+
+def test_delete_instance_non_empty_request_with_auto_populated_field():
+ # This test is a coverage failsafe to make sure that UUID4 fields are
+ # automatically populated, according to AIP-4235, with non-empty requests.
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport="grpc",
+ )
+
+ # Populate all string fields in the request which are not UUID4
+ # since we want to check that UUID4 are populated automatically
+ # if they meet the requirements of AIP 4235.
+ request = spanner_instance_admin.DeleteInstanceRequest(
+ name="name_value",
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.delete_instance), "__call__") as call:
+ call.return_value.name = (
+ "foo" # operation_request.operation in compute client(s) expect a string.
+ ) + client.delete_instance(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_instance_admin.DeleteInstanceRequest( + name="name_value", + ) + + +def test_delete_instance_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.delete_instance in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.delete_instance] = mock_rpc + request = {} + client.delete_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.delete_instance(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_delete_instance_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.delete_instance + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.delete_instance + ] = mock_rpc + + request = {} + await client.delete_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.delete_instance(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_delete_instance_async( + transport: str = "grpc_asyncio", + request_type=spanner_instance_admin.DeleteInstanceRequest, +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_instance), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + response = await client.delete_instance(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.DeleteInstanceRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert response is None + + +@pytest.mark.asyncio +async def test_delete_instance_async_from_dict(): + await test_delete_instance_async(request_type=dict) + + +def test_delete_instance_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.DeleteInstanceRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_instance), "__call__") as call: + call.return_value = None + client.delete_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_delete_instance_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.DeleteInstanceRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_instance), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + await client.delete_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +def test_delete_instance_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_instance), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = None + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.delete_instance( + name="name_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + arg = args[0].name + mock_val = "name_value" + assert arg == mock_val + + +def test_delete_instance_flattened_error(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError):
+ client.delete_instance(
+ spanner_instance_admin.DeleteInstanceRequest(),
+ name="name_value",
+ )
+
+
+@pytest.mark.asyncio
+async def test_delete_instance_flattened_async():
+ client = InstanceAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.delete_instance), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ response = await client.delete_instance(
+ name="name_value",
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls)
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].name
+ mock_val = "name_value"
+ assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_delete_instance_flattened_error_async():
+ client = InstanceAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ await client.delete_instance(
+ spanner_instance_admin.DeleteInstanceRequest(),
+ name="name_value",
+ )
+
+
+@pytest.mark.parametrize(
+ "request_type",
+ [
+ iam_policy_pb2.SetIamPolicyRequest,
+ dict,
+ ],
+)
+def test_set_iam_policy(request_type, transport: str = "grpc"):
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport=transport,
+ )
+
+ # Everything is optional in proto3 as far as the runtime is concerned,
+ # and we are mocking out the actual API, so just send an empty request.
+ request = request_type()
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = policy_pb2.Policy(
+ version=774,
+ etag=b"etag_blob",
+ )
+ response = client.set_iam_policy(request)
+
+ # Establish that the underlying gRPC stub method was called.
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ request = iam_policy_pb2.SetIamPolicyRequest()
+ assert args[0] == request
+
+ # Establish that the response is the type that we expect.
+ assert isinstance(response, policy_pb2.Policy)
+ assert response.version == 774
+ assert response.etag == b"etag_blob"
+
+
+def test_set_iam_policy_non_empty_request_with_auto_populated_field():
+ # This test is a coverage failsafe to make sure that UUID4 fields are
+ # automatically populated, according to AIP-4235, with non-empty requests.
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport="grpc",
+ )
+
+ # Populate all string fields in the request which are not UUID4
+ # since we want to check that UUID4 are populated automatically
+ # if they meet the requirements of AIP 4235.
+ request = iam_policy_pb2.SetIamPolicyRequest(
+ resource="resource_value",
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call:
+ call.return_value.name = (
+ "foo" # operation_request.operation in compute client(s) expect a string.
+ ) + client.set_iam_policy(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == iam_policy_pb2.SetIamPolicyRequest( + resource="resource_value", + ) + + +def test_set_iam_policy_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.set_iam_policy in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.set_iam_policy] = mock_rpc + request = {} + client.set_iam_policy(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.set_iam_policy(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_set_iam_policy_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.set_iam_policy + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.set_iam_policy + ] = mock_rpc + + request = {} + await client.set_iam_policy(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.set_iam_policy(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_set_iam_policy_async( + transport: str = "grpc_asyncio", request_type=iam_policy_pb2.SetIamPolicyRequest +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + policy_pb2.Policy( + version=774, + etag=b"etag_blob", + ) + ) + response = await client.set_iam_policy(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = iam_policy_pb2.SetIamPolicyRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, policy_pb2.Policy) + assert response.version == 774 + assert response.etag == b"etag_blob" + + +@pytest.mark.asyncio +async def test_set_iam_policy_async_from_dict(): + await test_set_iam_policy_async(request_type=dict) + + +def test_set_iam_policy_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = iam_policy_pb2.SetIamPolicyRequest() + + request.resource = "resource_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call: + call.return_value = policy_pb2.Policy() + client.set_iam_policy(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "resource=resource_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_set_iam_policy_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = iam_policy_pb2.SetIamPolicyRequest() + + request.resource = "resource_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(policy_pb2.Policy()) + await client.set_iam_policy(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "resource=resource_value", + ) in kw["metadata"] + + +def test_set_iam_policy_from_dict_foreign(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = policy_pb2.Policy() + response = client.set_iam_policy( + request={ + "resource": "resource_value", + "policy": policy_pb2.Policy(version=774), + "update_mask": field_mask_pb2.FieldMask(paths=["paths_value"]), + } + ) + call.assert_called() + + +def test_set_iam_policy_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = policy_pb2.Policy() + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. 
+ client.set_iam_policy(
+ resource="resource_value",
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].resource
+ mock_val = "resource_value"
+ assert arg == mock_val
+
+
+def test_set_iam_policy_flattened_error():
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ client.set_iam_policy(
+ iam_policy_pb2.SetIamPolicyRequest(),
+ resource="resource_value",
+ )
+
+
+@pytest.mark.asyncio
+async def test_set_iam_policy_flattened_async():
+ client = InstanceAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(policy_pb2.Policy())
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ response = await client.set_iam_policy(
+ resource="resource_value",
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls)
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].resource
+ mock_val = "resource_value"
+ assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_set_iam_policy_flattened_error_async():
+ client = InstanceAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ await client.set_iam_policy(
+ iam_policy_pb2.SetIamPolicyRequest(),
+ resource="resource_value",
+ )
+
+
+@pytest.mark.parametrize(
+ "request_type",
+ [
+ iam_policy_pb2.GetIamPolicyRequest,
+ dict,
+ ],
+)
+def test_get_iam_policy(request_type, transport: str = "grpc"):
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport=transport,
+ )
+
+ # Everything is optional in proto3 as far as the runtime is concerned,
+ # and we are mocking out the actual API, so just send an empty request.
+ request = request_type()
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = policy_pb2.Policy(
+ version=774,
+ etag=b"etag_blob",
+ )
+ response = client.get_iam_policy(request)
+
+ # Establish that the underlying gRPC stub method was called.
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ request = iam_policy_pb2.GetIamPolicyRequest()
+ assert args[0] == request
+
+ # Establish that the response is the type that we expect.
+ assert isinstance(response, policy_pb2.Policy)
+ assert response.version == 774
+ assert response.etag == b"etag_blob"
+
+
+def test_get_iam_policy_non_empty_request_with_auto_populated_field():
+ # This test is a coverage failsafe to make sure that UUID4 fields are
+ # automatically populated, according to AIP-4235, with non-empty requests.
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Populate all string fields in the request which are not UUID4
+    # since we want to check that UUID4 fields are populated automatically
+    # if they meet the requirements of AIP 4235.
+    request = iam_policy_pb2.GetIamPolicyRequest(
+        resource="resource_value",
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call:
+        call.return_value.name = (
+            "foo"  # operation_request.operation in compute client(s) expect a string.
+        )
+        client.get_iam_policy(request=request)
+        call.assert_called()
+        _, args, _ = call.mock_calls[0]
+        assert args[0] == iam_policy_pb2.GetIamPolicyRequest(
+            resource="resource_value",
+        )
+
+
+def test_get_iam_policy_use_cached_wrapped_rpc():
+    # Clients should use _prep_wrapped_messages to create cached wrapped rpcs,
+    # instead of constructing them on each call
+    with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn:
+        client = InstanceAdminClient(
+            credentials=ga_credentials.AnonymousCredentials(),
+            transport="grpc",
+        )
+
+        # Should wrap all calls on client creation
+        assert wrapper_fn.call_count > 0
+        wrapper_fn.reset_mock()
+
+        # Ensure method has been cached
+        assert client._transport.get_iam_policy in client._transport._wrapped_methods
+
+        # Replace cached wrapped function with mock
+        mock_rpc = mock.Mock()
+        mock_rpc.return_value.name = (
+            "foo"  # operation_request.operation in compute client(s) expect a string.
+        )
+        client._transport._wrapped_methods[client._transport.get_iam_policy] = mock_rpc
+        request = {}
+        client.get_iam_policy(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert mock_rpc.call_count == 1
+
+        client.get_iam_policy(request)
+
+        # Establish that a new wrapper was not created for this call
+        assert wrapper_fn.call_count == 0
+        assert mock_rpc.call_count == 2
+
+
+@pytest.mark.asyncio
+async def test_get_iam_policy_async_use_cached_wrapped_rpc(
+    transport: str = "grpc_asyncio",
+):
+    # Clients should use _prep_wrapped_messages to create cached wrapped rpcs,
+    # instead of constructing them on each call
+    with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn:
+        client = InstanceAdminAsyncClient(
+            credentials=async_anonymous_credentials(),
+            transport=transport,
+        )
+
+        # Should wrap all calls on client creation
+        assert wrapper_fn.call_count > 0
+        wrapper_fn.reset_mock()
+
+        # Ensure method has been cached
+        assert (
+            client._client._transport.get_iam_policy
+            in client._client._transport._wrapped_methods
+        )
+
+        # Replace cached wrapped function with mock
+        mock_rpc = mock.AsyncMock()
+        mock_rpc.return_value = mock.Mock()
+        client._client._transport._wrapped_methods[
+            client._client._transport.get_iam_policy
+        ] = mock_rpc
+
+        request = {}
+        await client.get_iam_policy(request)
+
+        # Establish that the underlying gRPC stub method was called.
+ assert mock_rpc.call_count == 1 + + await client.get_iam_policy(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_get_iam_policy_async( + transport: str = "grpc_asyncio", request_type=iam_policy_pb2.GetIamPolicyRequest +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + policy_pb2.Policy( + version=774, + etag=b"etag_blob", + ) + ) + response = await client.get_iam_policy(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = iam_policy_pb2.GetIamPolicyRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, policy_pb2.Policy) + assert response.version == 774 + assert response.etag == b"etag_blob" + + +@pytest.mark.asyncio +async def test_get_iam_policy_async_from_dict(): + await test_get_iam_policy_async(request_type=dict) + + +def test_get_iam_policy_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = iam_policy_pb2.GetIamPolicyRequest() + + request.resource = "resource_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call: + call.return_value = policy_pb2.Policy() + client.get_iam_policy(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "resource=resource_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_get_iam_policy_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = iam_policy_pb2.GetIamPolicyRequest() + + request.resource = "resource_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(policy_pb2.Policy()) + await client.get_iam_policy(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. 
+    _, _, kw = call.mock_calls[0]
+    assert (
+        "x-goog-request-params",
+        "resource=resource_value",
+    ) in kw["metadata"]
+
+
+def test_get_iam_policy_from_dict_foreign():
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = policy_pb2.Policy()
+        response = client.get_iam_policy(
+            request={
+                "resource": "resource_value",
+                "options": options_pb2.GetPolicyOptions(requested_policy_version=2598),
+            }
+        )
+        call.assert_called()
+
+
+def test_get_iam_policy_flattened():
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = policy_pb2.Policy()
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        client.get_iam_policy(
+            resource="resource_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].resource
+        mock_val = "resource_value"
+        assert arg == mock_val
+
+
+def test_get_iam_policy_flattened_error():
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.get_iam_policy(
+            iam_policy_pb2.GetIamPolicyRequest(),
+            resource="resource_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_get_iam_policy_flattened_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(policy_pb2.Policy())
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.get_iam_policy(
+            resource="resource_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].resource
+        mock_val = "resource_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_get_iam_policy_flattened_error_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.get_iam_policy(
+            iam_policy_pb2.GetIamPolicyRequest(),
+            resource="resource_value",
+        )
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        iam_policy_pb2.TestIamPermissionsRequest,
+        dict,
+    ],
+)
+def test_test_iam_permissions(request_type, transport: str = "grpc"):
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.test_iam_permissions), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = iam_policy_pb2.TestIamPermissionsResponse(
+            permissions=["permissions_value"],
+        )
+        response = client.test_iam_permissions(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        request = iam_policy_pb2.TestIamPermissionsRequest()
+        assert args[0] == request
+
+    # Establish that the response is the type that we expect.
+    assert isinstance(response, iam_policy_pb2.TestIamPermissionsResponse)
+    assert response.permissions == ["permissions_value"]
+
+
+def test_test_iam_permissions_non_empty_request_with_auto_populated_field():
+    # This test is a coverage failsafe to make sure that UUID4 fields are
+    # automatically populated, according to AIP-4235, with non-empty requests.
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Populate all string fields in the request which are not UUID4
+    # since we want to check that UUID4 fields are populated automatically
+    # if they meet the requirements of AIP 4235.
+    request = iam_policy_pb2.TestIamPermissionsRequest(
+        resource="resource_value",
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.test_iam_permissions), "__call__"
+    ) as call:
+        call.return_value.name = (
+            "foo"  # operation_request.operation in compute client(s) expect a string.
+        )
+        client.test_iam_permissions(request=request)
+        call.assert_called()
+        _, args, _ = call.mock_calls[0]
+        assert args[0] == iam_policy_pb2.TestIamPermissionsRequest(
+            resource="resource_value",
+        )
+
+
+def test_test_iam_permissions_use_cached_wrapped_rpc():
+    # Clients should use _prep_wrapped_messages to create cached wrapped rpcs,
+    # instead of constructing them on each call
+    with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn:
+        client = InstanceAdminClient(
+            credentials=ga_credentials.AnonymousCredentials(),
+            transport="grpc",
+        )
+
+        # Should wrap all calls on client creation
+        assert wrapper_fn.call_count > 0
+        wrapper_fn.reset_mock()
+
+        # Ensure method has been cached
+        assert (
+            client._transport.test_iam_permissions in client._transport._wrapped_methods
+        )
+
+        # Replace cached wrapped function with mock
+        mock_rpc = mock.Mock()
+        mock_rpc.return_value.name = (
+            "foo"  # operation_request.operation in compute client(s) expect a string.
+        )
+        client._transport._wrapped_methods[
+            client._transport.test_iam_permissions
+        ] = mock_rpc
+        request = {}
+        client.test_iam_permissions(request)
+
+        # Establish that the underlying gRPC stub method was called.
+ assert mock_rpc.call_count == 1 + + client.test_iam_permissions(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_test_iam_permissions_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.test_iam_permissions + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.test_iam_permissions + ] = mock_rpc + + request = {} + await client.test_iam_permissions(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.test_iam_permissions(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_test_iam_permissions_async( + transport: str = "grpc_asyncio", + request_type=iam_policy_pb2.TestIamPermissionsRequest, +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.test_iam_permissions), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + iam_policy_pb2.TestIamPermissionsResponse( + permissions=["permissions_value"], + ) + ) + response = await client.test_iam_permissions(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = iam_policy_pb2.TestIamPermissionsRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, iam_policy_pb2.TestIamPermissionsResponse) + assert response.permissions == ["permissions_value"] + + +@pytest.mark.asyncio +async def test_test_iam_permissions_async_from_dict(): + await test_test_iam_permissions_async(request_type=dict) + + +def test_test_iam_permissions_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = iam_policy_pb2.TestIamPermissionsRequest() + + request.resource = "resource_value" + + # Mock the actual call within the gRPC stub, and fake the request. 
+ with mock.patch.object( + type(client.transport.test_iam_permissions), "__call__" + ) as call: + call.return_value = iam_policy_pb2.TestIamPermissionsResponse() + client.test_iam_permissions(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "resource=resource_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_test_iam_permissions_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = iam_policy_pb2.TestIamPermissionsRequest() + + request.resource = "resource_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.test_iam_permissions), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + iam_policy_pb2.TestIamPermissionsResponse() + ) + await client.test_iam_permissions(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "resource=resource_value", + ) in kw["metadata"] + + +def test_test_iam_permissions_from_dict_foreign(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.test_iam_permissions), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = iam_policy_pb2.TestIamPermissionsResponse() + response = client.test_iam_permissions( + request={ + "resource": "resource_value", + "permissions": ["permissions_value"], + } + ) + call.assert_called() + + +def test_test_iam_permissions_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.test_iam_permissions), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = iam_policy_pb2.TestIamPermissionsResponse() + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.test_iam_permissions( + resource="resource_value", + permissions=["permissions_value"], + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + arg = args[0].resource + mock_val = "resource_value" + assert arg == mock_val + arg = args[0].permissions + mock_val = ["permissions_value"] + assert arg == mock_val + + +def test_test_iam_permissions_flattened_error(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+    with pytest.raises(ValueError):
+        client.test_iam_permissions(
+            iam_policy_pb2.TestIamPermissionsRequest(),
+            resource="resource_value",
+            permissions=["permissions_value"],
+        )
+
+
+@pytest.mark.asyncio
+async def test_test_iam_permissions_flattened_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.test_iam_permissions), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            iam_policy_pb2.TestIamPermissionsResponse()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.test_iam_permissions(
+            resource="resource_value",
+            permissions=["permissions_value"],
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].resource
+        mock_val = "resource_value"
+        assert arg == mock_val
+        arg = args[0].permissions
+        mock_val = ["permissions_value"]
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_test_iam_permissions_flattened_error_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.test_iam_permissions(
+            iam_policy_pb2.TestIamPermissionsRequest(),
+            resource="resource_value",
+            permissions=["permissions_value"],
+        )
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner_instance_admin.GetInstancePartitionRequest,
+        dict,
+    ],
+)
+def test_get_instance_partition(request_type, transport: str = "grpc"):
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.get_instance_partition), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = spanner_instance_admin.InstancePartition(
+            name="name_value",
+            config="config_value",
+            display_name="display_name_value",
+            state=spanner_instance_admin.InstancePartition.State.CREATING,
+            referencing_databases=["referencing_databases_value"],
+            referencing_backups=["referencing_backups_value"],
+            etag="etag_value",
+            node_count=1070,
+        )
+        response = client.get_instance_partition(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        request = spanner_instance_admin.GetInstancePartitionRequest()
+        assert args[0] == request
+
+    # Establish that the response is the type that we expect.
+    assert isinstance(response, spanner_instance_admin.InstancePartition)
+    assert response.name == "name_value"
+    assert response.config == "config_value"
+    assert response.display_name == "display_name_value"
+    assert response.state == spanner_instance_admin.InstancePartition.State.CREATING
+    assert response.referencing_databases == ["referencing_databases_value"]
+    assert response.referencing_backups == ["referencing_backups_value"]
+    assert response.etag == "etag_value"
+
+
+def test_get_instance_partition_non_empty_request_with_auto_populated_field():
+    # This test is a coverage failsafe to make sure that UUID4 fields are
+    # automatically populated, according to AIP-4235, with non-empty requests.
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Populate all string fields in the request which are not UUID4
+    # since we want to check that UUID4 fields are populated automatically
+    # if they meet the requirements of AIP 4235.
+    request = spanner_instance_admin.GetInstancePartitionRequest(
+        name="name_value",
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.get_instance_partition), "__call__"
+    ) as call:
+        call.return_value.name = (
+            "foo"  # operation_request.operation in compute client(s) expect a string.
+        )
+        client.get_instance_partition(request=request)
+        call.assert_called()
+        _, args, _ = call.mock_calls[0]
+        assert args[0] == spanner_instance_admin.GetInstancePartitionRequest(
+            name="name_value",
+        )
+
+
+def test_get_instance_partition_use_cached_wrapped_rpc():
+    # Clients should use _prep_wrapped_messages to create cached wrapped rpcs,
+    # instead of constructing them on each call
+    with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn:
+        client = InstanceAdminClient(
+            credentials=ga_credentials.AnonymousCredentials(),
+            transport="grpc",
+        )
+
+        # Should wrap all calls on client creation
+        assert wrapper_fn.call_count > 0
+        wrapper_fn.reset_mock()
+
+        # Ensure method has been cached
+        assert (
+            client._transport.get_instance_partition
+            in client._transport._wrapped_methods
+        )
+
+        # Replace cached wrapped function with mock
+        mock_rpc = mock.Mock()
+        mock_rpc.return_value.name = (
+            "foo"  # operation_request.operation in compute client(s) expect a string.
+        )
+        client._transport._wrapped_methods[
+            client._transport.get_instance_partition
+        ] = mock_rpc
+        request = {}
+        client.get_instance_partition(request)
+
+        # Establish that the underlying gRPC stub method was called.
+ assert mock_rpc.call_count == 1 + + client.get_instance_partition(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_get_instance_partition_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.get_instance_partition + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.get_instance_partition + ] = mock_rpc + + request = {} + await client.get_instance_partition(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.get_instance_partition(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_get_instance_partition_async( + transport: str = "grpc_asyncio", + request_type=spanner_instance_admin.GetInstancePartitionRequest, +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.get_instance_partition), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.InstancePartition( + name="name_value", + config="config_value", + display_name="display_name_value", + state=spanner_instance_admin.InstancePartition.State.CREATING, + referencing_databases=["referencing_databases_value"], + referencing_backups=["referencing_backups_value"], + etag="etag_value", + ) + ) + response = await client.get_instance_partition(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.GetInstancePartitionRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, spanner_instance_admin.InstancePartition) + assert response.name == "name_value" + assert response.config == "config_value" + assert response.display_name == "display_name_value" + assert response.state == spanner_instance_admin.InstancePartition.State.CREATING + assert response.referencing_databases == ["referencing_databases_value"] + assert response.referencing_backups == ["referencing_backups_value"] + assert response.etag == "etag_value" + + +@pytest.mark.asyncio +async def test_get_instance_partition_async_from_dict(): + await test_get_instance_partition_async(request_type=dict) + + +def test_get_instance_partition_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.GetInstancePartitionRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.get_instance_partition), "__call__" + ) as call: + call.return_value = spanner_instance_admin.InstancePartition() + client.get_instance_partition(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_get_instance_partition_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.GetInstancePartitionRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.get_instance_partition), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.InstancePartition() + ) + await client.get_instance_partition(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +def test_get_instance_partition_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.get_instance_partition), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = spanner_instance_admin.InstancePartition() + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.get_instance_partition( + name="name_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. 
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].name
+        mock_val = "name_value"
+        assert arg == mock_val
+
+
+def test_get_instance_partition_flattened_error():
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.get_instance_partition(
+            spanner_instance_admin.GetInstancePartitionRequest(),
+            name="name_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_get_instance_partition_flattened_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.get_instance_partition), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            spanner_instance_admin.InstancePartition()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.get_instance_partition(
+            name="name_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].name
+        mock_val = "name_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_get_instance_partition_flattened_error_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.get_instance_partition(
+            spanner_instance_admin.GetInstancePartitionRequest(),
+            name="name_value",
+        )
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner_instance_admin.CreateInstancePartitionRequest,
+        dict,
+    ],
+)
+def test_create_instance_partition(request_type, transport: str = "grpc"):
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.create_instance_partition), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = operations_pb2.Operation(name="operations/spam")
+        response = client.create_instance_partition(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        request = spanner_instance_admin.CreateInstancePartitionRequest()
+        assert args[0] == request
+
+    # Establish that the response is the type that we expect.
+    assert isinstance(response, future.Future)
+
+
+def test_create_instance_partition_non_empty_request_with_auto_populated_field():
+    # This test is a coverage failsafe to make sure that UUID4 fields are
+    # automatically populated, according to AIP-4235, with non-empty requests.
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Populate all string fields in the request which are not UUID4
+    # since we want to check that UUID4 fields are populated automatically
+    # if they meet the requirements of AIP 4235.
+    request = spanner_instance_admin.CreateInstancePartitionRequest(
+        parent="parent_value",
+        instance_partition_id="instance_partition_id_value",
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.create_instance_partition), "__call__"
+    ) as call:
+        call.return_value.name = (
+            "foo"  # operation_request.operation in compute client(s) expect a string.
+        )
+        client.create_instance_partition(request=request)
+        call.assert_called()
+        _, args, _ = call.mock_calls[0]
+        assert args[0] == spanner_instance_admin.CreateInstancePartitionRequest(
+            parent="parent_value",
+            instance_partition_id="instance_partition_id_value",
+        )
+
+
+def test_create_instance_partition_use_cached_wrapped_rpc():
+    # Clients should use _prep_wrapped_messages to create cached wrapped rpcs,
+    # instead of constructing them on each call
+    with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn:
+        client = InstanceAdminClient(
+            credentials=ga_credentials.AnonymousCredentials(),
+            transport="grpc",
+        )
+
+        # Should wrap all calls on client creation
+        assert wrapper_fn.call_count > 0
+        wrapper_fn.reset_mock()
+
+        # Ensure method has been cached
+        assert (
+            client._transport.create_instance_partition
+            in client._transport._wrapped_methods
+        )
+
+        # Replace cached wrapped function with mock
+        mock_rpc = mock.Mock()
+        mock_rpc.return_value.name = (
+            "foo"  # operation_request.operation in compute client(s) expect a string.
+        )
+        client._transport._wrapped_methods[
+            client._transport.create_instance_partition
+        ] = mock_rpc
+        request = {}
+        client.create_instance_partition(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert mock_rpc.call_count == 1
+
+        # Operation methods call wrapper_fn to build a cached
+        # client._transport.operations_client instance on first rpc call.
+        # Subsequent calls should use the cached wrapper
+        wrapper_fn.reset_mock()
+
+        client.create_instance_partition(request)
+
+        # Establish that a new wrapper was not created for this call
+        assert wrapper_fn.call_count == 0
+        assert mock_rpc.call_count == 2
+
+
+@pytest.mark.asyncio
+async def test_create_instance_partition_async_use_cached_wrapped_rpc(
+    transport: str = "grpc_asyncio",
+):
+    # Clients should use _prep_wrapped_messages to create cached wrapped rpcs,
+    # instead of constructing them on each call
+    with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn:
+        client = InstanceAdminAsyncClient(
+            credentials=async_anonymous_credentials(),
+            transport=transport,
+        )
+
+        # Should wrap all calls on client creation
+        assert wrapper_fn.call_count > 0
+        wrapper_fn.reset_mock()
+
+        # Ensure method has been cached
+        assert (
+            client._client._transport.create_instance_partition
+            in client._client._transport._wrapped_methods
+        )
+
+        # Replace cached wrapped function with mock
+        mock_rpc = mock.AsyncMock()
+        mock_rpc.return_value = mock.Mock()
+        client._client._transport._wrapped_methods[
+            client._client._transport.create_instance_partition
+        ] = mock_rpc
+
+        request = {}
+        await client.create_instance_partition(request)
+
+        # Establish that the underlying gRPC stub method was called.
+ assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. + # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + await client.create_instance_partition(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_create_instance_partition_async( + transport: str = "grpc_asyncio", + request_type=spanner_instance_admin.CreateInstancePartitionRequest, +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.create_instance_partition), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + response = await client.create_instance_partition(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.CreateInstancePartitionRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +@pytest.mark.asyncio +async def test_create_instance_partition_async_from_dict(): + await test_create_instance_partition_async(request_type=dict) + + +def test_create_instance_partition_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.CreateInstancePartitionRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.create_instance_partition), "__call__" + ) as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.create_instance_partition(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_create_instance_partition_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.CreateInstancePartitionRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. 
+    with mock.patch.object(
+        type(client.transport.create_instance_partition), "__call__"
+    ) as call:
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            operations_pb2.Operation(name="operations/op")
+        )
+        await client.create_instance_partition(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        assert args[0] == request
+
+    # Establish that the field header was sent.
+    _, _, kw = call.mock_calls[0]
+    assert (
+        "x-goog-request-params",
+        "parent=parent_value",
+    ) in kw["metadata"]
+
+
+def test_create_instance_partition_flattened():
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.create_instance_partition), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = operations_pb2.Operation(name="operations/op")
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        client.create_instance_partition(
+            parent="parent_value",
+            instance_partition=spanner_instance_admin.InstancePartition(
+                name="name_value"
+            ),
+            instance_partition_id="instance_partition_id_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+        arg = args[0].instance_partition
+        mock_val = spanner_instance_admin.InstancePartition(name="name_value")
+        assert arg == mock_val
+        arg = args[0].instance_partition_id
+        mock_val = "instance_partition_id_value"
+        assert arg == mock_val
+
+
+def test_create_instance_partition_flattened_error():
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.create_instance_partition(
+            spanner_instance_admin.CreateInstancePartitionRequest(),
+            parent="parent_value",
+            instance_partition=spanner_instance_admin.InstancePartition(
+                name="name_value"
+            ),
+            instance_partition_id="instance_partition_id_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_create_instance_partition_flattened_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.create_instance_partition), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            operations_pb2.Operation(name="operations/spam")
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.create_instance_partition(
+            parent="parent_value",
+            instance_partition=spanner_instance_admin.InstancePartition(
+                name="name_value"
+            ),
+            instance_partition_id="instance_partition_id_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].parent
+        mock_val = "parent_value"
+        assert arg == mock_val
+        arg = args[0].instance_partition
+        mock_val = spanner_instance_admin.InstancePartition(name="name_value")
+        assert arg == mock_val
+        arg = args[0].instance_partition_id
+        mock_val = "instance_partition_id_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_create_instance_partition_flattened_error_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.create_instance_partition(
+            spanner_instance_admin.CreateInstancePartitionRequest(),
+            parent="parent_value",
+            instance_partition=spanner_instance_admin.InstancePartition(
+                name="name_value"
+            ),
+            instance_partition_id="instance_partition_id_value",
+        )
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner_instance_admin.DeleteInstancePartitionRequest,
+        dict,
+    ],
+)
+def test_delete_instance_partition(request_type, transport: str = "grpc"):
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.delete_instance_partition), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = None
+        response = client.delete_instance_partition(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        request = spanner_instance_admin.DeleteInstancePartitionRequest()
+        assert args[0] == request
+
+    # Establish that the response is the type that we expect.
+    assert response is None
+
+
+def test_delete_instance_partition_non_empty_request_with_auto_populated_field():
+    # This test is a coverage failsafe to make sure that UUID4 fields are
+    # automatically populated, according to AIP-4235, with non-empty requests.
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Populate all string fields in the request which are not UUID4
+    # since we want to check that UUID4 fields are populated automatically
+    # if they meet the requirements of AIP 4235.
+    request = spanner_instance_admin.DeleteInstancePartitionRequest(
+        name="name_value",
+        etag="etag_value",
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.delete_instance_partition), "__call__"
+    ) as call:
+        call.return_value.name = (
+            "foo"  # operation_request.operation in compute client(s) expect a string.
+ ) + client.delete_instance_partition(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_instance_admin.DeleteInstancePartitionRequest( + name="name_value", + etag="etag_value", + ) + + +def test_delete_instance_partition_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.delete_instance_partition + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.delete_instance_partition + ] = mock_rpc + request = {} + client.delete_instance_partition(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.delete_instance_partition(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_delete_instance_partition_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.delete_instance_partition + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.delete_instance_partition + ] = mock_rpc + + request = {} + await client.delete_instance_partition(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.delete_instance_partition(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_delete_instance_partition_async( + transport: str = "grpc_asyncio", + request_type=spanner_instance_admin.DeleteInstancePartitionRequest, +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.delete_instance_partition), "__call__" + ) as call: + # Designate an appropriate return value for the call. 
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + response = await client.delete_instance_partition(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.DeleteInstancePartitionRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert response is None + + +@pytest.mark.asyncio +async def test_delete_instance_partition_async_from_dict(): + await test_delete_instance_partition_async(request_type=dict) + + +def test_delete_instance_partition_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.DeleteInstancePartitionRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.delete_instance_partition), "__call__" + ) as call: + call.return_value = None + client.delete_instance_partition(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_delete_instance_partition_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.DeleteInstancePartitionRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.delete_instance_partition), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + await client.delete_instance_partition(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +def test_delete_instance_partition_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.delete_instance_partition), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = None + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.delete_instance_partition( + name="name_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. 
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].name
+        mock_val = "name_value"
+        assert arg == mock_val
+
+
+def test_delete_instance_partition_flattened_error():
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.delete_instance_partition(
+            spanner_instance_admin.DeleteInstancePartitionRequest(),
+            name="name_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_delete_instance_partition_flattened_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.delete_instance_partition), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.delete_instance_partition(
+            name="name_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].name
+        mock_val = "name_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_delete_instance_partition_flattened_error_async():
+    client = InstanceAdminAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.delete_instance_partition(
+            spanner_instance_admin.DeleteInstancePartitionRequest(),
+            name="name_value",
+        )
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner_instance_admin.UpdateInstancePartitionRequest,
+        dict,
+    ],
+)
+def test_update_instance_partition(request_type, transport: str = "grpc"):
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.update_instance_partition), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = operations_pb2.Operation(name="operations/spam")
+        response = client.update_instance_partition(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        request = spanner_instance_admin.UpdateInstancePartitionRequest()
+        assert args[0] == request
+
+    # Establish that the response is the type that we expect.
+    assert isinstance(response, future.Future)
+
+
+def test_update_instance_partition_non_empty_request_with_auto_populated_field():
+    # This test is a coverage failsafe to make sure that UUID4 fields are
+    # automatically populated, according to AIP-4235, with non-empty requests.
+ client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner_instance_admin.UpdateInstancePartitionRequest() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.update_instance_partition), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.update_instance_partition(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_instance_admin.UpdateInstancePartitionRequest() + + +def test_update_instance_partition_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.update_instance_partition + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.update_instance_partition + ] = mock_rpc + request = {} + client.update_instance_partition(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. + # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.update_instance_partition(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_update_instance_partition_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.update_instance_partition + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.update_instance_partition + ] = mock_rpc + + request = {} + await client.update_instance_partition(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. 
+ # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + await client.update_instance_partition(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_update_instance_partition_async( + transport: str = "grpc_asyncio", + request_type=spanner_instance_admin.UpdateInstancePartitionRequest, +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.update_instance_partition), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + response = await client.update_instance_partition(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.UpdateInstancePartitionRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +@pytest.mark.asyncio +async def test_update_instance_partition_async_from_dict(): + await test_update_instance_partition_async(request_type=dict) + + +def test_update_instance_partition_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.UpdateInstancePartitionRequest() + + request.instance_partition.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.update_instance_partition), "__call__" + ) as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.update_instance_partition(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "instance_partition.name=name_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_update_instance_partition_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.UpdateInstancePartitionRequest() + + request.instance_partition.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.update_instance_partition), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/op") + ) + await client.update_instance_partition(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert len(call.mock_calls)
+ _, args, _ = call.mock_calls[0]
+ assert args[0] == request
+
+ # Establish that the field header was sent.
+ _, _, kw = call.mock_calls[0]
+ assert (
+ "x-goog-request-params",
+ "instance_partition.name=name_value",
+ ) in kw["metadata"]
+
+
+def test_update_instance_partition_flattened():
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(
+ type(client.transport.update_instance_partition), "__call__"
+ ) as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = operations_pb2.Operation(name="operations/op")
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ client.update_instance_partition(
+ instance_partition=spanner_instance_admin.InstancePartition(
+ name="name_value"
+ ),
+ field_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].instance_partition
+ mock_val = spanner_instance_admin.InstancePartition(name="name_value")
+ assert arg == mock_val
+ arg = args[0].field_mask
+ mock_val = field_mask_pb2.FieldMask(paths=["paths_value"])
+ assert arg == mock_val
+
+
+def test_update_instance_partition_flattened_error():
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ client.update_instance_partition(
+ spanner_instance_admin.UpdateInstancePartitionRequest(),
+ instance_partition=spanner_instance_admin.InstancePartition(
+ name="name_value"
+ ),
+ field_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
+ )
+
+
+@pytest.mark.asyncio
+async def test_update_instance_partition_flattened_async():
+ client = InstanceAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(
+ type(client.transport.update_instance_partition), "__call__"
+ ) as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+ operations_pb2.Operation(name="operations/spam")
+ )
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ response = await client.update_instance_partition(
+ instance_partition=spanner_instance_admin.InstancePartition(
+ name="name_value"
+ ),
+ field_mask=field_mask_pb2.FieldMask(paths=["paths_value"]),
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls)
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].instance_partition
+ mock_val = spanner_instance_admin.InstancePartition(name="name_value")
+ assert arg == mock_val
+ arg = args[0].field_mask
+ mock_val = field_mask_pb2.FieldMask(paths=["paths_value"])
+ assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_update_instance_partition_flattened_error_async():
+ client = InstanceAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
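+ # (The ValueError is raised client-side; no RPC is ever attempted.)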
+ with pytest.raises(ValueError): + await client.update_instance_partition( + spanner_instance_admin.UpdateInstancePartitionRequest(), + instance_partition=spanner_instance_admin.InstancePartition( + name="name_value" + ), + field_mask=field_mask_pb2.FieldMask(paths=["paths_value"]), + ) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.ListInstancePartitionOperationsRequest, + dict, + ], +) +def test_list_instance_partition_operations(request_type, transport: str = "grpc"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partition_operations), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = ( + spanner_instance_admin.ListInstancePartitionOperationsResponse( + next_page_token="next_page_token_value", + unreachable_instance_partitions=[ + "unreachable_instance_partitions_value" + ], + ) + ) + response = client.list_instance_partition_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.ListInstancePartitionOperationsRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListInstancePartitionOperationsPager) + assert response.next_page_token == "next_page_token_value" + assert response.unreachable_instance_partitions == [ + "unreachable_instance_partitions_value" + ] + + +def test_list_instance_partition_operations_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner_instance_admin.ListInstancePartitionOperationsRequest( + parent="parent_value", + filter="filter_value", + page_token="page_token_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partition_operations), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. 
+ ) + client.list_instance_partition_operations(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_instance_admin.ListInstancePartitionOperationsRequest( + parent="parent_value", + filter="filter_value", + page_token="page_token_value", + ) + + +def test_list_instance_partition_operations_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.list_instance_partition_operations + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.list_instance_partition_operations + ] = mock_rpc + request = {} + client.list_instance_partition_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.list_instance_partition_operations(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_instance_partition_operations_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.list_instance_partition_operations + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.list_instance_partition_operations + ] = mock_rpc + + request = {} + await client.list_instance_partition_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.list_instance_partition_operations(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_instance_partition_operations_async( + transport: str = "grpc_asyncio", + request_type=spanner_instance_admin.ListInstancePartitionOperationsRequest, +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. 
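+ # (grpc_helpers_async.FakeUnaryUnaryCall wraps the canned response in
+ # an awaitable, standing in for a real unary-unary gRPC call.)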
+ with mock.patch.object( + type(client.transport.list_instance_partition_operations), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.ListInstancePartitionOperationsResponse( + next_page_token="next_page_token_value", + unreachable_instance_partitions=[ + "unreachable_instance_partitions_value" + ], + ) + ) + response = await client.list_instance_partition_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.ListInstancePartitionOperationsRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListInstancePartitionOperationsAsyncPager) + assert response.next_page_token == "next_page_token_value" + assert response.unreachable_instance_partitions == [ + "unreachable_instance_partitions_value" + ] + + +@pytest.mark.asyncio +async def test_list_instance_partition_operations_async_from_dict(): + await test_list_instance_partition_operations_async(request_type=dict) + + +def test_list_instance_partition_operations_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.ListInstancePartitionOperationsRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partition_operations), "__call__" + ) as call: + call.return_value = ( + spanner_instance_admin.ListInstancePartitionOperationsResponse() + ) + client.list_instance_partition_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "parent=parent_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_list_instance_partition_operations_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.ListInstancePartitionOperationsRequest() + + request.parent = "parent_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partition_operations), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.ListInstancePartitionOperationsResponse() + ) + await client.list_instance_partition_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. 
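+ # (Routing headers travel in the "metadata" kwarg of the stub call.)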
+ _, _, kw = call.mock_calls[0]
+ assert (
+ "x-goog-request-params",
+ "parent=parent_value",
+ ) in kw["metadata"]
+
+
+def test_list_instance_partition_operations_flattened():
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(
+ type(client.transport.list_instance_partition_operations), "__call__"
+ ) as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = (
+ spanner_instance_admin.ListInstancePartitionOperationsResponse()
+ )
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ client.list_instance_partition_operations(
+ parent="parent_value",
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls) == 1
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].parent
+ mock_val = "parent_value"
+ assert arg == mock_val
+
+
+def test_list_instance_partition_operations_flattened_error():
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ client.list_instance_partition_operations(
+ spanner_instance_admin.ListInstancePartitionOperationsRequest(),
+ parent="parent_value",
+ )
+
+
+@pytest.mark.asyncio
+async def test_list_instance_partition_operations_flattened_async():
+ client = InstanceAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(
+ type(client.transport.list_instance_partition_operations), "__call__"
+ ) as call:
+ # Designate an appropriate return value for the call.
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+ spanner_instance_admin.ListInstancePartitionOperationsResponse()
+ )
+ # Call the method with a truthy value for each flattened field,
+ # using the keyword arguments to the method.
+ response = await client.list_instance_partition_operations(
+ parent="parent_value",
+ )
+
+ # Establish that the underlying call was made with the expected
+ # request object values.
+ assert len(call.mock_calls)
+ _, args, _ = call.mock_calls[0]
+ arg = args[0].parent
+ mock_val = "parent_value"
+ assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_list_instance_partition_operations_flattened_error_async():
+ client = InstanceAdminAsyncClient(
+ credentials=async_anonymous_credentials(),
+ )
+
+ # Attempting to call a method with both a request object and flattened
+ # fields is an error.
+ with pytest.raises(ValueError):
+ await client.list_instance_partition_operations(
+ spanner_instance_admin.ListInstancePartitionOperationsRequest(),
+ parent="parent_value",
+ )
+
+
+def test_list_instance_partition_operations_pager(transport_name: str = "grpc"):
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(),
+ transport=transport_name,
+ )
+
+ # Mock the actual call within the gRPC stub, and fake the request.
+ with mock.patch.object(
+ type(client.transport.list_instance_partition_operations), "__call__"
+ ) as call:
+ # Set the response to a series of pages.
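+ # (The trailing RuntimeError is a sentinel: a correct pager stops at
+ # the last page and never reaches it.)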
+ call.side_effect = ( + spanner_instance_admin.ListInstancePartitionOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstancePartitionOperationsResponse( + operations=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstancePartitionOperationsResponse( + operations=[ + operations_pb2.Operation(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstancePartitionOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + ), + RuntimeError, + ) + + expected_metadata = () + retry = retries.Retry() + timeout = 5 + expected_metadata = tuple(expected_metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)), + ) + pager = client.list_instance_partition_operations( + request={}, retry=retry, timeout=timeout + ) + + assert pager._metadata == expected_metadata + assert pager._retry == retry + assert pager._timeout == timeout + + results = list(pager) + assert len(results) == 6 + assert all(isinstance(i, operations_pb2.Operation) for i in results) + + +def test_list_instance_partition_operations_pages(transport_name: str = "grpc"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport_name, + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partition_operations), "__call__" + ) as call: + # Set the response to a series of pages. + call.side_effect = ( + spanner_instance_admin.ListInstancePartitionOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstancePartitionOperationsResponse( + operations=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstancePartitionOperationsResponse( + operations=[ + operations_pb2.Operation(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstancePartitionOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + ), + RuntimeError, + ) + pages = list(client.list_instance_partition_operations(request={}).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.asyncio +async def test_list_instance_partition_operations_async_pager(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partition_operations), + "__call__", + new_callable=mock.AsyncMock, + ) as call: + # Set the response to a series of pages. 
+ call.side_effect = ( + spanner_instance_admin.ListInstancePartitionOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstancePartitionOperationsResponse( + operations=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstancePartitionOperationsResponse( + operations=[ + operations_pb2.Operation(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstancePartitionOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + ), + RuntimeError, + ) + async_pager = await client.list_instance_partition_operations( + request={}, + ) + assert async_pager.next_page_token == "abc" + responses = [] + async for response in async_pager: # pragma: no branch + responses.append(response) + + assert len(responses) == 6 + assert all(isinstance(i, operations_pb2.Operation) for i in responses) + + +@pytest.mark.asyncio +async def test_list_instance_partition_operations_async_pages(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partition_operations), + "__call__", + new_callable=mock.AsyncMock, + ) as call: + # Set the response to a series of pages. + call.side_effect = ( + spanner_instance_admin.ListInstancePartitionOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstancePartitionOperationsResponse( + operations=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstancePartitionOperationsResponse( + operations=[ + operations_pb2.Operation(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstancePartitionOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + ), + RuntimeError, + ) + pages = [] + # Workaround issue in python 3.9 related to code coverage by adding `# pragma: no branch` + # See https://github.com/googleapis/gapic-generator-python/pull/1174#issuecomment-1025132372 + async for page_ in ( # pragma: no branch + await client.list_instance_partition_operations(request={}) + ).pages: + pages.append(page_) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.MoveInstanceRequest, + dict, + ], +) +def test_move_instance(request_type, transport: str = "grpc"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.move_instance), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/spam") + response = client.move_instance(request) + + # Establish that the underlying gRPC stub method was called. 
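+ # (An empty request, whether a message or a dict, is normalized to the
+ # default-constructed request type.)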
+ assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.MoveInstanceRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +def test_move_instance_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner_instance_admin.MoveInstanceRequest( + name="name_value", + target_config="target_config_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.move_instance), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.move_instance(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner_instance_admin.MoveInstanceRequest( + name="name_value", + target_config="target_config_value", + ) + + +def test_move_instance_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.move_instance in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.move_instance] = mock_rpc + request = {} + client.move_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. 
+ # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.move_instance(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_move_instance_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.move_instance + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.move_instance + ] = mock_rpc + + request = {} + await client.move_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods call wrapper_fn to build a cached + # client._transport.operations_client instance on first rpc call. + # Subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + await client.move_instance(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_move_instance_async( + transport: str = "grpc_asyncio", + request_type=spanner_instance_admin.MoveInstanceRequest, +): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.move_instance), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + response = await client.move_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner_instance_admin.MoveInstanceRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +@pytest.mark.asyncio +async def test_move_instance_async_from_dict(): + await test_move_instance_async(request_type=dict) + + +def test_move_instance_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.MoveInstanceRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. 
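+ # (move_instance is a long-running operation, so the stub returns an
+ # Operation message.)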
+ with mock.patch.object(type(client.transport.move_instance), "__call__") as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.move_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_move_instance_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner_instance_admin.MoveInstanceRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.move_instance), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/op") + ) + await client.move_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +def test_list_instance_configs_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.list_instance_configs + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.list_instance_configs + ] = mock_rpc + + request = {} + client.list_instance_configs(request) + + # Establish that the underlying gRPC stub method was called. 
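+ # (mock_rpc replaced the cached wrapper above, so its call count
+ # tracks invocations of the wrapped RPC.)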
+ assert mock_rpc.call_count == 1 + + client.list_instance_configs(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_list_instance_configs_rest_required_fields( + request_type=spanner_instance_admin.ListInstanceConfigsRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request_init["parent"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_instance_configs._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["parent"] = "parent_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_instance_configs._get_unset_required_fields(jsonified_request) + # Check that path parameters and body parameters are not mixing in. + assert not set(unset_fields) - set( + ( + "page_size", + "page_token", + ) + ) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "parent" in jsonified_request + assert jsonified_request["parent"] == "parent_value" + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.ListInstanceConfigsResponse() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
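+ # (Stubbing transcode() lets the test pin the URI, HTTP method, and
+ # query params regardless of the real http_options.)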
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "get", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_instance_admin.ListInstanceConfigsResponse.pb( + return_value + ) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.list_instance_configs(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_list_instance_configs_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.list_instance_configs._get_unset_required_fields({}) + assert set(unset_fields) == ( + set( + ( + "pageSize", + "pageToken", + ) + ) + & set(("parent",)) + ) + + +def test_list_instance_configs_rest_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.ListInstanceConfigsResponse() + + # get arguments that satisfy an http rule for this method + sample_request = {"parent": "projects/sample1"} + + # get truthy value for each flattened field + mock_args = dict( + parent="parent_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = spanner_instance_admin.ListInstanceConfigsResponse.pb( + return_value + ) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.list_instance_configs(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{parent=projects/*}/instanceConfigs" % client.transport._host, + args[1], + ) + + +def test_list_instance_configs_rest_flattened_error(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.list_instance_configs( + spanner_instance_admin.ListInstanceConfigsRequest(), + parent="parent_value", + ) + + +def test_list_instance_configs_rest_pager(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # TODO(kbandes): remove this mock unless there's a good reason for it. 
+ # with mock.patch.object(path_template, 'transcode') as transcode: + # Set the response as a series of pages + response = ( + spanner_instance_admin.ListInstanceConfigsResponse( + instance_configs=[ + spanner_instance_admin.InstanceConfig(), + spanner_instance_admin.InstanceConfig(), + spanner_instance_admin.InstanceConfig(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstanceConfigsResponse( + instance_configs=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstanceConfigsResponse( + instance_configs=[ + spanner_instance_admin.InstanceConfig(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstanceConfigsResponse( + instance_configs=[ + spanner_instance_admin.InstanceConfig(), + spanner_instance_admin.InstanceConfig(), + ], + ), + ) + # Two responses for two calls + response = response + response + + # Wrap the values into proper Response objs + response = tuple( + spanner_instance_admin.ListInstanceConfigsResponse.to_json(x) + for x in response + ) + return_values = tuple(Response() for i in response) + for return_val, response_val in zip(return_values, response): + return_val._content = response_val.encode("UTF-8") + return_val.status_code = 200 + req.side_effect = return_values + + sample_request = {"parent": "projects/sample1"} + + pager = client.list_instance_configs(request=sample_request) + + results = list(pager) + assert len(results) == 6 + assert all( + isinstance(i, spanner_instance_admin.InstanceConfig) for i in results + ) + + pages = list(client.list_instance_configs(request=sample_request).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +def test_get_instance_config_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.get_instance_config in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.get_instance_config + ] = mock_rpc + + request = {} + client.get_instance_config(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.get_instance_config(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_get_instance_config_rest_required_fields( + request_type=spanner_instance_admin.GetInstanceConfigRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request_init["name"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).get_instance_config._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["name"] = "name_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).get_instance_config._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "name" in jsonified_request + assert jsonified_request["name"] == "name_value" + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.InstanceConfig() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. + pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "get", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_instance_admin.InstanceConfig.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.get_instance_config(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_get_instance_config_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.get_instance_config._get_unset_required_fields({}) + assert set(unset_fields) == (set(()) & set(("name",))) + + +def test_get_instance_config_rest_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. 
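+ # (Patching the transport's requests Session keeps the test offline
+ # while still exercising real URL transcoding, validated below.)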
+ with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.InstanceConfig() + + # get arguments that satisfy an http rule for this method + sample_request = {"name": "projects/sample1/instanceConfigs/sample2"} + + # get truthy value for each flattened field + mock_args = dict( + name="name_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = spanner_instance_admin.InstanceConfig.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.get_instance_config(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{name=projects/*/instanceConfigs/*}" % client.transport._host, + args[1], + ) + + +def test_get_instance_config_rest_flattened_error(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.get_instance_config( + spanner_instance_admin.GetInstanceConfigRequest(), + name="name_value", + ) + + +def test_create_instance_config_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.create_instance_config + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.create_instance_config + ] = mock_rpc + + request = {} + client.create_instance_config(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + # Operation methods build a cached wrapper on first rpc call + # subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.create_instance_config(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_create_instance_config_rest_required_fields( + request_type=spanner_instance_admin.CreateInstanceConfigRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request_init["parent"] = "" + request_init["instance_config_id"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).create_instance_config._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["parent"] = "parent_value" + jsonified_request["instanceConfigId"] = "instance_config_id_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).create_instance_config._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "parent" in jsonified_request + assert jsonified_request["parent"] == "parent_value" + assert "instanceConfigId" in jsonified_request + assert jsonified_request["instanceConfigId"] == "instance_config_id_value" + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.create_instance_config(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_create_instance_config_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.create_instance_config._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "parent", + "instanceConfigId", + "instanceConfig", + ) + ) + ) + + +def test_create_instance_config_rest_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + + # get arguments that satisfy an http rule for this method + sample_request = {"parent": "projects/sample1"} + + # get truthy value for each flattened field + mock_args = dict( + parent="parent_value", + instance_config=spanner_instance_admin.InstanceConfig(name="name_value"), + instance_config_id="instance_config_id_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.create_instance_config(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{parent=projects/*}/instanceConfigs" % client.transport._host, + args[1], + ) + + +def test_create_instance_config_rest_flattened_error(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.create_instance_config( + spanner_instance_admin.CreateInstanceConfigRequest(), + parent="parent_value", + instance_config=spanner_instance_admin.InstanceConfig(name="name_value"), + instance_config_id="instance_config_id_value", + ) + + +def test_update_instance_config_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.update_instance_config + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.update_instance_config + ] = mock_rpc + + request = {} + client.update_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods build a cached wrapper on first rpc call + # subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.update_instance_config(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_update_instance_config_rest_required_fields( + request_type=spanner_instance_admin.UpdateInstanceConfigRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).update_instance_config._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).update_instance_config._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
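+ # UpdateInstanceConfig transcodes to HTTP PATCH (see "method" below).
+ # Unlike the create test above, there are no required string fields to
+ # seed: instance_config and update_mask are message-typed, so
+ # jsonified_request needs no pre-population.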
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "patch", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.update_instance_config(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_update_instance_config_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.update_instance_config._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "instanceConfig", + "updateMask", + ) + ) + ) + + +def test_update_instance_config_rest_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + + # get arguments that satisfy an http rule for this method + sample_request = { + "instance_config": {"name": "projects/sample1/instanceConfigs/sample2"} + } + + # get truthy value for each flattened field + mock_args = dict( + instance_config=spanner_instance_admin.InstanceConfig(name="name_value"), + update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]), + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.update_instance_config(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{instance_config.name=projects/*/instanceConfigs/*}" + % client.transport._host, + args[1], + ) + + +def test_update_instance_config_rest_flattened_error(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.update_instance_config( + spanner_instance_admin.UpdateInstanceConfigRequest(), + instance_config=spanner_instance_admin.InstanceConfig(name="name_value"), + update_mask=field_mask_pb2.FieldMask(paths=["paths_value"]), + ) + + +def test_delete_instance_config_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.delete_instance_config + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.delete_instance_config + ] = mock_rpc + + request = {} + client.delete_instance_config(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.delete_instance_config(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_delete_instance_config_rest_required_fields( + request_type=spanner_instance_admin.DeleteInstanceConfigRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request_init["name"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).delete_instance_config._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["name"] = "name_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).delete_instance_config._get_unset_required_fields(jsonified_request) + # Check that path parameters and body parameters are not mixing in. + assert not set(unset_fields) - set( + ( + "etag", + "validate_only", + ) + ) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "name" in jsonified_request + assert jsonified_request["name"] == "name_value" + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = None + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
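+ # DELETE carries no request body, so no transcode_result["body"] is set
+ # below, and the faked response is an empty JSON string.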
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "delete", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = "" + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.delete_instance_config(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_delete_instance_config_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.delete_instance_config._get_unset_required_fields({}) + assert set(unset_fields) == ( + set( + ( + "etag", + "validateOnly", + ) + ) + & set(("name",)) + ) + + +def test_delete_instance_config_rest_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = None + + # get arguments that satisfy an http rule for this method + sample_request = {"name": "projects/sample1/instanceConfigs/sample2"} + + # get truthy value for each flattened field + mock_args = dict( + name="name_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = "" + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.delete_instance_config(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{name=projects/*/instanceConfigs/*}" % client.transport._host, + args[1], + ) + + +def test_delete_instance_config_rest_flattened_error(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.delete_instance_config( + spanner_instance_admin.DeleteInstanceConfigRequest(), + name="name_value", + ) + + +def test_list_instance_config_operations_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.list_instance_config_operations + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.list_instance_config_operations + ] = mock_rpc + + request = {} + client.list_instance_config_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.list_instance_config_operations(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_list_instance_config_operations_rest_required_fields( + request_type=spanner_instance_admin.ListInstanceConfigOperationsRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request_init["parent"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_instance_config_operations._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["parent"] = "parent_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_instance_config_operations._get_unset_required_fields(jsonified_request) + # Check that path parameters and body parameters are not mixing in. + assert not set(unset_fields) - set( + ( + "filter", + "page_size", + "page_token", + ) + ) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "parent" in jsonified_request + assert jsonified_request["parent"] == "parent_value" + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.ListInstanceConfigOperationsResponse() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. 
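+ # (Concretely: the real transcode() would reject this request because its
+ # required "parent" path parameter is empty in request_init above.)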
+ with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. + pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "get", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = ( + spanner_instance_admin.ListInstanceConfigOperationsResponse.pb( + return_value + ) + ) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.list_instance_config_operations(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_list_instance_config_operations_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.list_instance_config_operations._get_unset_required_fields( + {} + ) + assert set(unset_fields) == ( + set( + ( + "filter", + "pageSize", + "pageToken", + ) + ) + & set(("parent",)) + ) + + +def test_list_instance_config_operations_rest_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.ListInstanceConfigOperationsResponse() + + # get arguments that satisfy an http rule for this method + sample_request = {"parent": "projects/sample1"} + + # get truthy value for each flattened field + mock_args = dict( + parent="parent_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = spanner_instance_admin.ListInstanceConfigOperationsResponse.pb( + return_value + ) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.list_instance_config_operations(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{parent=projects/*}/instanceConfigOperations" + % client.transport._host, + args[1], + ) + + +def test_list_instance_config_operations_rest_flattened_error(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.list_instance_config_operations( + spanner_instance_admin.ListInstanceConfigOperationsRequest(), + parent="parent_value", + ) + + +def test_list_instance_config_operations_rest_pager(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # TODO(kbandes): remove this mock unless there's a good reason for it. + # with mock.patch.object(path_template, 'transcode') as transcode: + # Set the response as a series of pages + response = ( + spanner_instance_admin.ListInstanceConfigOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstanceConfigOperationsResponse( + operations=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstanceConfigOperationsResponse( + operations=[ + operations_pb2.Operation(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstanceConfigOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + ), + ) + # Two responses for two calls + response = response + response + + # Wrap the values into proper Response objs + response = tuple( + spanner_instance_admin.ListInstanceConfigOperationsResponse.to_json(x) + for x in response + ) + return_values = tuple(Response() for i in response) + for return_val, response_val in zip(return_values, response): + return_val._content = response_val.encode("UTF-8") + return_val.status_code = 200 + req.side_effect = return_values + + sample_request = {"parent": "projects/sample1"} + + pager = client.list_instance_config_operations(request=sample_request) + + results = list(pager) + assert len(results) == 6 + assert all(isinstance(i, operations_pb2.Operation) for i in results) + + pages = list( + client.list_instance_config_operations(request=sample_request).pages + ) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +def test_list_instances_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.list_instances in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.list_instances] = mock_rpc + + request = {} + client.list_instances(request) + + # Establish that the underlying gRPC stub method was called. 
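+ # (list_instances does not return a long-running operation, so there is
+ # no extra wrapper_fn.reset_mock() between the two calls as in the
+ # operation-returning tests above.)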
+ assert mock_rpc.call_count == 1 + + client.list_instances(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_list_instances_rest_required_fields( + request_type=spanner_instance_admin.ListInstancesRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request_init["parent"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_instances._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["parent"] = "parent_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_instances._get_unset_required_fields(jsonified_request) + # Check that path parameters and body parameters are not mixing in. + assert not set(unset_fields) - set( + ( + "filter", + "instance_deadline", + "page_size", + "page_token", + ) + ) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "parent" in jsonified_request + assert jsonified_request["parent"] == "parent_value" + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.ListInstancesResponse() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
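+ # List methods transcode to GET with no body; the optional filter,
+ # deadline, and paging fields checked above travel as query-string
+ # parameters.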
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "get", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_instance_admin.ListInstancesResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.list_instances(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_list_instances_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.list_instances._get_unset_required_fields({}) + assert set(unset_fields) == ( + set( + ( + "filter", + "instanceDeadline", + "pageSize", + "pageToken", + ) + ) + & set(("parent",)) + ) + + +def test_list_instances_rest_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.ListInstancesResponse() + + # get arguments that satisfy an http rule for this method + sample_request = {"parent": "projects/sample1"} + + # get truthy value for each flattened field + mock_args = dict( + parent="parent_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = spanner_instance_admin.ListInstancesResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.list_instances(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{parent=projects/*}/instances" % client.transport._host, args[1] + ) + + +def test_list_instances_rest_flattened_error(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.list_instances( + spanner_instance_admin.ListInstancesRequest(), + parent="parent_value", + ) + + +def test_list_instances_rest_pager(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # TODO(kbandes): remove this mock unless there's a good reason for it. 
+ # with mock.patch.object(path_template, 'transcode') as transcode: + # Set the response as a series of pages + response = ( + spanner_instance_admin.ListInstancesResponse( + instances=[ + spanner_instance_admin.Instance(), + spanner_instance_admin.Instance(), + spanner_instance_admin.Instance(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstancesResponse( + instances=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstancesResponse( + instances=[ + spanner_instance_admin.Instance(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstancesResponse( + instances=[ + spanner_instance_admin.Instance(), + spanner_instance_admin.Instance(), + ], + ), + ) + # Two responses for two calls + response = response + response + + # Wrap the values into proper Response objs + response = tuple( + spanner_instance_admin.ListInstancesResponse.to_json(x) for x in response + ) + return_values = tuple(Response() for i in response) + for return_val, response_val in zip(return_values, response): + return_val._content = response_val.encode("UTF-8") + return_val.status_code = 200 + req.side_effect = return_values + + sample_request = {"parent": "projects/sample1"} + + pager = client.list_instances(request=sample_request) + + results = list(pager) + assert len(results) == 6 + assert all(isinstance(i, spanner_instance_admin.Instance) for i in results) + + pages = list(client.list_instances(request=sample_request).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +def test_list_instance_partitions_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.list_instance_partitions + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.list_instance_partitions + ] = mock_rpc + + request = {} + client.list_instance_partitions(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.list_instance_partitions(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_list_instance_partitions_rest_required_fields( + request_type=spanner_instance_admin.ListInstancePartitionsRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request_init["parent"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_instance_partitions._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["parent"] = "parent_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_instance_partitions._get_unset_required_fields(jsonified_request) + # Check that path parameters and body parameters are not mixing in. + assert not set(unset_fields) - set( + ( + "instance_partition_deadline", + "page_size", + "page_token", + ) + ) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "parent" in jsonified_request + assert jsonified_request["parent"] == "parent_value" + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.ListInstancePartitionsResponse() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "get", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_instance_admin.ListInstancePartitionsResponse.pb( + return_value + ) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.list_instance_partitions(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_list_instance_partitions_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.list_instance_partitions._get_unset_required_fields({}) + assert set(unset_fields) == ( + set( + ( + "instancePartitionDeadline", + "pageSize", + "pageToken", + ) + ) + & set(("parent",)) + ) + + +def test_list_instance_partitions_rest_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.ListInstancePartitionsResponse() + + # get arguments that satisfy an http rule for this method + sample_request = {"parent": "projects/sample1/instances/sample2"} + + # get truthy value for each flattened field + mock_args = dict( + parent="parent_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = spanner_instance_admin.ListInstancePartitionsResponse.pb( + return_value + ) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.list_instance_partitions(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{parent=projects/*/instances/*}/instancePartitions" + % client.transport._host, + args[1], + ) + + +def test_list_instance_partitions_rest_flattened_error(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.list_instance_partitions( + spanner_instance_admin.ListInstancePartitionsRequest(), + parent="parent_value", + ) + + +def test_list_instance_partitions_rest_pager(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Mock the http request call within the method and fake a response. 
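+ # Each canned Response queued on req.side_effect below feeds exactly one
+ # page fetch, so fully iterating the pager issues four HTTP calls, and
+ # the doubled response tuple covers the second (.pages) pass.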
+ with mock.patch.object(Session, "request") as req: + # TODO(kbandes): remove this mock unless there's a good reason for it. + # with mock.patch.object(path_template, 'transcode') as transcode: + # Set the response as a series of pages + response = ( + spanner_instance_admin.ListInstancePartitionsResponse( + instance_partitions=[ + spanner_instance_admin.InstancePartition(), + spanner_instance_admin.InstancePartition(), + spanner_instance_admin.InstancePartition(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstancePartitionsResponse( + instance_partitions=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstancePartitionsResponse( + instance_partitions=[ + spanner_instance_admin.InstancePartition(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstancePartitionsResponse( + instance_partitions=[ + spanner_instance_admin.InstancePartition(), + spanner_instance_admin.InstancePartition(), + ], + ), + ) + # Two responses for two calls + response = response + response + + # Wrap the values into proper Response objs + response = tuple( + spanner_instance_admin.ListInstancePartitionsResponse.to_json(x) + for x in response + ) + return_values = tuple(Response() for i in response) + for return_val, response_val in zip(return_values, response): + return_val._content = response_val.encode("UTF-8") + return_val.status_code = 200 + req.side_effect = return_values + + sample_request = {"parent": "projects/sample1/instances/sample2"} + + pager = client.list_instance_partitions(request=sample_request) + + results = list(pager) + assert len(results) == 6 + assert all( + isinstance(i, spanner_instance_admin.InstancePartition) for i in results + ) + + pages = list(client.list_instance_partitions(request=sample_request).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +def test_get_instance_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.get_instance in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.get_instance] = mock_rpc + + request = {} + client.get_instance(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.get_instance(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_get_instance_rest_required_fields( + request_type=spanner_instance_admin.GetInstanceRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request_init["name"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).get_instance._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["name"] = "name_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).get_instance._get_unset_required_fields(jsonified_request) + # Check that path parameters and body parameters are not mixing in. + assert not set(unset_fields) - set(("field_mask",)) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "name" in jsonified_request + assert jsonified_request["name"] == "name_value" + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.Instance() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. + pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "get", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_instance_admin.Instance.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.get_instance(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_get_instance_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.get_instance._get_unset_required_fields({}) + assert set(unset_fields) == (set(("fieldMask",)) & set(("name",))) + + +def test_get_instance_rest_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. 
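+ # Unlike the required-fields tests, transcode() runs for real here, so
+ # sample_request must name a resource that actually matches the
+ # {name=projects/*/instances/*} path template.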
+ with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.Instance() + + # get arguments that satisfy an http rule for this method + sample_request = {"name": "projects/sample1/instances/sample2"} + + # get truthy value for each flattened field + mock_args = dict( + name="name_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = spanner_instance_admin.Instance.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.get_instance(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{name=projects/*/instances/*}" % client.transport._host, args[1] + ) + + +def test_get_instance_rest_flattened_error(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.get_instance( + spanner_instance_admin.GetInstanceRequest(), + name="name_value", + ) + + +def test_create_instance_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.create_instance in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.create_instance] = mock_rpc + + request = {} + client.create_instance(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + # Operation methods build a cached wrapper on first rpc call + # subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.create_instance(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_create_instance_rest_required_fields( + request_type=spanner_instance_admin.CreateInstanceRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request_init["parent"] = "" + request_init["instance_id"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).create_instance._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["parent"] = "parent_value" + jsonified_request["instanceId"] = "instance_id_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).create_instance._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "parent" in jsonified_request + assert jsonified_request["parent"] == "parent_value" + assert "instanceId" in jsonified_request + assert jsonified_request["instanceId"] == "instance_id_value" + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
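+ # CreateInstance POSTs the message as the request body (note
+ # transcode_result["body"] below), in contrast to the GET and DELETE
+ # stubs above, which omit it.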
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.create_instance(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_create_instance_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.create_instance._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "parent", + "instanceId", + "instance", + ) + ) + ) + + +def test_create_instance_rest_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + + # get arguments that satisfy an http rule for this method + sample_request = {"parent": "projects/sample1"} + + # get truthy value for each flattened field + mock_args = dict( + parent="parent_value", + instance_id="instance_id_value", + instance=spanner_instance_admin.Instance(name="name_value"), + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.create_instance(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{parent=projects/*}/instances" % client.transport._host, args[1] + ) + + +def test_create_instance_rest_flattened_error(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.create_instance( + spanner_instance_admin.CreateInstanceRequest(), + parent="parent_value", + instance_id="instance_id_value", + instance=spanner_instance_admin.Instance(name="name_value"), + ) + + +def test_update_instance_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.update_instance in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.update_instance] = mock_rpc + + request = {} + client.update_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods build a cached wrapper on first rpc call + # subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.update_instance(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_update_instance_rest_required_fields( + request_type=spanner_instance_admin.UpdateInstanceRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).update_instance._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).update_instance._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "patch", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.update_instance(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_update_instance_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.update_instance._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "instance", + "fieldMask", + ) + ) + ) + + +def test_update_instance_rest_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + + # get arguments that satisfy an http rule for this method + sample_request = {"instance": {"name": "projects/sample1/instances/sample2"}} + + # get truthy value for each flattened field + mock_args = dict( + instance=spanner_instance_admin.Instance(name="name_value"), + field_mask=field_mask_pb2.FieldMask(paths=["paths_value"]), + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.update_instance(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{instance.name=projects/*/instances/*}" % client.transport._host, + args[1], + ) + + +def test_update_instance_rest_flattened_error(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.update_instance( + spanner_instance_admin.UpdateInstanceRequest(), + instance=spanner_instance_admin.Instance(name="name_value"), + field_mask=field_mask_pb2.FieldMask(paths=["paths_value"]), + ) + + +def test_delete_instance_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.delete_instance in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.delete_instance] = mock_rpc + + request = {} + client.delete_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.delete_instance(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_delete_instance_rest_required_fields( + request_type=spanner_instance_admin.DeleteInstanceRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request_init["name"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).delete_instance._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["name"] = "name_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).delete_instance._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "name" in jsonified_request + assert jsonified_request["name"] == "name_value" + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = None + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "delete", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = "" + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.delete_instance(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_delete_instance_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.delete_instance._get_unset_required_fields({}) + assert set(unset_fields) == (set(()) & set(("name",))) + + +def test_delete_instance_rest_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = None + + # get arguments that satisfy an http rule for this method + sample_request = {"name": "projects/sample1/instances/sample2"} + + # get truthy value for each flattened field + mock_args = dict( + name="name_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = "" + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.delete_instance(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{name=projects/*/instances/*}" % client.transport._host, args[1] + ) + + +def test_delete_instance_rest_flattened_error(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.delete_instance( + spanner_instance_admin.DeleteInstanceRequest(), + name="name_value", + ) + + +def test_set_iam_policy_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.set_iam_policy in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. 
+ ) + client._transport._wrapped_methods[client._transport.set_iam_policy] = mock_rpc + + request = {} + client.set_iam_policy(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.set_iam_policy(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_set_iam_policy_rest_required_fields( + request_type=iam_policy_pb2.SetIamPolicyRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request_init["resource"] = "" + request = request_type(**request_init) + pb_request = request + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).set_iam_policy._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["resource"] = "resource_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).set_iam_policy._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "resource" in jsonified_request + assert jsonified_request["resource"] == "resource_value" + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = policy_pb2.Policy() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. + pb_request = request + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.set_iam_policy(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_set_iam_policy_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.set_iam_policy._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "resource", + "policy", + ) + ) + ) + + +def test_set_iam_policy_rest_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. 
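+ # (the REST transport sends requests through a requests-style session,
+ # so patching its request() intercepts the outgoing HTTP call.)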
+ with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = policy_pb2.Policy() + + # get arguments that satisfy an http rule for this method + sample_request = {"resource": "projects/sample1/instances/sample2"} + + # get truthy value for each flattened field + mock_args = dict( + resource="resource_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.set_iam_policy(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{resource=projects/*/instances/*}:setIamPolicy" + % client.transport._host, + args[1], + ) + + +def test_set_iam_policy_rest_flattened_error(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.set_iam_policy( + iam_policy_pb2.SetIamPolicyRequest(), + resource="resource_value", + ) + + +def test_get_iam_policy_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.get_iam_policy in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.get_iam_policy] = mock_rpc + + request = {} + client.get_iam_policy(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.get_iam_policy(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_get_iam_policy_rest_required_fields( + request_type=iam_policy_pb2.GetIamPolicyRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request_init["resource"] = "" + request = request_type(**request_init) + pb_request = request + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).get_iam_policy._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["resource"] = "resource_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).get_iam_policy._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "resource" in jsonified_request + assert jsonified_request["resource"] == "resource_value" + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = policy_pb2.Policy() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. + pb_request = request + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.get_iam_policy(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_get_iam_policy_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.get_iam_policy._get_unset_required_fields({}) + assert set(unset_fields) == (set(()) & set(("resource",))) + + +def test_get_iam_policy_rest_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. 
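+ # (the proto below is serialized to JSON and handed back as the body
+ # of the faked HTTP response.)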
+ return_value = policy_pb2.Policy() + + # get arguments that satisfy an http rule for this method + sample_request = {"resource": "projects/sample1/instances/sample2"} + + # get truthy value for each flattened field + mock_args = dict( + resource="resource_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.get_iam_policy(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{resource=projects/*/instances/*}:getIamPolicy" + % client.transport._host, + args[1], + ) + + +def test_get_iam_policy_rest_flattened_error(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.get_iam_policy( + iam_policy_pb2.GetIamPolicyRequest(), + resource="resource_value", + ) + + +def test_test_iam_permissions_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.test_iam_permissions in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.test_iam_permissions + ] = mock_rpc + + request = {} + client.test_iam_permissions(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.test_iam_permissions(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_test_iam_permissions_rest_required_fields( + request_type=iam_policy_pb2.TestIamPermissionsRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request_init["resource"] = "" + request_init["permissions"] = "" + request = request_type(**request_init) + pb_request = request + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).test_iam_permissions._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["resource"] = "resource_value" + jsonified_request["permissions"] = "permissions_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).test_iam_permissions._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "resource" in jsonified_request + assert jsonified_request["resource"] == "resource_value" + assert "permissions" in jsonified_request + assert jsonified_request["permissions"] == "permissions_value" + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = iam_policy_pb2.TestIamPermissionsResponse() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.test_iam_permissions(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_test_iam_permissions_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.test_iam_permissions._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "resource", + "permissions", + ) + ) + ) + + +def test_test_iam_permissions_rest_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = iam_policy_pb2.TestIamPermissionsResponse() + + # get arguments that satisfy an http rule for this method + sample_request = {"resource": "projects/sample1/instances/sample2"} + + # get truthy value for each flattened field + mock_args = dict( + resource="resource_value", + permissions=["permissions_value"], + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.test_iam_permissions(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{resource=projects/*/instances/*}:testIamPermissions" + % client.transport._host, + args[1], + ) + + +def test_test_iam_permissions_rest_flattened_error(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.test_iam_permissions( + iam_policy_pb2.TestIamPermissionsRequest(), + resource="resource_value", + permissions=["permissions_value"], + ) + + +def test_get_instance_partition_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.get_instance_partition + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.get_instance_partition + ] = mock_rpc + + request = {} + client.get_instance_partition(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.get_instance_partition(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_get_instance_partition_rest_required_fields( + request_type=spanner_instance_admin.GetInstancePartitionRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request_init["name"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).get_instance_partition._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["name"] = "name_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).get_instance_partition._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "name" in jsonified_request + assert jsonified_request["name"] == "name_value" + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.InstancePartition() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
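+ # (this rpc maps to an http GET, so the transcode result below
+ # carries no body key.)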
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "get", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_instance_admin.InstancePartition.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.get_instance_partition(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_get_instance_partition_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.get_instance_partition._get_unset_required_fields({}) + assert set(unset_fields) == (set(()) & set(("name",))) + + +def test_get_instance_partition_rest_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.InstancePartition() + + # get arguments that satisfy an http rule for this method + sample_request = { + "name": "projects/sample1/instances/sample2/instancePartitions/sample3" + } + + # get truthy value for each flattened field + mock_args = dict( + name="name_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = spanner_instance_admin.InstancePartition.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.get_instance_partition(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{name=projects/*/instances/*/instancePartitions/*}" + % client.transport._host, + args[1], + ) + + +def test_get_instance_partition_rest_flattened_error(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.get_instance_partition( + spanner_instance_admin.GetInstancePartitionRequest(), + name="name_value", + ) + + +def test_create_instance_partition_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.create_instance_partition + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.create_instance_partition + ] = mock_rpc + + request = {} + client.create_instance_partition(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods build a cached wrapper on first rpc call + # subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.create_instance_partition(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_create_instance_partition_rest_required_fields( + request_type=spanner_instance_admin.CreateInstancePartitionRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request_init["parent"] = "" + request_init["instance_partition_id"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).create_instance_partition._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["parent"] = "parent_value" + jsonified_request["instancePartitionId"] = "instance_partition_id_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).create_instance_partition._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "parent" in jsonified_request + assert jsonified_request["parent"] == "parent_value" + assert "instancePartitionId" in jsonified_request + assert jsonified_request["instancePartitionId"] == "instance_partition_id_value" + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + # Mock the http request call within the method and fake a response. 
+ with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. + pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.create_instance_partition(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_create_instance_partition_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.create_instance_partition._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "parent", + "instancePartitionId", + "instancePartition", + ) + ) + ) + + +def test_create_instance_partition_rest_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + + # get arguments that satisfy an http rule for this method + sample_request = {"parent": "projects/sample1/instances/sample2"} + + # get truthy value for each flattened field + mock_args = dict( + parent="parent_value", + instance_partition=spanner_instance_admin.InstancePartition( + name="name_value" + ), + instance_partition_id="instance_partition_id_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.create_instance_partition(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{parent=projects/*/instances/*}/instancePartitions" + % client.transport._host, + args[1], + ) + + +def test_create_instance_partition_rest_flattened_error(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.create_instance_partition( + spanner_instance_admin.CreateInstancePartitionRequest(), + parent="parent_value", + instance_partition=spanner_instance_admin.InstancePartition( + name="name_value" + ), + instance_partition_id="instance_partition_id_value", + ) + + +def test_delete_instance_partition_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.delete_instance_partition + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.delete_instance_partition + ] = mock_rpc + + request = {} + client.delete_instance_partition(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.delete_instance_partition(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_delete_instance_partition_rest_required_fields( + request_type=spanner_instance_admin.DeleteInstancePartitionRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request_init["name"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).delete_instance_partition._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["name"] = "name_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).delete_instance_partition._get_unset_required_fields(jsonified_request) + # Check that path parameters and body parameters are not mixing in. + assert not set(unset_fields) - set(("etag",)) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "name" in jsonified_request + assert jsonified_request["name"] == "name_value" + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = None + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "delete", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = "" + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.delete_instance_partition(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_delete_instance_partition_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.delete_instance_partition._get_unset_required_fields({}) + assert set(unset_fields) == (set(("etag",)) & set(("name",))) + + +def test_delete_instance_partition_rest_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = None + + # get arguments that satisfy an http rule for this method + sample_request = { + "name": "projects/sample1/instances/sample2/instancePartitions/sample3" + } + + # get truthy value for each flattened field + mock_args = dict( + name="name_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = "" + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.delete_instance_partition(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{name=projects/*/instances/*/instancePartitions/*}" + % client.transport._host, + args[1], + ) + + +def test_delete_instance_partition_rest_flattened_error(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.delete_instance_partition( + spanner_instance_admin.DeleteInstancePartitionRequest(), + name="name_value", + ) + + +def test_update_instance_partition_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.update_instance_partition + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.update_instance_partition + ] = mock_rpc + + request = {} + client.update_instance_partition(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods build a cached wrapper on first rpc call + # subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.update_instance_partition(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_update_instance_partition_rest_required_fields( + request_type=spanner_instance_admin.UpdateInstancePartitionRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).update_instance_partition._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).update_instance_partition._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "patch", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.update_instance_partition(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_update_instance_partition_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.update_instance_partition._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "instancePartition", + "fieldMask", + ) + ) + ) + + +def test_update_instance_partition_rest_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + + # get arguments that satisfy an http rule for this method + sample_request = { + "instance_partition": { + "name": "projects/sample1/instances/sample2/instancePartitions/sample3" + } + } + + # get truthy value for each flattened field + mock_args = dict( + instance_partition=spanner_instance_admin.InstancePartition( + name="name_value" + ), + field_mask=field_mask_pb2.FieldMask(paths=["paths_value"]), + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.update_instance_partition(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{instance_partition.name=projects/*/instances/*/instancePartitions/*}" + % client.transport._host, + args[1], + ) + + +def test_update_instance_partition_rest_flattened_error(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.update_instance_partition( + spanner_instance_admin.UpdateInstancePartitionRequest(), + instance_partition=spanner_instance_admin.InstancePartition( + name="name_value" + ), + field_mask=field_mask_pb2.FieldMask(paths=["paths_value"]), + ) + + +def test_list_instance_partition_operations_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.list_instance_partition_operations + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.list_instance_partition_operations + ] = mock_rpc + + request = {} + client.list_instance_partition_operations(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.list_instance_partition_operations(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_list_instance_partition_operations_rest_required_fields( + request_type=spanner_instance_admin.ListInstancePartitionOperationsRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request_init["parent"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_instance_partition_operations._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["parent"] = "parent_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_instance_partition_operations._get_unset_required_fields(jsonified_request) + # Check that path parameters and body parameters are not mixing in. + assert not set(unset_fields) - set( + ( + "filter", + "instance_partition_deadline", + "page_size", + "page_token", + ) + ) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "parent" in jsonified_request + assert jsonified_request["parent"] == "parent_value" + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.ListInstancePartitionOperationsResponse() + # Mock the http request call within the method and fake a response. 
+ with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. + pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "get", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = ( + spanner_instance_admin.ListInstancePartitionOperationsResponse.pb( + return_value + ) + ) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.list_instance_partition_operations(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_list_instance_partition_operations_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = ( + transport.list_instance_partition_operations._get_unset_required_fields({}) + ) + assert set(unset_fields) == ( + set( + ( + "filter", + "instancePartitionDeadline", + "pageSize", + "pageToken", + ) + ) + & set(("parent",)) + ) + + +def test_list_instance_partition_operations_rest_flattened(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.ListInstancePartitionOperationsResponse() + + # get arguments that satisfy an http rule for this method + sample_request = {"parent": "projects/sample1/instances/sample2"} + + # get truthy value for each flattened field + mock_args = dict( + parent="parent_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = ( + spanner_instance_admin.ListInstancePartitionOperationsResponse.pb( + return_value + ) + ) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.list_instance_partition_operations(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. 
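+ # (path_template.validate checks the url the mocked session received
+ # against the method's uri pattern.)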
+ assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{parent=projects/*/instances/*}/instancePartitionOperations" + % client.transport._host, + args[1], + ) + + +def test_list_instance_partition_operations_rest_flattened_error( + transport: str = "rest", +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.list_instance_partition_operations( + spanner_instance_admin.ListInstancePartitionOperationsRequest(), + parent="parent_value", + ) + + +def test_list_instance_partition_operations_rest_pager(transport: str = "rest"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # TODO(kbandes): remove this mock unless there's a good reason for it. + # with mock.patch.object(path_template, 'transcode') as transcode: + # Set the response as a series of pages + response = ( + spanner_instance_admin.ListInstancePartitionOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + next_page_token="abc", + ), + spanner_instance_admin.ListInstancePartitionOperationsResponse( + operations=[], + next_page_token="def", + ), + spanner_instance_admin.ListInstancePartitionOperationsResponse( + operations=[ + operations_pb2.Operation(), + ], + next_page_token="ghi", + ), + spanner_instance_admin.ListInstancePartitionOperationsResponse( + operations=[ + operations_pb2.Operation(), + operations_pb2.Operation(), + ], + ), + ) + # Two responses for two calls + response = response + response + + # Wrap the values into proper Response objs + response = tuple( + spanner_instance_admin.ListInstancePartitionOperationsResponse.to_json(x) + for x in response + ) + return_values = tuple(Response() for i in response) + for return_val, response_val in zip(return_values, response): + return_val._content = response_val.encode("UTF-8") + return_val.status_code = 200 + req.side_effect = return_values + + sample_request = {"parent": "projects/sample1/instances/sample2"} + + pager = client.list_instance_partition_operations(request=sample_request) + + results = list(pager) + assert len(results) == 6 + assert all(isinstance(i, operations_pb2.Operation) for i in results) + + pages = list( + client.list_instance_partition_operations(request=sample_request).pages + ) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +def test_move_instance_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.move_instance in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) 
expect a string. + ) + client._transport._wrapped_methods[client._transport.move_instance] = mock_rpc + + request = {} + client.move_instance(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + # Operation methods build a cached wrapper on first rpc call + # subsequent calls should use the cached wrapper + wrapper_fn.reset_mock() + + client.move_instance(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_move_instance_rest_required_fields( + request_type=spanner_instance_admin.MoveInstanceRequest, +): + transport_class = transports.InstanceAdminRestTransport + + request_init = {} + request_init["name"] = "" + request_init["target_config"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).move_instance._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["name"] = "name_value" + jsonified_request["targetConfig"] = "target_config_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).move_instance._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "name" in jsonified_request + assert jsonified_request["name"] == "name_value" + assert "targetConfig" in jsonified_request + assert jsonified_request["targetConfig"] == "target_config_value" + + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
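+ # The stubbed result below mirrors the dict shape that
+ # path_template.transcode() would normally return: "uri", "method",
+ # and "query_params", plus "body" for methods that send an http body.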
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.move_instance(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_move_instance_rest_unset_required_fields(): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.move_instance._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "name", + "targetConfig", + ) + ) + ) + + +def test_credentials_transport_error(): + # It is an error to provide credentials and a transport instance. + transport = transports.InstanceAdminGrpcTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # It is an error to provide a credentials file and a transport instance. + transport = transports.InstanceAdminGrpcTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = InstanceAdminClient( + client_options={"credentials_file": "credentials.json"}, + transport=transport, + ) + + # It is an error to provide an api_key and a transport instance. + transport = transports.InstanceAdminGrpcTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + options = client_options.ClientOptions() + options.api_key = "api_key" + with pytest.raises(ValueError): + client = InstanceAdminClient( + client_options=options, + transport=transport, + ) + + # It is an error to provide an api_key and a credential. + options = client_options.ClientOptions() + options.api_key = "api_key" + with pytest.raises(ValueError): + client = InstanceAdminClient( + client_options=options, credentials=ga_credentials.AnonymousCredentials() + ) + + # It is an error to provide scopes and a transport instance. + transport = transports.InstanceAdminGrpcTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = InstanceAdminClient( + client_options={"scopes": ["1", "2"]}, + transport=transport, + ) + + +def test_transport_instance(): + # A client may be instantiated with a custom transport instance. + transport = transports.InstanceAdminGrpcTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + client = InstanceAdminClient(transport=transport) + assert client.transport is transport + + +def test_transport_get_channel(): + # A client may be instantiated with a custom transport instance. 
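+ # Both the sync and asyncio gRPC transports expose the underlying
+ # channel via the grpc_channel property.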
+ transport = transports.InstanceAdminGrpcTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + channel = transport.grpc_channel + assert channel + + transport = transports.InstanceAdminGrpcAsyncIOTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + channel = transport.grpc_channel + assert channel + + +@pytest.mark.parametrize( + "transport_class", + [ + transports.InstanceAdminGrpcTransport, + transports.InstanceAdminGrpcAsyncIOTransport, + transports.InstanceAdminRestTransport, + ], +) +def test_transport_adc(transport_class): + # Test default credentials are used if not provided. + with mock.patch.object(google.auth, "default") as adc: + adc.return_value = (ga_credentials.AnonymousCredentials(), None) + transport_class() + adc.assert_called_once() + + +def test_transport_kind_grpc(): + transport = InstanceAdminClient.get_transport_class("grpc")( + credentials=ga_credentials.AnonymousCredentials() + ) + assert transport.kind == "grpc" + + +def test_initialize_client_w_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="grpc" + ) + assert client is not None + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_instance_configs_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_configs), "__call__" + ) as call: + call.return_value = spanner_instance_admin.ListInstanceConfigsResponse() + client.list_instance_configs(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.ListInstanceConfigsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_get_instance_config_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.get_instance_config), "__call__" + ) as call: + call.return_value = spanner_instance_admin.InstanceConfig() + client.get_instance_config(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.GetInstanceConfigRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_create_instance_config_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.create_instance_config), "__call__" + ) as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.create_instance_config(request=None) + + # Establish that the underlying stub method was called. 
+ call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.CreateInstanceConfigRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_update_instance_config_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.update_instance_config), "__call__" + ) as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.update_instance_config(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.UpdateInstanceConfigRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_delete_instance_config_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.delete_instance_config), "__call__" + ) as call: + call.return_value = None + client.delete_instance_config(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.DeleteInstanceConfigRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_instance_config_operations_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_config_operations), "__call__" + ) as call: + call.return_value = ( + spanner_instance_admin.ListInstanceConfigOperationsResponse() + ) + client.list_instance_config_operations(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.ListInstanceConfigOperationsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_instances_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.list_instances), "__call__") as call: + call.return_value = spanner_instance_admin.ListInstancesResponse() + client.list_instances(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.ListInstancesRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. 
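+# When request is None, the client is expected to substitute an empty
+# default request message of the matching type.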
+def test_list_instance_partitions_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partitions), "__call__" + ) as call: + call.return_value = spanner_instance_admin.ListInstancePartitionsResponse() + client.list_instance_partitions(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.ListInstancePartitionsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_get_instance_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.get_instance), "__call__") as call: + call.return_value = spanner_instance_admin.Instance() + client.get_instance(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.GetInstanceRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_create_instance_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.create_instance), "__call__") as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.create_instance(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.CreateInstanceRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_update_instance_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.update_instance), "__call__") as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.update_instance(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.UpdateInstanceRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_delete_instance_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.delete_instance), "__call__") as call: + call.return_value = None + client.delete_instance(request=None) + + # Establish that the underlying stub method was called. 
+ call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.DeleteInstanceRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_set_iam_policy_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call: + call.return_value = policy_pb2.Policy() + client.set_iam_policy(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = iam_policy_pb2.SetIamPolicyRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_get_iam_policy_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call: + call.return_value = policy_pb2.Policy() + client.get_iam_policy(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = iam_policy_pb2.GetIamPolicyRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_test_iam_permissions_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.test_iam_permissions), "__call__" + ) as call: + call.return_value = iam_policy_pb2.TestIamPermissionsResponse() + client.test_iam_permissions(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = iam_policy_pb2.TestIamPermissionsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_get_instance_partition_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.get_instance_partition), "__call__" + ) as call: + call.return_value = spanner_instance_admin.InstancePartition() + client.get_instance_partition(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.GetInstancePartitionRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_create_instance_partition_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. 
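+ # Patching __call__ on the stub's type intercepts the RPC invocation,
+ # so no request ever leaves the process.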
+ with mock.patch.object( + type(client.transport.create_instance_partition), "__call__" + ) as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.create_instance_partition(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.CreateInstancePartitionRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_delete_instance_partition_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.delete_instance_partition), "__call__" + ) as call: + call.return_value = None + client.delete_instance_partition(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.DeleteInstancePartitionRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_update_instance_partition_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.update_instance_partition), "__call__" + ) as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.update_instance_partition(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.UpdateInstancePartitionRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_instance_partition_operations_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partition_operations), "__call__" + ) as call: + call.return_value = ( + spanner_instance_admin.ListInstancePartitionOperationsResponse() + ) + client.list_instance_partition_operations(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.ListInstancePartitionOperationsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_move_instance_empty_call_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.move_instance), "__call__") as call: + call.return_value = operations_pb2.Operation(name="operations/op") + client.move_instance(request=None) + + # Establish that the underlying stub method was called. 
+ call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.MoveInstanceRequest() + + assert args[0] == request_msg + + +def test_transport_kind_grpc_asyncio(): + transport = InstanceAdminAsyncClient.get_transport_class("grpc_asyncio")( + credentials=async_anonymous_credentials() + ) + assert transport.kind == "grpc_asyncio" + + +def test_initialize_client_w_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), transport="grpc_asyncio" + ) + assert client is not None + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_list_instance_configs_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_configs), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.ListInstanceConfigsResponse( + next_page_token="next_page_token_value", + ) + ) + await client.list_instance_configs(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.ListInstanceConfigsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_get_instance_config_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.get_instance_config), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.InstanceConfig( + name="name_value", + display_name="display_name_value", + config_type=spanner_instance_admin.InstanceConfig.Type.GOOGLE_MANAGED, + base_config="base_config_value", + etag="etag_value", + leader_options=["leader_options_value"], + reconciling=True, + state=spanner_instance_admin.InstanceConfig.State.CREATING, + free_instance_availability=spanner_instance_admin.InstanceConfig.FreeInstanceAvailability.AVAILABLE, + quorum_type=spanner_instance_admin.InstanceConfig.QuorumType.REGION, + storage_limit_per_processing_unit=3540, + ) + ) + await client.get_instance_config(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.GetInstanceConfigRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_create_instance_config_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.create_instance_config), "__call__" + ) as call: + # Designate an appropriate return value for the call. 
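+ # FakeUnaryUnaryCall wraps the response so the mocked stub can be
+ # awaited like a real asyncio gRPC call.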
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + await client.create_instance_config(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.CreateInstanceConfigRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_update_instance_config_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.update_instance_config), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + await client.update_instance_config(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.UpdateInstanceConfigRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_delete_instance_config_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.delete_instance_config), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + await client.delete_instance_config(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.DeleteInstanceConfigRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_list_instance_config_operations_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_config_operations), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.ListInstanceConfigOperationsResponse( + next_page_token="next_page_token_value", + ) + ) + await client.list_instance_config_operations(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.ListInstanceConfigOperationsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. 
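+# (pytest.mark.asyncio, typically provided by the pytest-asyncio plugin,
+# runs each coroutine test on an event loop.)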
+@pytest.mark.asyncio +async def test_list_instances_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.list_instances), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.ListInstancesResponse( + next_page_token="next_page_token_value", + unreachable=["unreachable_value"], + ) + ) + await client.list_instances(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.ListInstancesRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_list_instance_partitions_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partitions), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.ListInstancePartitionsResponse( + next_page_token="next_page_token_value", + unreachable=["unreachable_value"], + ) + ) + await client.list_instance_partitions(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.ListInstancePartitionsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_get_instance_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.get_instance), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.Instance( + name="name_value", + config="config_value", + display_name="display_name_value", + node_count=1070, + processing_units=1743, + state=spanner_instance_admin.Instance.State.CREATING, + instance_type=spanner_instance_admin.Instance.InstanceType.PROVISIONED, + endpoint_uris=["endpoint_uris_value"], + edition=spanner_instance_admin.Instance.Edition.STANDARD, + default_backup_schedule_type=spanner_instance_admin.Instance.DefaultBackupScheduleType.NONE, + ) + ) + await client.get_instance(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.GetInstanceRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. 
+@pytest.mark.asyncio +async def test_create_instance_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.create_instance), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + await client.create_instance(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.CreateInstanceRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_update_instance_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.update_instance), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + await client.update_instance(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.UpdateInstanceRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_delete_instance_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.delete_instance), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + await client.delete_instance(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.DeleteInstanceRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_set_iam_policy_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + policy_pb2.Policy( + version=774, + etag=b"etag_blob", + ) + ) + await client.set_iam_policy(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = iam_policy_pb2.SetIamPolicyRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. 
request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_get_iam_policy_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + policy_pb2.Policy( + version=774, + etag=b"etag_blob", + ) + ) + await client.get_iam_policy(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = iam_policy_pb2.GetIamPolicyRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_test_iam_permissions_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.test_iam_permissions), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + iam_policy_pb2.TestIamPermissionsResponse( + permissions=["permissions_value"], + ) + ) + await client.test_iam_permissions(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = iam_policy_pb2.TestIamPermissionsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_get_instance_partition_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.get_instance_partition), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.InstancePartition( + name="name_value", + config="config_value", + display_name="display_name_value", + state=spanner_instance_admin.InstancePartition.State.CREATING, + referencing_databases=["referencing_databases_value"], + referencing_backups=["referencing_backups_value"], + etag="etag_value", + ) + ) + await client.get_instance_partition(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.GetInstancePartitionRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_create_instance_partition_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.create_instance_partition), "__call__" + ) as call: + # Designate an appropriate return value for the call. 
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + await client.create_instance_partition(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.CreateInstancePartitionRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_delete_instance_partition_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.delete_instance_partition), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + await client.delete_instance_partition(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.DeleteInstancePartitionRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_update_instance_partition_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.update_instance_partition), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + await client.update_instance_partition(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.UpdateInstancePartitionRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_list_instance_partition_operations_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partition_operations), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner_instance_admin.ListInstancePartitionOperationsResponse( + next_page_token="next_page_token_value", + unreachable_instance_partitions=[ + "unreachable_instance_partitions_value" + ], + ) + ) + await client.list_instance_partition_operations(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.ListInstancePartitionOperationsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. 
+@pytest.mark.asyncio +async def test_move_instance_empty_call_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.move_instance), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + await client.move_instance(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.MoveInstanceRequest() + + assert args[0] == request_msg + + +def test_transport_kind_rest(): + transport = InstanceAdminClient.get_transport_class("rest")( + credentials=ga_credentials.AnonymousCredentials() + ) + assert transport.kind == "rest" + + +def test_list_instance_configs_rest_bad_request( + request_type=spanner_instance_admin.ListInstanceConfigsRequest, +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.list_instance_configs(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.ListInstanceConfigsRequest, + dict, + ], +) +def test_list_instance_configs_rest_call_success(request_type): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.ListInstanceConfigsResponse( + next_page_token="next_page_token_value", + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_instance_admin.ListInstanceConfigsResponse.pb( + return_value + ) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.list_instance_configs(request) + + # Establish that the response is the type that we expect. 
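+ # The pager delegates attribute access to the wrapped response, so
+ # next_page_token is readable directly on it.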
+ assert isinstance(response, pagers.ListInstanceConfigsPager) + assert response.next_page_token == "next_page_token_value" + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_list_instance_configs_rest_interceptors(null_interceptor): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.InstanceAdminRestInterceptor(), + ) + client = InstanceAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.InstanceAdminRestInterceptor, "post_list_instance_configs" + ) as post, mock.patch.object( + transports.InstanceAdminRestInterceptor, + "post_list_instance_configs_with_metadata", + ) as post_with_metadata, mock.patch.object( + transports.InstanceAdminRestInterceptor, "pre_list_instance_configs" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner_instance_admin.ListInstanceConfigsRequest.pb( + spanner_instance_admin.ListInstanceConfigsRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = spanner_instance_admin.ListInstanceConfigsResponse.to_json( + spanner_instance_admin.ListInstanceConfigsResponse() + ) + req.return_value.content = return_value + + request = spanner_instance_admin.ListInstanceConfigsRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = spanner_instance_admin.ListInstanceConfigsResponse() + post_with_metadata.return_value = ( + spanner_instance_admin.ListInstanceConfigsResponse(), + metadata, + ) + + client.list_instance_configs( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_get_instance_config_rest_bad_request( + request_type=spanner_instance_admin.GetInstanceConfigRequest, +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"name": "projects/sample1/instanceConfigs/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
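+ # A 400 status with an empty JSON body should surface as
+ # core_exceptions.BadRequest.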
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.get_instance_config(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.GetInstanceConfigRequest, + dict, + ], +) +def test_get_instance_config_rest_call_success(request_type): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"name": "projects/sample1/instanceConfigs/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.InstanceConfig( + name="name_value", + display_name="display_name_value", + config_type=spanner_instance_admin.InstanceConfig.Type.GOOGLE_MANAGED, + base_config="base_config_value", + etag="etag_value", + leader_options=["leader_options_value"], + reconciling=True, + state=spanner_instance_admin.InstanceConfig.State.CREATING, + free_instance_availability=spanner_instance_admin.InstanceConfig.FreeInstanceAvailability.AVAILABLE, + quorum_type=spanner_instance_admin.InstanceConfig.QuorumType.REGION, + storage_limit_per_processing_unit=3540, + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_instance_admin.InstanceConfig.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.get_instance_config(request) + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, spanner_instance_admin.InstanceConfig) + assert response.name == "name_value" + assert response.display_name == "display_name_value" + assert ( + response.config_type + == spanner_instance_admin.InstanceConfig.Type.GOOGLE_MANAGED + ) + assert response.base_config == "base_config_value" + assert response.etag == "etag_value" + assert response.leader_options == ["leader_options_value"] + assert response.reconciling is True + assert response.state == spanner_instance_admin.InstanceConfig.State.CREATING + assert ( + response.free_instance_availability + == spanner_instance_admin.InstanceConfig.FreeInstanceAvailability.AVAILABLE + ) + assert ( + response.quorum_type == spanner_instance_admin.InstanceConfig.QuorumType.REGION + ) + assert response.storage_limit_per_processing_unit == 3540 + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_get_instance_config_rest_interceptors(null_interceptor): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.InstanceAdminRestInterceptor(), + ) + client = InstanceAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.InstanceAdminRestInterceptor, "post_get_instance_config" + ) as post, mock.patch.object( + transports.InstanceAdminRestInterceptor, + "post_get_instance_config_with_metadata", + ) as post_with_metadata, mock.patch.object( + transports.InstanceAdminRestInterceptor, "pre_get_instance_config" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner_instance_admin.GetInstanceConfigRequest.pb( + spanner_instance_admin.GetInstanceConfigRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = spanner_instance_admin.InstanceConfig.to_json( + spanner_instance_admin.InstanceConfig() + ) + req.return_value.content = return_value + + request = spanner_instance_admin.GetInstanceConfigRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = spanner_instance_admin.InstanceConfig() + post_with_metadata.return_value = ( + spanner_instance_admin.InstanceConfig(), + metadata, + ) + + client.get_instance_config( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_create_instance_config_rest_bad_request( + request_type=spanner_instance_admin.CreateInstanceConfigRequest, +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.create_instance_config(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.CreateInstanceConfigRequest, + dict, + ], +) +def test_create_instance_config_rest_call_success(request_type): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.create_instance_config(request) + + # Establish that the response is the type that we expect. + json_return_value = json_format.MessageToJson(return_value) + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_create_instance_config_rest_interceptors(null_interceptor): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.InstanceAdminRestInterceptor(), + ) + client = InstanceAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + operation.Operation, "_set_result_from_operation" + ), mock.patch.object( + transports.InstanceAdminRestInterceptor, "post_create_instance_config" + ) as post, mock.patch.object( + transports.InstanceAdminRestInterceptor, + "post_create_instance_config_with_metadata", + ) as post_with_metadata, mock.patch.object( + transports.InstanceAdminRestInterceptor, "pre_create_instance_config" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner_instance_admin.CreateInstanceConfigRequest.pb( + spanner_instance_admin.CreateInstanceConfigRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = json_format.MessageToJson(operations_pb2.Operation()) + req.return_value.content = return_value + + request = spanner_instance_admin.CreateInstanceConfigRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = operations_pb2.Operation() + post_with_metadata.return_value = 
+
+ client.create_instance_config(
+ request,
+ metadata=[
+ ("key", "val"),
+ ("cephalopod", "squid"),
+ ],
+ )
+
+ pre.assert_called_once()
+ post.assert_called_once()
+ post_with_metadata.assert_called_once()
+
+
+def test_update_instance_config_rest_bad_request(
+ request_type=spanner_instance_admin.UpdateInstanceConfigRequest,
+):
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(), transport="rest"
+ )
+ # send a request that will satisfy transcoding
+ request_init = {
+ "instance_config": {"name": "projects/sample1/instanceConfigs/sample2"}
+ }
+ request = request_type(**request_init)
+
+ # Mock the http request call within the method and fake a BadRequest error.
+ with mock.patch.object(Session, "request") as req, pytest.raises(
+ core_exceptions.BadRequest
+ ):
+ # Wrap the value into a proper Response obj
+ response_value = mock.Mock()
+ json_return_value = ""
+ response_value.json = mock.Mock(return_value={})
+ response_value.status_code = 400
+ response_value.request = mock.Mock()
+ req.return_value = response_value
+ req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+ client.update_instance_config(request)
+
+
+@pytest.mark.parametrize(
+ "request_type",
+ [
+ spanner_instance_admin.UpdateInstanceConfigRequest,
+ dict,
+ ],
+)
+def test_update_instance_config_rest_call_success(request_type):
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(), transport="rest"
+ )
+
+ # send a request that will satisfy transcoding
+ request_init = {
+ "instance_config": {"name": "projects/sample1/instanceConfigs/sample2"}
+ }
+ request = request_type(**request_init)
+
+ # Mock the http request call within the method and fake a response.
+ with mock.patch.object(type(client.transport._session), "request") as req:
+ # Designate an appropriate value for the returned response.
+ return_value = operations_pb2.Operation(name="operations/spam")
+
+ # Wrap the value into a proper Response obj
+ response_value = mock.Mock()
+ response_value.status_code = 200
+ json_return_value = json_format.MessageToJson(return_value)
+ response_value.content = json_return_value.encode("UTF-8")
+ req.return_value = response_value
+ req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+ response = client.update_instance_config(request)
+
+ # Establish that the response is the type that we expect.
+ assert response.operation.name == "operations/spam"
+
+
+@pytest.mark.parametrize("null_interceptor", [True, False])
+def test_update_instance_config_rest_interceptors(null_interceptor):
+ transport = transports.InstanceAdminRestTransport(
+ credentials=ga_credentials.AnonymousCredentials(),
+ interceptor=None
+ if null_interceptor
+ else transports.InstanceAdminRestInterceptor(),
+ )
+ client = InstanceAdminClient(transport=transport)
+
+ with mock.patch.object(
+ type(client.transport._session), "request"
+ ) as req, mock.patch.object(
+ path_template, "transcode"
+ ) as transcode, mock.patch.object(
+ operation.Operation, "_set_result_from_operation"
+ ), mock.patch.object(
+ transports.InstanceAdminRestInterceptor, "post_update_instance_config"
+ ) as post, mock.patch.object(
+ transports.InstanceAdminRestInterceptor,
+ "post_update_instance_config_with_metadata",
+ ) as post_with_metadata, mock.patch.object(
+ transports.InstanceAdminRestInterceptor, "pre_update_instance_config"
+ ) as pre:
+ pre.assert_not_called()
+ post.assert_not_called()
+ post_with_metadata.assert_not_called()
+ pb_message = spanner_instance_admin.UpdateInstanceConfigRequest.pb(
+ spanner_instance_admin.UpdateInstanceConfigRequest()
+ )
+ transcode.return_value = {
+ "method": "post",
+ "uri": "my_uri",
+ "body": pb_message,
+ "query_params": pb_message,
+ }
+
+ req.return_value = mock.Mock()
+ req.return_value.status_code = 200
+ req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+ return_value = json_format.MessageToJson(operations_pb2.Operation())
+ req.return_value.content = return_value
+
+ request = spanner_instance_admin.UpdateInstanceConfigRequest()
+ metadata = [
+ ("key", "val"),
+ ("cephalopod", "squid"),
+ ]
+ pre.return_value = request, metadata
+ post.return_value = operations_pb2.Operation()
+ post_with_metadata.return_value = operations_pb2.Operation(), metadata
+
+ client.update_instance_config(
+ request,
+ metadata=[
+ ("key", "val"),
+ ("cephalopod", "squid"),
+ ],
+ )
+
+ pre.assert_called_once()
+ post.assert_called_once()
+ post_with_metadata.assert_called_once()
+
+
+def test_delete_instance_config_rest_bad_request(
+ request_type=spanner_instance_admin.DeleteInstanceConfigRequest,
+):
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(), transport="rest"
+ )
+ # send a request that will satisfy transcoding
+ request_init = {"name": "projects/sample1/instanceConfigs/sample2"}
+ request = request_type(**request_init)
+
+ # Mock the http request call within the method and fake a BadRequest error.
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.delete_instance_config(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.DeleteInstanceConfigRequest, + dict, + ], +) +def test_delete_instance_config_rest_call_success(request_type): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"name": "projects/sample1/instanceConfigs/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = None + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = "" + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.delete_instance_config(request) + + # Establish that the response is the type that we expect. + assert response is None + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_delete_instance_config_rest_interceptors(null_interceptor): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.InstanceAdminRestInterceptor(), + ) + client = InstanceAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.InstanceAdminRestInterceptor, "pre_delete_instance_config" + ) as pre: + pre.assert_not_called() + pb_message = spanner_instance_admin.DeleteInstanceConfigRequest.pb( + spanner_instance_admin.DeleteInstanceConfigRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + request = spanner_instance_admin.DeleteInstanceConfigRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + + client.delete_instance_config( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + + +def test_list_instance_config_operations_rest_bad_request( + request_type=spanner_instance_admin.ListInstanceConfigOperationsRequest, +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.list_instance_config_operations(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.ListInstanceConfigOperationsRequest, + dict, + ], +) +def test_list_instance_config_operations_rest_call_success(request_type): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.ListInstanceConfigOperationsResponse( + next_page_token="next_page_token_value", + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_instance_admin.ListInstanceConfigOperationsResponse.pb( + return_value + ) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.list_instance_config_operations(request) + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, pagers.ListInstanceConfigOperationsPager) + assert response.next_page_token == "next_page_token_value" + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_list_instance_config_operations_rest_interceptors(null_interceptor): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.InstanceAdminRestInterceptor(), + ) + client = InstanceAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.InstanceAdminRestInterceptor, "post_list_instance_config_operations" + ) as post, mock.patch.object( + transports.InstanceAdminRestInterceptor, + "post_list_instance_config_operations_with_metadata", + ) as post_with_metadata, mock.patch.object( + transports.InstanceAdminRestInterceptor, "pre_list_instance_config_operations" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner_instance_admin.ListInstanceConfigOperationsRequest.pb( + spanner_instance_admin.ListInstanceConfigOperationsRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = ( + spanner_instance_admin.ListInstanceConfigOperationsResponse.to_json( + spanner_instance_admin.ListInstanceConfigOperationsResponse() + ) + ) + req.return_value.content = return_value + + request = spanner_instance_admin.ListInstanceConfigOperationsRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = ( + spanner_instance_admin.ListInstanceConfigOperationsResponse() + ) + post_with_metadata.return_value = ( + spanner_instance_admin.ListInstanceConfigOperationsResponse(), + metadata, + ) + + client.list_instance_config_operations( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_list_instances_rest_bad_request( + request_type=spanner_instance_admin.ListInstancesRequest, +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.list_instances(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.ListInstancesRequest, + dict, + ], +) +def test_list_instances_rest_call_success(request_type): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.ListInstancesResponse( + next_page_token="next_page_token_value", + unreachable=["unreachable_value"], + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_instance_admin.ListInstancesResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.list_instances(request) + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, pagers.ListInstancesPager) + assert response.next_page_token == "next_page_token_value" + assert response.unreachable == ["unreachable_value"] + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_list_instances_rest_interceptors(null_interceptor): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.InstanceAdminRestInterceptor(), + ) + client = InstanceAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.InstanceAdminRestInterceptor, "post_list_instances" + ) as post, mock.patch.object( + transports.InstanceAdminRestInterceptor, "post_list_instances_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.InstanceAdminRestInterceptor, "pre_list_instances" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner_instance_admin.ListInstancesRequest.pb( + spanner_instance_admin.ListInstancesRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = spanner_instance_admin.ListInstancesResponse.to_json( + spanner_instance_admin.ListInstancesResponse() + ) + req.return_value.content = return_value + + request = spanner_instance_admin.ListInstancesRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = spanner_instance_admin.ListInstancesResponse() + post_with_metadata.return_value = ( + spanner_instance_admin.ListInstancesResponse(), + metadata, + ) + + client.list_instances( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_list_instance_partitions_rest_bad_request( + request_type=spanner_instance_admin.ListInstancePartitionsRequest, +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.list_instance_partitions(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.ListInstancePartitionsRequest, + dict, + ], +) +def test_list_instance_partitions_rest_call_success(request_type): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.ListInstancePartitionsResponse( + next_page_token="next_page_token_value", + unreachable=["unreachable_value"], + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_instance_admin.ListInstancePartitionsResponse.pb( + return_value + ) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.list_instance_partitions(request) + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, pagers.ListInstancePartitionsPager) + assert response.next_page_token == "next_page_token_value" + assert response.unreachable == ["unreachable_value"] + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_list_instance_partitions_rest_interceptors(null_interceptor): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.InstanceAdminRestInterceptor(), + ) + client = InstanceAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.InstanceAdminRestInterceptor, "post_list_instance_partitions" + ) as post, mock.patch.object( + transports.InstanceAdminRestInterceptor, + "post_list_instance_partitions_with_metadata", + ) as post_with_metadata, mock.patch.object( + transports.InstanceAdminRestInterceptor, "pre_list_instance_partitions" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner_instance_admin.ListInstancePartitionsRequest.pb( + spanner_instance_admin.ListInstancePartitionsRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = spanner_instance_admin.ListInstancePartitionsResponse.to_json( + spanner_instance_admin.ListInstancePartitionsResponse() + ) + req.return_value.content = return_value + + request = spanner_instance_admin.ListInstancePartitionsRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = spanner_instance_admin.ListInstancePartitionsResponse() + post_with_metadata.return_value = ( + spanner_instance_admin.ListInstancePartitionsResponse(), + metadata, + ) + + client.list_instance_partitions( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_get_instance_rest_bad_request( + request_type=spanner_instance_admin.GetInstanceRequest, +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"name": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.get_instance(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.GetInstanceRequest, + dict, + ], +) +def test_get_instance_rest_call_success(request_type): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"name": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.Instance( + name="name_value", + config="config_value", + display_name="display_name_value", + node_count=1070, + processing_units=1743, + state=spanner_instance_admin.Instance.State.CREATING, + instance_type=spanner_instance_admin.Instance.InstanceType.PROVISIONED, + endpoint_uris=["endpoint_uris_value"], + edition=spanner_instance_admin.Instance.Edition.STANDARD, + default_backup_schedule_type=spanner_instance_admin.Instance.DefaultBackupScheduleType.NONE, + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_instance_admin.Instance.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.get_instance(request) + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, spanner_instance_admin.Instance) + assert response.name == "name_value" + assert response.config == "config_value" + assert response.display_name == "display_name_value" + assert response.node_count == 1070 + assert response.processing_units == 1743 + assert response.state == spanner_instance_admin.Instance.State.CREATING + assert ( + response.instance_type + == spanner_instance_admin.Instance.InstanceType.PROVISIONED + ) + assert response.endpoint_uris == ["endpoint_uris_value"] + assert response.edition == spanner_instance_admin.Instance.Edition.STANDARD + assert ( + response.default_backup_schedule_type + == spanner_instance_admin.Instance.DefaultBackupScheduleType.NONE + ) + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_get_instance_rest_interceptors(null_interceptor): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.InstanceAdminRestInterceptor(), + ) + client = InstanceAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.InstanceAdminRestInterceptor, "post_get_instance" + ) as post, mock.patch.object( + transports.InstanceAdminRestInterceptor, "post_get_instance_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.InstanceAdminRestInterceptor, "pre_get_instance" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner_instance_admin.GetInstanceRequest.pb( + spanner_instance_admin.GetInstanceRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = spanner_instance_admin.Instance.to_json( + spanner_instance_admin.Instance() + ) + req.return_value.content = return_value + + request = spanner_instance_admin.GetInstanceRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = spanner_instance_admin.Instance() + post_with_metadata.return_value = spanner_instance_admin.Instance(), metadata + + client.get_instance( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_create_instance_rest_bad_request( + request_type=spanner_instance_admin.CreateInstanceRequest, +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises(
+ core_exceptions.BadRequest
+ ):
+ # Wrap the value into a proper Response obj
+ response_value = mock.Mock()
+ json_return_value = ""
+ response_value.json = mock.Mock(return_value={})
+ response_value.status_code = 400
+ response_value.request = mock.Mock()
+ req.return_value = response_value
+ req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+ client.create_instance(request)
+
+
+@pytest.mark.parametrize(
+ "request_type",
+ [
+ spanner_instance_admin.CreateInstanceRequest,
+ dict,
+ ],
+)
+def test_create_instance_rest_call_success(request_type):
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(), transport="rest"
+ )
+
+ # send a request that will satisfy transcoding
+ request_init = {"parent": "projects/sample1"}
+ request = request_type(**request_init)
+
+ # Mock the http request call within the method and fake a response.
+ with mock.patch.object(type(client.transport._session), "request") as req:
+ # Designate an appropriate value for the returned response.
+ return_value = operations_pb2.Operation(name="operations/spam")
+
+ # Wrap the value into a proper Response obj
+ response_value = mock.Mock()
+ response_value.status_code = 200
+ json_return_value = json_format.MessageToJson(return_value)
+ response_value.content = json_return_value.encode("UTF-8")
+ req.return_value = response_value
+ req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+ response = client.create_instance(request)
+
+ # Establish that the response is the type that we expect.
+ assert response.operation.name == "operations/spam"
+
+
+@pytest.mark.parametrize("null_interceptor", [True, False])
+def test_create_instance_rest_interceptors(null_interceptor):
+ transport = transports.InstanceAdminRestTransport(
+ credentials=ga_credentials.AnonymousCredentials(),
+ interceptor=None
+ if null_interceptor
+ else transports.InstanceAdminRestInterceptor(),
+ )
+ client = InstanceAdminClient(transport=transport)
+
+ with mock.patch.object(
+ type(client.transport._session), "request"
+ ) as req, mock.patch.object(
+ path_template, "transcode"
+ ) as transcode, mock.patch.object(
+ operation.Operation, "_set_result_from_operation"
+ ), mock.patch.object(
+ transports.InstanceAdminRestInterceptor, "post_create_instance"
+ ) as post, mock.patch.object(
+ transports.InstanceAdminRestInterceptor, "post_create_instance_with_metadata"
+ ) as post_with_metadata, mock.patch.object(
+ transports.InstanceAdminRestInterceptor, "pre_create_instance"
+ ) as pre:
+ pre.assert_not_called()
+ post.assert_not_called()
+ post_with_metadata.assert_not_called()
+ pb_message = spanner_instance_admin.CreateInstanceRequest.pb(
+ spanner_instance_admin.CreateInstanceRequest()
+ )
+ transcode.return_value = {
+ "method": "post",
+ "uri": "my_uri",
+ "body": pb_message,
+ "query_params": pb_message,
+ }
+
+ req.return_value = mock.Mock()
+ req.return_value.status_code = 200
+ req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+ return_value = json_format.MessageToJson(operations_pb2.Operation())
+ req.return_value.content = return_value
+
+ request = spanner_instance_admin.CreateInstanceRequest()
+ metadata = [
+ ("key", "val"),
+ ("cephalopod", "squid"),
+ ]
+ pre.return_value = request, metadata
+ post.return_value = operations_pb2.Operation()
+ post_with_metadata.return_value = operations_pb2.Operation(), metadata
+
+ client.create_instance(
+ request,
+ metadata=[
+ ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_update_instance_rest_bad_request( + request_type=spanner_instance_admin.UpdateInstanceRequest, +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"instance": {"name": "projects/sample1/instances/sample2"}} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.update_instance(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.UpdateInstanceRequest, + dict, + ], +) +def test_update_instance_rest_call_success(request_type): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"instance": {"name": "projects/sample1/instances/sample2"}} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = operations_pb2.Operation(name="operations/spam") + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.update_instance(request) + + # Establish that the response is the type that we expect. 
+ assert response.operation.name == "operations/spam"
+
+
+@pytest.mark.parametrize("null_interceptor", [True, False])
+def test_update_instance_rest_interceptors(null_interceptor):
+ transport = transports.InstanceAdminRestTransport(
+ credentials=ga_credentials.AnonymousCredentials(),
+ interceptor=None
+ if null_interceptor
+ else transports.InstanceAdminRestInterceptor(),
+ )
+ client = InstanceAdminClient(transport=transport)
+
+ with mock.patch.object(
+ type(client.transport._session), "request"
+ ) as req, mock.patch.object(
+ path_template, "transcode"
+ ) as transcode, mock.patch.object(
+ operation.Operation, "_set_result_from_operation"
+ ), mock.patch.object(
+ transports.InstanceAdminRestInterceptor, "post_update_instance"
+ ) as post, mock.patch.object(
+ transports.InstanceAdminRestInterceptor, "post_update_instance_with_metadata"
+ ) as post_with_metadata, mock.patch.object(
+ transports.InstanceAdminRestInterceptor, "pre_update_instance"
+ ) as pre:
+ pre.assert_not_called()
+ post.assert_not_called()
+ post_with_metadata.assert_not_called()
+ pb_message = spanner_instance_admin.UpdateInstanceRequest.pb(
+ spanner_instance_admin.UpdateInstanceRequest()
+ )
+ transcode.return_value = {
+ "method": "post",
+ "uri": "my_uri",
+ "body": pb_message,
+ "query_params": pb_message,
+ }
+
+ req.return_value = mock.Mock()
+ req.return_value.status_code = 200
+ req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+ return_value = json_format.MessageToJson(operations_pb2.Operation())
+ req.return_value.content = return_value
+
+ request = spanner_instance_admin.UpdateInstanceRequest()
+ metadata = [
+ ("key", "val"),
+ ("cephalopod", "squid"),
+ ]
+ pre.return_value = request, metadata
+ post.return_value = operations_pb2.Operation()
+ post_with_metadata.return_value = operations_pb2.Operation(), metadata
+
+ client.update_instance(
+ request,
+ metadata=[
+ ("key", "val"),
+ ("cephalopod", "squid"),
+ ],
+ )
+
+ pre.assert_called_once()
+ post.assert_called_once()
+ post_with_metadata.assert_called_once()
+
+
+def test_delete_instance_rest_bad_request(
+ request_type=spanner_instance_admin.DeleteInstanceRequest,
+):
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(), transport="rest"
+ )
+ # send a request that will satisfy transcoding
+ request_init = {"name": "projects/sample1/instances/sample2"}
+ request = request_type(**request_init)
+
+ # Mock the http request call within the method and fake a BadRequest error.
+ with mock.patch.object(Session, "request") as req, pytest.raises(
+ core_exceptions.BadRequest
+ ):
+ # Wrap the value into a proper Response obj
+ response_value = mock.Mock()
+ json_return_value = ""
+ response_value.json = mock.Mock(return_value={})
+ response_value.status_code = 400
+ response_value.request = mock.Mock()
+ req.return_value = response_value
+ req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+ client.delete_instance(request)
+
+
+@pytest.mark.parametrize(
+ "request_type",
+ [
+ spanner_instance_admin.DeleteInstanceRequest,
+ dict,
+ ],
+)
+def test_delete_instance_rest_call_success(request_type):
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(), transport="rest"
+ )
+
+ # send a request that will satisfy transcoding
+ request_init = {"name": "projects/sample1/instances/sample2"}
+ request = request_type(**request_init)
+
+ # Mock the http request call within the method and fake a response.
+ with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = None + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = "" + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.delete_instance(request) + + # Establish that the response is the type that we expect. + assert response is None + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_delete_instance_rest_interceptors(null_interceptor): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.InstanceAdminRestInterceptor(), + ) + client = InstanceAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.InstanceAdminRestInterceptor, "pre_delete_instance" + ) as pre: + pre.assert_not_called() + pb_message = spanner_instance_admin.DeleteInstanceRequest.pb( + spanner_instance_admin.DeleteInstanceRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + request = spanner_instance_admin.DeleteInstanceRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + + client.delete_instance( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + + +def test_set_iam_policy_rest_bad_request( + request_type=iam_policy_pb2.SetIamPolicyRequest, +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"resource": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.set_iam_policy(request) + + +@pytest.mark.parametrize( + "request_type", + [ + iam_policy_pb2.SetIamPolicyRequest, + dict, + ], +) +def test_set_iam_policy_rest_call_success(request_type): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"resource": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. 
+ return_value = policy_pb2.Policy( + version=774, + etag=b"etag_blob", + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.set_iam_policy(request) + + # Establish that the response is the type that we expect. + assert isinstance(response, policy_pb2.Policy) + assert response.version == 774 + assert response.etag == b"etag_blob" + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_set_iam_policy_rest_interceptors(null_interceptor): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.InstanceAdminRestInterceptor(), + ) + client = InstanceAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.InstanceAdminRestInterceptor, "post_set_iam_policy" + ) as post, mock.patch.object( + transports.InstanceAdminRestInterceptor, "post_set_iam_policy_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.InstanceAdminRestInterceptor, "pre_set_iam_policy" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = iam_policy_pb2.SetIamPolicyRequest() + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = json_format.MessageToJson(policy_pb2.Policy()) + req.return_value.content = return_value + + request = iam_policy_pb2.SetIamPolicyRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = policy_pb2.Policy() + post_with_metadata.return_value = policy_pb2.Policy(), metadata + + client.set_iam_policy( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_get_iam_policy_rest_bad_request( + request_type=iam_policy_pb2.GetIamPolicyRequest, +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"resource": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises(
+ core_exceptions.BadRequest
+ ):
+ # Wrap the value into a proper Response obj
+ response_value = mock.Mock()
+ json_return_value = ""
+ response_value.json = mock.Mock(return_value={})
+ response_value.status_code = 400
+ response_value.request = mock.Mock()
+ req.return_value = response_value
+ req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+ client.get_iam_policy(request)
+
+
+@pytest.mark.parametrize(
+ "request_type",
+ [
+ iam_policy_pb2.GetIamPolicyRequest,
+ dict,
+ ],
+)
+def test_get_iam_policy_rest_call_success(request_type):
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(), transport="rest"
+ )
+
+ # send a request that will satisfy transcoding
+ request_init = {"resource": "projects/sample1/instances/sample2"}
+ request = request_type(**request_init)
+
+ # Mock the http request call within the method and fake a response.
+ with mock.patch.object(type(client.transport._session), "request") as req:
+ # Designate an appropriate value for the returned response.
+ return_value = policy_pb2.Policy(
+ version=774,
+ etag=b"etag_blob",
+ )
+
+ # Wrap the value into a proper Response obj
+ response_value = mock.Mock()
+ response_value.status_code = 200
+ json_return_value = json_format.MessageToJson(return_value)
+ response_value.content = json_return_value.encode("UTF-8")
+ req.return_value = response_value
+ req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+ response = client.get_iam_policy(request)
+
+ # Establish that the response is the type that we expect.
+ assert isinstance(response, policy_pb2.Policy)
+ assert response.version == 774
+ assert response.etag == b"etag_blob"
+
+
+@pytest.mark.parametrize("null_interceptor", [True, False])
+def test_get_iam_policy_rest_interceptors(null_interceptor):
+ transport = transports.InstanceAdminRestTransport(
+ credentials=ga_credentials.AnonymousCredentials(),
+ interceptor=None
+ if null_interceptor
+ else transports.InstanceAdminRestInterceptor(),
+ )
+ client = InstanceAdminClient(transport=transport)
+
+ with mock.patch.object(
+ type(client.transport._session), "request"
+ ) as req, mock.patch.object(
+ path_template, "transcode"
+ ) as transcode, mock.patch.object(
+ transports.InstanceAdminRestInterceptor, "post_get_iam_policy"
+ ) as post, mock.patch.object(
+ transports.InstanceAdminRestInterceptor, "post_get_iam_policy_with_metadata"
+ ) as post_with_metadata, mock.patch.object(
+ transports.InstanceAdminRestInterceptor, "pre_get_iam_policy"
+ ) as pre:
+ pre.assert_not_called()
+ post.assert_not_called()
+ post_with_metadata.assert_not_called()
+ pb_message = iam_policy_pb2.GetIamPolicyRequest()
+ transcode.return_value = {
+ "method": "post",
+ "uri": "my_uri",
+ "body": pb_message,
+ "query_params": pb_message,
+ }
+
+ req.return_value = mock.Mock()
+ req.return_value.status_code = 200
+ req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+ return_value = json_format.MessageToJson(policy_pb2.Policy())
+ req.return_value.content = return_value
+
+ request = iam_policy_pb2.GetIamPolicyRequest()
+ metadata = [
+ ("key", "val"),
+ ("cephalopod", "squid"),
+ ]
+ pre.return_value = request, metadata
+ post.return_value = policy_pb2.Policy()
+ post_with_metadata.return_value = policy_pb2.Policy(), metadata
+
+ client.get_iam_policy(
+ request,
+ metadata=[
+ ("key", "val"),
+ ("cephalopod", "squid"),
+ ],
+ )
+
+ pre.assert_called_once()
+ post.assert_called_once()
+ post_with_metadata.assert_called_once()
+
+
+def test_test_iam_permissions_rest_bad_request(
+ request_type=iam_policy_pb2.TestIamPermissionsRequest,
+):
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(), transport="rest"
+ )
+ # send a request that will satisfy transcoding
+ request_init = {"resource": "projects/sample1/instances/sample2"}
+ request = request_type(**request_init)
+
+ # Mock the http request call within the method and fake a BadRequest error.
+ with mock.patch.object(Session, "request") as req, pytest.raises(
+ core_exceptions.BadRequest
+ ):
+ # Wrap the value into a proper Response obj
+ response_value = mock.Mock()
+ json_return_value = ""
+ response_value.json = mock.Mock(return_value={})
+ response_value.status_code = 400
+ response_value.request = mock.Mock()
+ req.return_value = response_value
+ req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+ client.test_iam_permissions(request)
+
+
+@pytest.mark.parametrize(
+ "request_type",
+ [
+ iam_policy_pb2.TestIamPermissionsRequest,
+ dict,
+ ],
+)
+def test_test_iam_permissions_rest_call_success(request_type):
+ client = InstanceAdminClient(
+ credentials=ga_credentials.AnonymousCredentials(), transport="rest"
+ )
+
+ # send a request that will satisfy transcoding
+ request_init = {"resource": "projects/sample1/instances/sample2"}
+ request = request_type(**request_init)
+
+ # Mock the http request call within the method and fake a response.
+ with mock.patch.object(type(client.transport._session), "request") as req:
+ # Designate an appropriate value for the returned response.
+ return_value = iam_policy_pb2.TestIamPermissionsResponse(
+ permissions=["permissions_value"],
+ )
+
+ # Wrap the value into a proper Response obj
+ response_value = mock.Mock()
+ response_value.status_code = 200
+ json_return_value = json_format.MessageToJson(return_value)
+ response_value.content = json_return_value.encode("UTF-8")
+ req.return_value = response_value
+ req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+ response = client.test_iam_permissions(request)
+
+ # Establish that the response is the type that we expect.
+ assert isinstance(response, iam_policy_pb2.TestIamPermissionsResponse) + assert response.permissions == ["permissions_value"] + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_test_iam_permissions_rest_interceptors(null_interceptor): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.InstanceAdminRestInterceptor(), + ) + client = InstanceAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.InstanceAdminRestInterceptor, "post_test_iam_permissions" + ) as post, mock.patch.object( + transports.InstanceAdminRestInterceptor, + "post_test_iam_permissions_with_metadata", + ) as post_with_metadata, mock.patch.object( + transports.InstanceAdminRestInterceptor, "pre_test_iam_permissions" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = iam_policy_pb2.TestIamPermissionsRequest() + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = json_format.MessageToJson( + iam_policy_pb2.TestIamPermissionsResponse() + ) + req.return_value.content = return_value + + request = iam_policy_pb2.TestIamPermissionsRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = iam_policy_pb2.TestIamPermissionsResponse() + post_with_metadata.return_value = ( + iam_policy_pb2.TestIamPermissionsResponse(), + metadata, + ) + + client.test_iam_permissions( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_get_instance_partition_rest_bad_request( + request_type=spanner_instance_admin.GetInstancePartitionRequest, +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = { + "name": "projects/sample1/instances/sample2/instancePartitions/sample3" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.get_instance_partition(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.GetInstancePartitionRequest, + dict, + ], +) +def test_get_instance_partition_rest_call_success(request_type): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = { + "name": "projects/sample1/instances/sample2/instancePartitions/sample3" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.InstancePartition( + name="name_value", + config="config_value", + display_name="display_name_value", + state=spanner_instance_admin.InstancePartition.State.CREATING, + referencing_databases=["referencing_databases_value"], + referencing_backups=["referencing_backups_value"], + etag="etag_value", + node_count=1070, + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner_instance_admin.InstancePartition.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.get_instance_partition(request) + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, spanner_instance_admin.InstancePartition) + assert response.name == "name_value" + assert response.config == "config_value" + assert response.display_name == "display_name_value" + assert response.state == spanner_instance_admin.InstancePartition.State.CREATING + assert response.referencing_databases == ["referencing_databases_value"] + assert response.referencing_backups == ["referencing_backups_value"] + assert response.etag == "etag_value" + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_get_instance_partition_rest_interceptors(null_interceptor): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.InstanceAdminRestInterceptor(), + ) + client = InstanceAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.InstanceAdminRestInterceptor, "post_get_instance_partition" + ) as post, mock.patch.object( + transports.InstanceAdminRestInterceptor, + "post_get_instance_partition_with_metadata", + ) as post_with_metadata, mock.patch.object( + transports.InstanceAdminRestInterceptor, "pre_get_instance_partition" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner_instance_admin.GetInstancePartitionRequest.pb( + spanner_instance_admin.GetInstancePartitionRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = spanner_instance_admin.InstancePartition.to_json( + spanner_instance_admin.InstancePartition() + ) + req.return_value.content = return_value + + request = spanner_instance_admin.GetInstancePartitionRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = spanner_instance_admin.InstancePartition() + post_with_metadata.return_value = ( + spanner_instance_admin.InstancePartition(), + metadata, + ) + + client.get_instance_partition( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_create_instance_partition_rest_bad_request( + request_type=spanner_instance_admin.CreateInstancePartitionRequest, +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+    with mock.patch.object(Session, "request") as req, pytest.raises(
+        core_exceptions.BadRequest
+    ):
+        # Wrap the value into a proper Response obj
+        response_value = mock.Mock()
+        json_return_value = ""
+        response_value.json = mock.Mock(return_value={})
+        response_value.status_code = 400
+        response_value.request = mock.Mock()
+        req.return_value = response_value
+        req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+        client.create_instance_partition(request)
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner_instance_admin.CreateInstancePartitionRequest,
+        dict,
+    ],
+)
+def test_create_instance_partition_rest_call_success(request_type):
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(), transport="rest"
+    )
+
+    # send a request that will satisfy transcoding
+    request_init = {"parent": "projects/sample1/instances/sample2"}
+    request = request_type(**request_init)
+
+    # Mock the http request call within the method and fake a response.
+    with mock.patch.object(type(client.transport._session), "request") as req:
+        # Designate an appropriate value for the returned response.
+        return_value = operations_pb2.Operation(name="operations/spam")
+
+        # Wrap the value into a proper Response obj
+        response_value = mock.Mock()
+        response_value.status_code = 200
+        json_return_value = json_format.MessageToJson(return_value)
+        response_value.content = json_return_value.encode("UTF-8")
+        req.return_value = response_value
+        req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+        response = client.create_instance_partition(request)
+
+    # Establish that the response is the type that we expect.
+    assert response.operation.name == "operations/spam"
+
+
+@pytest.mark.parametrize("null_interceptor", [True, False])
+def test_create_instance_partition_rest_interceptors(null_interceptor):
+    transport = transports.InstanceAdminRestTransport(
+        credentials=ga_credentials.AnonymousCredentials(),
+        interceptor=None
+        if null_interceptor
+        else transports.InstanceAdminRestInterceptor(),
+    )
+    client = InstanceAdminClient(transport=transport)
+
+    with mock.patch.object(
+        type(client.transport._session), "request"
+    ) as req, mock.patch.object(
+        path_template, "transcode"
+    ) as transcode, mock.patch.object(
+        operation.Operation, "_set_result_from_operation"
+    ), mock.patch.object(
+        transports.InstanceAdminRestInterceptor, "post_create_instance_partition"
+    ) as post, mock.patch.object(
+        transports.InstanceAdminRestInterceptor,
+        "post_create_instance_partition_with_metadata",
+    ) as post_with_metadata, mock.patch.object(
+        transports.InstanceAdminRestInterceptor, "pre_create_instance_partition"
+    ) as pre:
+        pre.assert_not_called()
+        post.assert_not_called()
+        post_with_metadata.assert_not_called()
+        pb_message = spanner_instance_admin.CreateInstancePartitionRequest.pb(
+            spanner_instance_admin.CreateInstancePartitionRequest()
+        )
+        transcode.return_value = {
+            "method": "post",
+            "uri": "my_uri",
+            "body": pb_message,
+            "query_params": pb_message,
+        }
+
+        req.return_value = mock.Mock()
+        req.return_value.status_code = 200
+        req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+        return_value = json_format.MessageToJson(operations_pb2.Operation())
+        req.return_value.content = return_value
+
+        request = spanner_instance_admin.CreateInstancePartitionRequest()
+        metadata = [
+            ("key", "val"),
+            ("cephalopod", "squid"),
+        ]
+        pre.return_value = request, metadata
+        post.return_value = 
operations_pb2.Operation() + post_with_metadata.return_value = operations_pb2.Operation(), metadata + + client.create_instance_partition( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_delete_instance_partition_rest_bad_request( + request_type=spanner_instance_admin.DeleteInstancePartitionRequest, +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = { + "name": "projects/sample1/instances/sample2/instancePartitions/sample3" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.delete_instance_partition(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.DeleteInstancePartitionRequest, + dict, + ], +) +def test_delete_instance_partition_rest_call_success(request_type): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = { + "name": "projects/sample1/instances/sample2/instancePartitions/sample3" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = None + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = "" + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.delete_instance_partition(request) + + # Establish that the response is the type that we expect. 
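+    # (delete_instance_partition returns google.protobuf.Empty, surfaced as None)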
+ assert response is None + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_delete_instance_partition_rest_interceptors(null_interceptor): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.InstanceAdminRestInterceptor(), + ) + client = InstanceAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.InstanceAdminRestInterceptor, "pre_delete_instance_partition" + ) as pre: + pre.assert_not_called() + pb_message = spanner_instance_admin.DeleteInstancePartitionRequest.pb( + spanner_instance_admin.DeleteInstancePartitionRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + request = spanner_instance_admin.DeleteInstancePartitionRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + + client.delete_instance_partition( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + + +def test_update_instance_partition_rest_bad_request( + request_type=spanner_instance_admin.UpdateInstancePartitionRequest, +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = { + "instance_partition": { + "name": "projects/sample1/instances/sample2/instancePartitions/sample3" + } + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.update_instance_partition(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.UpdateInstancePartitionRequest, + dict, + ], +) +def test_update_instance_partition_rest_call_success(request_type): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = { + "instance_partition": { + "name": "projects/sample1/instances/sample2/instancePartitions/sample3" + } + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. 
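+        # update_instance_partition is a long-running operation, so the REST
+        # layer responds with a raw longrunning Operation proto.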
+        return_value = operations_pb2.Operation(name="operations/spam")
+
+        # Wrap the value into a proper Response obj
+        response_value = mock.Mock()
+        response_value.status_code = 200
+        json_return_value = json_format.MessageToJson(return_value)
+        response_value.content = json_return_value.encode("UTF-8")
+        req.return_value = response_value
+        req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+        response = client.update_instance_partition(request)
+
+    # Establish that the response is the type that we expect.
+    assert response.operation.name == "operations/spam"
+
+
+@pytest.mark.parametrize("null_interceptor", [True, False])
+def test_update_instance_partition_rest_interceptors(null_interceptor):
+    transport = transports.InstanceAdminRestTransport(
+        credentials=ga_credentials.AnonymousCredentials(),
+        interceptor=None
+        if null_interceptor
+        else transports.InstanceAdminRestInterceptor(),
+    )
+    client = InstanceAdminClient(transport=transport)
+
+    with mock.patch.object(
+        type(client.transport._session), "request"
+    ) as req, mock.patch.object(
+        path_template, "transcode"
+    ) as transcode, mock.patch.object(
+        operation.Operation, "_set_result_from_operation"
+    ), mock.patch.object(
+        transports.InstanceAdminRestInterceptor, "post_update_instance_partition"
+    ) as post, mock.patch.object(
+        transports.InstanceAdminRestInterceptor,
+        "post_update_instance_partition_with_metadata",
+    ) as post_with_metadata, mock.patch.object(
+        transports.InstanceAdminRestInterceptor, "pre_update_instance_partition"
+    ) as pre:
+        pre.assert_not_called()
+        post.assert_not_called()
+        post_with_metadata.assert_not_called()
+        pb_message = spanner_instance_admin.UpdateInstancePartitionRequest.pb(
+            spanner_instance_admin.UpdateInstancePartitionRequest()
+        )
+        transcode.return_value = {
+            "method": "post",
+            "uri": "my_uri",
+            "body": pb_message,
+            "query_params": pb_message,
+        }
+
+        req.return_value = mock.Mock()
+        req.return_value.status_code = 200
+        req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+        return_value = json_format.MessageToJson(operations_pb2.Operation())
+        req.return_value.content = return_value
+
+        request = spanner_instance_admin.UpdateInstancePartitionRequest()
+        metadata = [
+            ("key", "val"),
+            ("cephalopod", "squid"),
+        ]
+        pre.return_value = request, metadata
+        post.return_value = operations_pb2.Operation()
+        post_with_metadata.return_value = operations_pb2.Operation(), metadata
+
+        client.update_instance_partition(
+            request,
+            metadata=[
+                ("key", "val"),
+                ("cephalopod", "squid"),
+            ],
+        )
+
+        pre.assert_called_once()
+        post.assert_called_once()
+        post_with_metadata.assert_called_once()
+
+
+def test_list_instance_partition_operations_rest_bad_request(
+    request_type=spanner_instance_admin.ListInstancePartitionOperationsRequest,
+):
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(), transport="rest"
+    )
+    # send a request that will satisfy transcoding
+    request_init = {"parent": "projects/sample1/instances/sample2"}
+    request = request_type(**request_init)
+
+    # Mock the http request call within the method and fake a BadRequest error.
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.list_instance_partition_operations(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner_instance_admin.ListInstancePartitionOperationsRequest, + dict, + ], +) +def test_list_instance_partition_operations_rest_call_success(request_type): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"parent": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner_instance_admin.ListInstancePartitionOperationsResponse( + next_page_token="next_page_token_value", + unreachable_instance_partitions=["unreachable_instance_partitions_value"], + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = ( + spanner_instance_admin.ListInstancePartitionOperationsResponse.pb( + return_value + ) + ) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.list_instance_partition_operations(request) + + # Establish that the response is the type that we expect. 
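+    # (list responses come back wrapped in a pager that drives next_page_token)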
+ assert isinstance(response, pagers.ListInstancePartitionOperationsPager) + assert response.next_page_token == "next_page_token_value" + assert response.unreachable_instance_partitions == [ + "unreachable_instance_partitions_value" + ] + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_list_instance_partition_operations_rest_interceptors(null_interceptor): + transport = transports.InstanceAdminRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None + if null_interceptor + else transports.InstanceAdminRestInterceptor(), + ) + client = InstanceAdminClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.InstanceAdminRestInterceptor, + "post_list_instance_partition_operations", + ) as post, mock.patch.object( + transports.InstanceAdminRestInterceptor, + "post_list_instance_partition_operations_with_metadata", + ) as post_with_metadata, mock.patch.object( + transports.InstanceAdminRestInterceptor, + "pre_list_instance_partition_operations", + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner_instance_admin.ListInstancePartitionOperationsRequest.pb( + spanner_instance_admin.ListInstancePartitionOperationsRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = ( + spanner_instance_admin.ListInstancePartitionOperationsResponse.to_json( + spanner_instance_admin.ListInstancePartitionOperationsResponse() + ) + ) + req.return_value.content = return_value + + request = spanner_instance_admin.ListInstancePartitionOperationsRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = ( + spanner_instance_admin.ListInstancePartitionOperationsResponse() + ) + post_with_metadata.return_value = ( + spanner_instance_admin.ListInstancePartitionOperationsResponse(), + metadata, + ) + + client.list_instance_partition_operations( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_move_instance_rest_bad_request( + request_type=spanner_instance_admin.MoveInstanceRequest, +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"name": "projects/sample1/instances/sample2"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+    with mock.patch.object(Session, "request") as req, pytest.raises(
+        core_exceptions.BadRequest
+    ):
+        # Wrap the value into a proper Response obj
+        response_value = mock.Mock()
+        json_return_value = ""
+        response_value.json = mock.Mock(return_value={})
+        response_value.status_code = 400
+        response_value.request = mock.Mock()
+        req.return_value = response_value
+        req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+        client.move_instance(request)
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner_instance_admin.MoveInstanceRequest,
+        dict,
+    ],
+)
+def test_move_instance_rest_call_success(request_type):
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(), transport="rest"
+    )
+
+    # send a request that will satisfy transcoding
+    request_init = {"name": "projects/sample1/instances/sample2"}
+    request = request_type(**request_init)
+
+    # Mock the http request call within the method and fake a response.
+    with mock.patch.object(type(client.transport._session), "request") as req:
+        # Designate an appropriate value for the returned response.
+        return_value = operations_pb2.Operation(name="operations/spam")
+
+        # Wrap the value into a proper Response obj
+        response_value = mock.Mock()
+        response_value.status_code = 200
+        json_return_value = json_format.MessageToJson(return_value)
+        response_value.content = json_return_value.encode("UTF-8")
+        req.return_value = response_value
+        req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+        response = client.move_instance(request)
+
+    # Establish that the response is the type that we expect.
+    assert response.operation.name == "operations/spam"
+
+
+@pytest.mark.parametrize("null_interceptor", [True, False])
+def test_move_instance_rest_interceptors(null_interceptor):
+    transport = transports.InstanceAdminRestTransport(
+        credentials=ga_credentials.AnonymousCredentials(),
+        interceptor=None
+        if null_interceptor
+        else transports.InstanceAdminRestInterceptor(),
+    )
+    client = InstanceAdminClient(transport=transport)
+
+    with mock.patch.object(
+        type(client.transport._session), "request"
+    ) as req, mock.patch.object(
+        path_template, "transcode"
+    ) as transcode, mock.patch.object(
+        operation.Operation, "_set_result_from_operation"
+    ), mock.patch.object(
+        transports.InstanceAdminRestInterceptor, "post_move_instance"
+    ) as post, mock.patch.object(
+        transports.InstanceAdminRestInterceptor, "post_move_instance_with_metadata"
+    ) as post_with_metadata, mock.patch.object(
+        transports.InstanceAdminRestInterceptor, "pre_move_instance"
+    ) as pre:
+        pre.assert_not_called()
+        post.assert_not_called()
+        post_with_metadata.assert_not_called()
+        pb_message = spanner_instance_admin.MoveInstanceRequest.pb(
+            spanner_instance_admin.MoveInstanceRequest()
+        )
+        transcode.return_value = {
+            "method": "post",
+            "uri": "my_uri",
+            "body": pb_message,
+            "query_params": pb_message,
+        }
+
+        req.return_value = mock.Mock()
+        req.return_value.status_code = 200
+        req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+        return_value = json_format.MessageToJson(operations_pb2.Operation())
+        req.return_value.content = return_value
+
+        request = spanner_instance_admin.MoveInstanceRequest()
+        metadata = [
+            ("key", "val"),
+            ("cephalopod", "squid"),
+        ]
+        pre.return_value = request, metadata
+        post.return_value = operations_pb2.Operation()
+        post_with_metadata.return_value = operations_pb2.Operation(), metadata
+
+        client.move_instance(
+            request,
+            metadata=[
+                
("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_cancel_operation_rest_bad_request( + request_type=operations_pb2.CancelOperationRequest, +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type() + request = json_format.ParseDict( + { + "name": "projects/sample1/instances/sample2/databases/sample3/operations/sample4" + }, + request, + ) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = Response() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = Request() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.cancel_operation(request) + + +@pytest.mark.parametrize( + "request_type", + [ + operations_pb2.CancelOperationRequest, + dict, + ], +) +def test_cancel_operation_rest(request_type): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + request_init = { + "name": "projects/sample1/instances/sample2/databases/sample3/operations/sample4" + } + request = request_type(**request_init) + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # Designate an appropriate value for the returned response. + return_value = None + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = "{}" + response_value.content = json_return_value.encode("UTF-8") + + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.cancel_operation(request) + + # Establish that the response is the type that we expect. + assert response is None + + +def test_delete_operation_rest_bad_request( + request_type=operations_pb2.DeleteOperationRequest, +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type() + request = json_format.ParseDict( + { + "name": "projects/sample1/instances/sample2/databases/sample3/operations/sample4" + }, + request, + ) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = Response() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = Request() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.delete_operation(request) + + +@pytest.mark.parametrize( + "request_type", + [ + operations_pb2.DeleteOperationRequest, + dict, + ], +) +def test_delete_operation_rest(request_type): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + request_init = { + "name": "projects/sample1/instances/sample2/databases/sample3/operations/sample4" + } + request = request_type(**request_init) + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # Designate an appropriate value for the returned response. + return_value = None + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = "{}" + response_value.content = json_return_value.encode("UTF-8") + + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.delete_operation(request) + + # Establish that the response is the type that we expect. + assert response is None + + +def test_get_operation_rest_bad_request( + request_type=operations_pb2.GetOperationRequest, +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type() + request = json_format.ParseDict( + { + "name": "projects/sample1/instances/sample2/databases/sample3/operations/sample4" + }, + request, + ) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = Response() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = Request() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.get_operation(request) + + +@pytest.mark.parametrize( + "request_type", + [ + operations_pb2.GetOperationRequest, + dict, + ], +) +def test_get_operation_rest(request_type): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + request_init = { + "name": "projects/sample1/instances/sample2/databases/sample3/operations/sample4" + } + request = request_type(**request_init) + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # Designate an appropriate value for the returned response. 
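+        # get_operation returns the Operation proto itself, not an LRO future.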
+ return_value = operations_pb2.Operation() + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.get_operation(request) + + # Establish that the response is the type that we expect. + assert isinstance(response, operations_pb2.Operation) + + +def test_list_operations_rest_bad_request( + request_type=operations_pb2.ListOperationsRequest, +): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type() + request = json_format.ParseDict( + {"name": "projects/sample1/instances/sample2/databases/sample3/operations"}, + request, + ) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = Response() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = Request() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.list_operations(request) + + +@pytest.mark.parametrize( + "request_type", + [ + operations_pb2.ListOperationsRequest, + dict, + ], +) +def test_list_operations_rest(request_type): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + request_init = { + "name": "projects/sample1/instances/sample2/databases/sample3/operations" + } + request = request_type(**request_init) + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # Designate an appropriate value for the returned response. + return_value = operations_pb2.ListOperationsResponse() + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.list_operations(request) + + # Establish that the response is the type that we expect. + assert isinstance(response, operations_pb2.ListOperationsResponse) + + +def test_initialize_client_w_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + assert client is not None + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_instance_configs_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_configs), "__call__" + ) as call: + client.list_instance_configs(request=None) + + # Establish that the underlying stub method was called. 
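+        # Even with request=None, a default request message should be built.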
+ call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.ListInstanceConfigsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_get_instance_config_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.get_instance_config), "__call__" + ) as call: + client.get_instance_config(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.GetInstanceConfigRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_create_instance_config_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.create_instance_config), "__call__" + ) as call: + client.create_instance_config(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.CreateInstanceConfigRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_update_instance_config_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.update_instance_config), "__call__" + ) as call: + client.update_instance_config(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.UpdateInstanceConfigRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_delete_instance_config_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.delete_instance_config), "__call__" + ) as call: + client.delete_instance_config(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.DeleteInstanceConfigRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_instance_config_operations_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. 
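+    # (the transport method itself is patched, so no HTTP request is made)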
+ with mock.patch.object( + type(client.transport.list_instance_config_operations), "__call__" + ) as call: + client.list_instance_config_operations(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.ListInstanceConfigOperationsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_instances_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.list_instances), "__call__") as call: + client.list_instances(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.ListInstancesRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_instance_partitions_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partitions), "__call__" + ) as call: + client.list_instance_partitions(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.ListInstancePartitionsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_get_instance_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.get_instance), "__call__") as call: + client.get_instance(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.GetInstanceRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_create_instance_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.create_instance), "__call__") as call: + client.create_instance(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.CreateInstanceRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_update_instance_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. 
+ with mock.patch.object(type(client.transport.update_instance), "__call__") as call: + client.update_instance(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.UpdateInstanceRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_delete_instance_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.delete_instance), "__call__") as call: + client.delete_instance(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.DeleteInstanceRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_set_iam_policy_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.set_iam_policy), "__call__") as call: + client.set_iam_policy(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = iam_policy_pb2.SetIamPolicyRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_get_iam_policy_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.get_iam_policy), "__call__") as call: + client.get_iam_policy(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = iam_policy_pb2.GetIamPolicyRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_test_iam_permissions_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.test_iam_permissions), "__call__" + ) as call: + client.test_iam_permissions(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = iam_policy_pb2.TestIamPermissionsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_get_instance_partition_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. 
+ with mock.patch.object( + type(client.transport.get_instance_partition), "__call__" + ) as call: + client.get_instance_partition(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.GetInstancePartitionRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_create_instance_partition_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.create_instance_partition), "__call__" + ) as call: + client.create_instance_partition(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.CreateInstancePartitionRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_delete_instance_partition_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.delete_instance_partition), "__call__" + ) as call: + client.delete_instance_partition(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.DeleteInstancePartitionRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_update_instance_partition_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.update_instance_partition), "__call__" + ) as call: + client.update_instance_partition(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.UpdateInstancePartitionRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_instance_partition_operations_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.list_instance_partition_operations), "__call__" + ) as call: + client.list_instance_partition_operations(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.ListInstancePartitionOperationsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. 
+def test_move_instance_empty_call_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.move_instance), "__call__") as call: + client.move_instance(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner_instance_admin.MoveInstanceRequest() + + assert args[0] == request_msg + + +def test_instance_admin_rest_lro_client(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + transport = client.transport + + # Ensure that we have an api-core operations client. + assert isinstance( + transport.operations_client, + operations_v1.AbstractOperationsClient, + ) + + # Ensure that subsequent calls to the property send the exact same object. + assert transport.operations_client is transport.operations_client + + +def test_transport_grpc_default(): + # A client should use the gRPC transport by default. + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + assert isinstance( + client.transport, + transports.InstanceAdminGrpcTransport, + ) + + +def test_instance_admin_base_transport_error(): + # Passing both a credentials object and credentials_file should raise an error + with pytest.raises(core_exceptions.DuplicateCredentialArgs): + transport = transports.InstanceAdminTransport( + credentials=ga_credentials.AnonymousCredentials(), + credentials_file="credentials.json", + ) + + +def test_instance_admin_base_transport(): + # Instantiate the base transport. + with mock.patch( + "google.cloud.spanner_admin_instance_v1.services.instance_admin.transports.InstanceAdminTransport.__init__" + ) as Transport: + Transport.return_value = None + transport = transports.InstanceAdminTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Every method on the transport should just blindly + # raise NotImplementedError. 
+ methods = ( + "list_instance_configs", + "get_instance_config", + "create_instance_config", + "update_instance_config", + "delete_instance_config", + "list_instance_config_operations", + "list_instances", + "list_instance_partitions", + "get_instance", + "create_instance", + "update_instance", + "delete_instance", + "set_iam_policy", + "get_iam_policy", + "test_iam_permissions", + "get_instance_partition", + "create_instance_partition", + "delete_instance_partition", + "update_instance_partition", + "list_instance_partition_operations", + "move_instance", + "get_operation", + "cancel_operation", + "delete_operation", + "list_operations", + ) + for method in methods: + with pytest.raises(NotImplementedError): + getattr(transport, method)(request=object()) + + with pytest.raises(NotImplementedError): + transport.close() + + # Additionally, the LRO client (a property) should + # also raise NotImplementedError + with pytest.raises(NotImplementedError): + transport.operations_client + + # Catch all for all remaining methods and properties + remainder = [ + "kind", + ] + for r in remainder: + with pytest.raises(NotImplementedError): + getattr(transport, r)() + + +def test_instance_admin_base_transport_with_credentials_file(): + # Instantiate the base transport with a credentials file + with mock.patch.object( + google.auth, "load_credentials_from_file", autospec=True + ) as load_creds, mock.patch( + "google.cloud.spanner_admin_instance_v1.services.instance_admin.transports.InstanceAdminTransport._prep_wrapped_messages" + ) as Transport: + Transport.return_value = None + load_creds.return_value = (ga_credentials.AnonymousCredentials(), None) + transport = transports.InstanceAdminTransport( + credentials_file="credentials.json", + quota_project_id="octopus", + ) + load_creds.assert_called_once_with( + "credentials.json", + scopes=None, + default_scopes=( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.admin", + ), + quota_project_id="octopus", + ) + + +def test_instance_admin_base_transport_with_adc(): + # Test the default credentials are used if credentials and credentials_file are None. + with mock.patch.object(google.auth, "default", autospec=True) as adc, mock.patch( + "google.cloud.spanner_admin_instance_v1.services.instance_admin.transports.InstanceAdminTransport._prep_wrapped_messages" + ) as Transport: + Transport.return_value = None + adc.return_value = (ga_credentials.AnonymousCredentials(), None) + transport = transports.InstanceAdminTransport() + adc.assert_called_once() + + +def test_instance_admin_auth_adc(): + # If no credentials are provided, we should use ADC credentials. + with mock.patch.object(google.auth, "default", autospec=True) as adc: + adc.return_value = (ga_credentials.AnonymousCredentials(), None) + InstanceAdminClient() + adc.assert_called_once_with( + scopes=None, + default_scopes=( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.admin", + ), + quota_project_id=None, + ) + + +@pytest.mark.parametrize( + "transport_class", + [ + transports.InstanceAdminGrpcTransport, + transports.InstanceAdminGrpcAsyncIOTransport, + ], +) +def test_instance_admin_transport_auth_adc(transport_class): + # If credentials and host are not provided, the transport class should use + # ADC credentials. 
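+    # (google.auth.default is patched, so no real credential lookup happens)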
+ with mock.patch.object(google.auth, "default", autospec=True) as adc: + adc.return_value = (ga_credentials.AnonymousCredentials(), None) + transport_class(quota_project_id="octopus", scopes=["1", "2"]) + adc.assert_called_once_with( + scopes=["1", "2"], + default_scopes=( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.admin", + ), + quota_project_id="octopus", + ) + + +@pytest.mark.parametrize( + "transport_class", + [ + transports.InstanceAdminGrpcTransport, + transports.InstanceAdminGrpcAsyncIOTransport, + transports.InstanceAdminRestTransport, + ], +) +def test_instance_admin_transport_auth_gdch_credentials(transport_class): + host = "https://language.com" + api_audience_tests = [None, "https://language2.com"] + api_audience_expect = [host, "https://language2.com"] + for t, e in zip(api_audience_tests, api_audience_expect): + with mock.patch.object(google.auth, "default", autospec=True) as adc: + gdch_mock = mock.MagicMock() + type(gdch_mock).with_gdch_audience = mock.PropertyMock( + return_value=gdch_mock + ) + adc.return_value = (gdch_mock, None) + transport_class(host=host, api_audience=t) + gdch_mock.with_gdch_audience.assert_called_once_with(e) + + +@pytest.mark.parametrize( + "transport_class,grpc_helpers", + [ + (transports.InstanceAdminGrpcTransport, grpc_helpers), + (transports.InstanceAdminGrpcAsyncIOTransport, grpc_helpers_async), + ], +) +def test_instance_admin_transport_create_channel(transport_class, grpc_helpers): + # If credentials and host are not provided, the transport class should use + # ADC credentials. + with mock.patch.object( + google.auth, "default", autospec=True + ) as adc, mock.patch.object( + grpc_helpers, "create_channel", autospec=True + ) as create_channel: + creds = ga_credentials.AnonymousCredentials() + adc.return_value = (creds, None) + transport_class(quota_project_id="octopus", scopes=["1", "2"]) + + create_channel.assert_called_with( + "spanner.googleapis.com:443", + credentials=creds, + credentials_file=None, + quota_project_id="octopus", + default_scopes=( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.admin", + ), + scopes=["1", "2"], + default_host="spanner.googleapis.com", + ssl_credentials=None, + options=[ + ("grpc.max_send_message_length", -1), + ("grpc.max_receive_message_length", -1), + ], + ) + + +@pytest.mark.parametrize( + "transport_class", + [ + transports.InstanceAdminGrpcTransport, + transports.InstanceAdminGrpcAsyncIOTransport, + ], +) +def test_instance_admin_grpc_transport_client_cert_source_for_mtls(transport_class): + cred = ga_credentials.AnonymousCredentials() + + # Check ssl_channel_credentials is used if provided. + with mock.patch.object(transport_class, "create_channel") as mock_create_channel: + mock_ssl_channel_creds = mock.Mock() + transport_class( + host="squid.clam.whelk", + credentials=cred, + ssl_channel_credentials=mock_ssl_channel_creds, + ) + mock_create_channel.assert_called_once_with( + "squid.clam.whelk:443", + credentials=cred, + credentials_file=None, + scopes=None, + ssl_credentials=mock_ssl_channel_creds, + quota_project_id=None, + options=[ + ("grpc.max_send_message_length", -1), + ("grpc.max_receive_message_length", -1), + ], + ) + + # Check if ssl_channel_credentials is not provided, then client_cert_source_for_mtls + # is used. 
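+    # (grpc.ssl_channel_credentials should then be built from the callback's cert and key)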
+ with mock.patch.object(transport_class, "create_channel", return_value=mock.Mock()): + with mock.patch("grpc.ssl_channel_credentials") as mock_ssl_cred: + transport_class( + credentials=cred, + client_cert_source_for_mtls=client_cert_source_callback, + ) + expected_cert, expected_key = client_cert_source_callback() + mock_ssl_cred.assert_called_once_with( + certificate_chain=expected_cert, private_key=expected_key + ) + + +def test_instance_admin_http_transport_client_cert_source_for_mtls(): + cred = ga_credentials.AnonymousCredentials() + with mock.patch( + "google.auth.transport.requests.AuthorizedSession.configure_mtls_channel" + ) as mock_configure_mtls_channel: + transports.InstanceAdminRestTransport( + credentials=cred, client_cert_source_for_mtls=client_cert_source_callback + ) + mock_configure_mtls_channel.assert_called_once_with(client_cert_source_callback) + + +@pytest.mark.parametrize( + "transport_name", + [ + "grpc", + "grpc_asyncio", + "rest", + ], +) +def test_instance_admin_host_no_port(transport_name): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + client_options=client_options.ClientOptions( + api_endpoint="spanner.googleapis.com" + ), + transport=transport_name, + ) + assert client.transport._host == ( + "spanner.googleapis.com:443" + if transport_name in ["grpc", "grpc_asyncio"] + else "https://spanner.googleapis.com" + ) + + +@pytest.mark.parametrize( + "transport_name", + [ + "grpc", + "grpc_asyncio", + "rest", + ], +) +def test_instance_admin_host_with_port(transport_name): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + client_options=client_options.ClientOptions( + api_endpoint="spanner.googleapis.com:8000" + ), + transport=transport_name, + ) + assert client.transport._host == ( + "spanner.googleapis.com:8000" + if transport_name in ["grpc", "grpc_asyncio"] + else "https://spanner.googleapis.com:8000" + ) + + +@pytest.mark.parametrize( + "transport_name", + [ + "rest", + ], +) +def test_instance_admin_client_transport_session_collision(transport_name): + creds1 = ga_credentials.AnonymousCredentials() + creds2 = ga_credentials.AnonymousCredentials() + client1 = InstanceAdminClient( + credentials=creds1, + transport=transport_name, + ) + client2 = InstanceAdminClient( + credentials=creds2, + transport=transport_name, + ) + session1 = client1.transport.list_instance_configs._session + session2 = client2.transport.list_instance_configs._session + assert session1 != session2 + session1 = client1.transport.get_instance_config._session + session2 = client2.transport.get_instance_config._session + assert session1 != session2 + session1 = client1.transport.create_instance_config._session + session2 = client2.transport.create_instance_config._session + assert session1 != session2 + session1 = client1.transport.update_instance_config._session + session2 = client2.transport.update_instance_config._session + assert session1 != session2 + session1 = client1.transport.delete_instance_config._session + session2 = client2.transport.delete_instance_config._session + assert session1 != session2 + session1 = client1.transport.list_instance_config_operations._session + session2 = client2.transport.list_instance_config_operations._session + assert session1 != session2 + session1 = client1.transport.list_instances._session + session2 = client2.transport.list_instances._session + assert session1 != session2 + session1 = client1.transport.list_instance_partitions._session + session2 = 
client2.transport.list_instance_partitions._session
+    assert session1 != session2
+    session1 = client1.transport.get_instance._session
+    session2 = client2.transport.get_instance._session
+    assert session1 != session2
+    session1 = client1.transport.create_instance._session
+    session2 = client2.transport.create_instance._session
+    assert session1 != session2
+    session1 = client1.transport.update_instance._session
+    session2 = client2.transport.update_instance._session
+    assert session1 != session2
+    session1 = client1.transport.delete_instance._session
+    session2 = client2.transport.delete_instance._session
+    assert session1 != session2
+    session1 = client1.transport.set_iam_policy._session
+    session2 = client2.transport.set_iam_policy._session
+    assert session1 != session2
+    session1 = client1.transport.get_iam_policy._session
+    session2 = client2.transport.get_iam_policy._session
+    assert session1 != session2
+    session1 = client1.transport.test_iam_permissions._session
+    session2 = client2.transport.test_iam_permissions._session
+    assert session1 != session2
+    session1 = client1.transport.get_instance_partition._session
+    session2 = client2.transport.get_instance_partition._session
+    assert session1 != session2
+    session1 = client1.transport.create_instance_partition._session
+    session2 = client2.transport.create_instance_partition._session
+    assert session1 != session2
+    session1 = client1.transport.delete_instance_partition._session
+    session2 = client2.transport.delete_instance_partition._session
+    assert session1 != session2
+    session1 = client1.transport.update_instance_partition._session
+    session2 = client2.transport.update_instance_partition._session
+    assert session1 != session2
+    session1 = client1.transport.list_instance_partition_operations._session
+    session2 = client2.transport.list_instance_partition_operations._session
+    assert session1 != session2
+    session1 = client1.transport.move_instance._session
+    session2 = client2.transport.move_instance._session
+    assert session1 != session2
+
+
+def test_instance_admin_grpc_transport_channel():
+    channel = grpc.secure_channel("http://localhost/", grpc.local_channel_credentials())
+
+    # Check that channel is used if provided.
+    transport = transports.InstanceAdminGrpcTransport(
+        host="squid.clam.whelk",
+        channel=channel,
+    )
+    assert transport.grpc_channel == channel
+    assert transport._host == "squid.clam.whelk:443"
+    assert transport._ssl_channel_credentials is None
+
+
+def test_instance_admin_grpc_asyncio_transport_channel():
+    channel = aio.secure_channel("http://localhost/", grpc.local_channel_credentials())
+
+    # Check that channel is used if provided.
+    transport = transports.InstanceAdminGrpcAsyncIOTransport(
+        host="squid.clam.whelk",
+        channel=channel,
+    )
+    assert transport.grpc_channel == channel
+    assert transport._host == "squid.clam.whelk:443"
+    assert transport._ssl_channel_credentials is None
+
+
+# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are
+# removed from grpc/grpc_asyncio transport constructor.
+@pytest.mark.parametrize(
+    "transport_class",
+    [
+        transports.InstanceAdminGrpcTransport,
+        transports.InstanceAdminGrpcAsyncIOTransport,
+    ],
+)
+def test_instance_admin_transport_channel_mtls_with_client_cert_source(transport_class):
+    with mock.patch(
+        "grpc.ssl_channel_credentials", autospec=True
+    ) as grpc_ssl_channel_cred:
+        with mock.patch.object(
+            transport_class, "create_channel"
+        ) as grpc_create_channel:
+            mock_ssl_cred = mock.Mock()
+            grpc_ssl_channel_cred.return_value = mock_ssl_cred
+
+            mock_grpc_channel = mock.Mock()
+            grpc_create_channel.return_value = mock_grpc_channel
+
+            cred = ga_credentials.AnonymousCredentials()
+            with pytest.warns(DeprecationWarning):
+                with mock.patch.object(google.auth, "default") as adc:
+                    adc.return_value = (cred, None)
+                    transport = transport_class(
+                        host="squid.clam.whelk",
+                        api_mtls_endpoint="mtls.squid.clam.whelk",
+                        client_cert_source=client_cert_source_callback,
+                    )
+                    adc.assert_called_once()
+
+            grpc_ssl_channel_cred.assert_called_once_with(
+                certificate_chain=b"cert bytes", private_key=b"key bytes"
+            )
+            grpc_create_channel.assert_called_once_with(
+                "mtls.squid.clam.whelk:443",
+                credentials=cred,
+                credentials_file=None,
+                scopes=None,
+                ssl_credentials=mock_ssl_cred,
+                quota_project_id=None,
+                options=[
+                    ("grpc.max_send_message_length", -1),
+                    ("grpc.max_receive_message_length", -1),
+                ],
+            )
+            assert transport.grpc_channel == mock_grpc_channel
+            assert transport._ssl_channel_credentials == mock_ssl_cred
+
+
+# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are
+# removed from grpc/grpc_asyncio transport constructor.
+@pytest.mark.parametrize(
+    "transport_class",
+    [
+        transports.InstanceAdminGrpcTransport,
+        transports.InstanceAdminGrpcAsyncIOTransport,
+    ],
+)
+def test_instance_admin_transport_channel_mtls_with_adc(transport_class):
+    mock_ssl_cred = mock.Mock()
+    with mock.patch.multiple(
+        "google.auth.transport.grpc.SslCredentials",
+        __init__=mock.Mock(return_value=None),
+        ssl_credentials=mock.PropertyMock(return_value=mock_ssl_cred),
+    ):
+        with mock.patch.object(
+            transport_class, "create_channel"
+        ) as grpc_create_channel:
+            mock_grpc_channel = mock.Mock()
+            grpc_create_channel.return_value = mock_grpc_channel
+            mock_cred = mock.Mock()
+
+            with pytest.warns(DeprecationWarning):
+                transport = transport_class(
+                    host="squid.clam.whelk",
+                    credentials=mock_cred,
+                    api_mtls_endpoint="mtls.squid.clam.whelk",
+                    client_cert_source=None,
+                )
+
+            grpc_create_channel.assert_called_once_with(
+                "mtls.squid.clam.whelk:443",
+                credentials=mock_cred,
+                credentials_file=None,
+                scopes=None,
+                ssl_credentials=mock_ssl_cred,
+                quota_project_id=None,
+                options=[
+                    ("grpc.max_send_message_length", -1),
+                    ("grpc.max_receive_message_length", -1),
+                ],
+            )
+            assert transport.grpc_channel == mock_grpc_channel
+
+
+def test_instance_admin_grpc_lro_client():
+    client = InstanceAdminClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+    transport = client.transport
+
+    # Ensure that we have an api-core operations client.
+    assert isinstance(
+        transport.operations_client,
+        operations_v1.OperationsClient,
+    )
+
+    # Ensure that subsequent calls to the property send the exact same object.
+    assert transport.operations_client is transport.operations_client
+
+
+def test_instance_admin_grpc_lro_async_client():
+    client = InstanceAdminAsyncClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc_asyncio",
+    )
+    transport = client.transport
+
+    # Ensure that we have an api-core operations client.
+    assert isinstance(
+        transport.operations_client,
+        operations_v1.OperationsAsyncClient,
+    )
+
+    # Ensure that subsequent calls to the property send the exact same object.
+    assert transport.operations_client is transport.operations_client
+
+
+def test_instance_path():
+    project = "squid"
+    instance = "clam"
+    expected = "projects/{project}/instances/{instance}".format(
+        project=project,
+        instance=instance,
+    )
+    actual = InstanceAdminClient.instance_path(project, instance)
+    assert expected == actual
+
+
+def test_parse_instance_path():
+    expected = {
+        "project": "whelk",
+        "instance": "octopus",
+    }
+    path = InstanceAdminClient.instance_path(**expected)
+
+    # Check that the path construction is reversible.
+    actual = InstanceAdminClient.parse_instance_path(path)
+    assert expected == actual
+
+
+def test_instance_config_path():
+    project = "oyster"
+    instance_config = "nudibranch"
+    expected = "projects/{project}/instanceConfigs/{instance_config}".format(
+        project=project,
+        instance_config=instance_config,
+    )
+    actual = InstanceAdminClient.instance_config_path(project, instance_config)
+    assert expected == actual
+
+
+def test_parse_instance_config_path():
+    expected = {
+        "project": "cuttlefish",
+        "instance_config": "mussel",
+    }
+    path = InstanceAdminClient.instance_config_path(**expected)
+
+    # Check that the path construction is reversible.
+    actual = InstanceAdminClient.parse_instance_config_path(path)
+    assert expected == actual
+
+
+def test_instance_partition_path():
+    project = "winkle"
+    instance = "nautilus"
+    instance_partition = "scallop"
+    expected = "projects/{project}/instances/{instance}/instancePartitions/{instance_partition}".format(
+        project=project,
+        instance=instance,
+        instance_partition=instance_partition,
+    )
+    actual = InstanceAdminClient.instance_partition_path(
+        project, instance, instance_partition
+    )
+    assert expected == actual
+
+
+def test_parse_instance_partition_path():
+    expected = {
+        "project": "abalone",
+        "instance": "squid",
+        "instance_partition": "clam",
+    }
+    path = InstanceAdminClient.instance_partition_path(**expected)
+
+    # Check that the path construction is reversible.
+    actual = InstanceAdminClient.parse_instance_partition_path(path)
+    assert expected == actual
+
+
+def test_common_billing_account_path():
+    billing_account = "whelk"
+    expected = "billingAccounts/{billing_account}".format(
+        billing_account=billing_account,
+    )
+    actual = InstanceAdminClient.common_billing_account_path(billing_account)
+    assert expected == actual
+
+
+def test_parse_common_billing_account_path():
+    expected = {
+        "billing_account": "octopus",
+    }
+    path = InstanceAdminClient.common_billing_account_path(**expected)
+
+    # Check that the path construction is reversible.
+ actual = InstanceAdminClient.parse_common_billing_account_path(path) + assert expected == actual + + +def test_common_folder_path(): + folder = "oyster" + expected = "folders/{folder}".format( + folder=folder, + ) + actual = InstanceAdminClient.common_folder_path(folder) + assert expected == actual + + +def test_parse_common_folder_path(): + expected = { + "folder": "nudibranch", + } + path = InstanceAdminClient.common_folder_path(**expected) + + # Check that the path construction is reversible. + actual = InstanceAdminClient.parse_common_folder_path(path) + assert expected == actual + + +def test_common_organization_path(): + organization = "cuttlefish" + expected = "organizations/{organization}".format( + organization=organization, + ) + actual = InstanceAdminClient.common_organization_path(organization) + assert expected == actual + + +def test_parse_common_organization_path(): + expected = { + "organization": "mussel", + } + path = InstanceAdminClient.common_organization_path(**expected) + + # Check that the path construction is reversible. + actual = InstanceAdminClient.parse_common_organization_path(path) + assert expected == actual + + +def test_common_project_path(): + project = "winkle" + expected = "projects/{project}".format( + project=project, + ) + actual = InstanceAdminClient.common_project_path(project) + assert expected == actual + + +def test_parse_common_project_path(): + expected = { + "project": "nautilus", + } + path = InstanceAdminClient.common_project_path(**expected) + + # Check that the path construction is reversible. + actual = InstanceAdminClient.parse_common_project_path(path) + assert expected == actual + + +def test_common_location_path(): + project = "scallop" + location = "abalone" + expected = "projects/{project}/locations/{location}".format( + project=project, + location=location, + ) + actual = InstanceAdminClient.common_location_path(project, location) + assert expected == actual + + +def test_parse_common_location_path(): + expected = { + "project": "squid", + "location": "clam", + } + path = InstanceAdminClient.common_location_path(**expected) + + # Check that the path construction is reversible. + actual = InstanceAdminClient.parse_common_location_path(path) + assert expected == actual + + +def test_client_with_default_client_info(): + client_info = gapic_v1.client_info.ClientInfo() + + with mock.patch.object( + transports.InstanceAdminTransport, "_prep_wrapped_messages" + ) as prep: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + client_info=client_info, + ) + prep.assert_called_once_with(client_info) + + with mock.patch.object( + transports.InstanceAdminTransport, "_prep_wrapped_messages" + ) as prep: + transport_class = InstanceAdminClient.get_transport_class() + transport = transport_class( + credentials=ga_credentials.AnonymousCredentials(), + client_info=client_info, + ) + prep.assert_called_once_with(client_info) + + +def test_delete_operation(transport: str = "grpc"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = operations_pb2.DeleteOperationRequest() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_operation), "__call__") as call: + # Designate an appropriate return value for the call. 
+ call.return_value = None + response = client.delete_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the response is the type that we expect. + assert response is None + + +@pytest.mark.asyncio +async def test_delete_operation_async(transport: str = "grpc_asyncio"): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = operations_pb2.DeleteOperationRequest() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_operation), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + response = await client.delete_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the response is the type that we expect. + assert response is None + + +def test_delete_operation_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = operations_pb2.DeleteOperationRequest() + request.name = "locations" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_operation), "__call__") as call: + call.return_value = None + + client.delete_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=locations", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_delete_operation_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = operations_pb2.DeleteOperationRequest() + request.name = "locations" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_operation), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + await client.delete_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=locations", + ) in kw["metadata"] + + +def test_delete_operation_from_dict(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_operation), "__call__") as call: + # Designate an appropriate return value for the call. 
+ call.return_value = None + + response = client.delete_operation( + request={ + "name": "locations", + } + ) + call.assert_called() + + +@pytest.mark.asyncio +async def test_delete_operation_from_dict_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_operation), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + response = await client.delete_operation( + request={ + "name": "locations", + } + ) + call.assert_called() + + +def test_cancel_operation(transport: str = "grpc"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = operations_pb2.CancelOperationRequest() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.cancel_operation), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = None + response = client.cancel_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the response is the type that we expect. + assert response is None + + +@pytest.mark.asyncio +async def test_cancel_operation_async(transport: str = "grpc_asyncio"): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = operations_pb2.CancelOperationRequest() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.cancel_operation), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + response = await client.cancel_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the response is the type that we expect. + assert response is None + + +def test_cancel_operation_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = operations_pb2.CancelOperationRequest() + request.name = "locations" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.cancel_operation), "__call__") as call: + call.return_value = None + + client.cancel_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. 
+ _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=locations", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_cancel_operation_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = operations_pb2.CancelOperationRequest() + request.name = "locations" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.cancel_operation), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + await client.cancel_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=locations", + ) in kw["metadata"] + + +def test_cancel_operation_from_dict(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.cancel_operation), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = None + + response = client.cancel_operation( + request={ + "name": "locations", + } + ) + call.assert_called() + + +@pytest.mark.asyncio +async def test_cancel_operation_from_dict_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.cancel_operation), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + response = await client.cancel_operation( + request={ + "name": "locations", + } + ) + call.assert_called() + + +def test_get_operation(transport: str = "grpc"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = operations_pb2.GetOperationRequest() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_operation), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation() + response = client.get_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, operations_pb2.Operation) + + +@pytest.mark.asyncio +async def test_get_operation_async(transport: str = "grpc_asyncio"): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = operations_pb2.GetOperationRequest() + + # Mock the actual call within the gRPC stub, and fake the request. 
+ with mock.patch.object(type(client.transport.get_operation), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation() + ) + response = await client.get_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, operations_pb2.Operation) + + +def test_get_operation_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = operations_pb2.GetOperationRequest() + request.name = "locations" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_operation), "__call__") as call: + call.return_value = operations_pb2.Operation() + + client.get_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=locations", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_get_operation_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = operations_pb2.GetOperationRequest() + request.name = "locations" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_operation), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation() + ) + await client.get_operation(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=locations", + ) in kw["metadata"] + + +def test_get_operation_from_dict(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_operation), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation() + + response = client.get_operation( + request={ + "name": "locations", + } + ) + call.assert_called() + + +@pytest.mark.asyncio +async def test_get_operation_from_dict_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_operation), "__call__") as call: + # Designate an appropriate return value for the call. 
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation() + ) + response = await client.get_operation( + request={ + "name": "locations", + } + ) + call.assert_called() + + +def test_list_operations(transport: str = "grpc"): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = operations_pb2.ListOperationsRequest() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_operations), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.ListOperationsResponse() + response = client.list_operations(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, operations_pb2.ListOperationsResponse) + + +@pytest.mark.asyncio +async def test_list_operations_async(transport: str = "grpc_asyncio"): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = operations_pb2.ListOperationsRequest() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_operations), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.ListOperationsResponse() + ) + response = await client.list_operations(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, operations_pb2.ListOperationsResponse) + + +def test_list_operations_field_headers(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = operations_pb2.ListOperationsRequest() + request.name = "locations" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_operations), "__call__") as call: + call.return_value = operations_pb2.ListOperationsResponse() + + client.list_operations(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=locations", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_list_operations_field_headers_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. 
+ request = operations_pb2.ListOperationsRequest() + request.name = "locations" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_operations), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.ListOperationsResponse() + ) + await client.list_operations(request) + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=locations", + ) in kw["metadata"] + + +def test_list_operations_from_dict(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_operations), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.ListOperationsResponse() + + response = client.list_operations( + request={ + "name": "locations", + } + ) + call.assert_called() + + +@pytest.mark.asyncio +async def test_list_operations_from_dict_async(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), + ) + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_operations), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.ListOperationsResponse() + ) + response = await client.list_operations( + request={ + "name": "locations", + } + ) + call.assert_called() + + +def test_transport_close_grpc(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="grpc" + ) + with mock.patch.object( + type(getattr(client.transport, "_grpc_channel")), "close" + ) as close: + with client: + close.assert_not_called() + close.assert_called_once() + + +@pytest.mark.asyncio +async def test_transport_close_grpc_asyncio(): + client = InstanceAdminAsyncClient( + credentials=async_anonymous_credentials(), transport="grpc_asyncio" + ) + with mock.patch.object( + type(getattr(client.transport, "_grpc_channel")), "close" + ) as close: + async with client: + close.assert_not_called() + close.assert_called_once() + + +def test_transport_close_rest(): + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + with mock.patch.object( + type(getattr(client.transport, "_session")), "close" + ) as close: + with client: + close.assert_not_called() + close.assert_called_once() + + +def test_client_ctx(): + transports = [ + "rest", + "grpc", + ] + for transport in transports: + client = InstanceAdminClient( + credentials=ga_credentials.AnonymousCredentials(), transport=transport + ) + # Test client calls underlying transport. 
+ with mock.patch.object(type(client.transport), "close") as close: + close.assert_not_called() + with client: + pass + close.assert_called() + + +@pytest.mark.parametrize( + "client_class,transport_class", + [ + (InstanceAdminClient, transports.InstanceAdminGrpcTransport), + (InstanceAdminAsyncClient, transports.InstanceAdminGrpcAsyncIOTransport), + ], +) +def test_api_key_credentials(client_class, transport_class): + with mock.patch.object( + google.auth._default, "get_api_key_credentials", create=True + ) as get_api_key_credentials: + mock_cred = mock.Mock() + get_api_key_credentials.return_value = mock_cred + options = client_options.ClientOptions() + options.api_key = "api_key" + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=mock_cred, + credentials_file=None, + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + ) diff --git a/tests/unit/gapic/spanner_v1/__init__.py b/tests/unit/gapic/spanner_v1/__init__.py new file mode 100644 index 0000000000..cbf94b283c --- /dev/null +++ b/tests/unit/gapic/spanner_v1/__init__.py @@ -0,0 +1,15 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# diff --git a/tests/unit/gapic/spanner_v1/test_spanner.py b/tests/unit/gapic/spanner_v1/test_spanner.py new file mode 100644 index 0000000000..83d9d72f7f --- /dev/null +++ b/tests/unit/gapic/spanner_v1/test_spanner.py @@ -0,0 +1,12737 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+#
+import os
+
+# try/except added for compatibility with python < 3.8
+try:
+    from unittest import mock
+    from unittest.mock import AsyncMock  # pragma: NO COVER
+except ImportError:  # pragma: NO COVER
+    import mock
+
+import grpc
+from grpc.experimental import aio
+from collections.abc import Iterable, AsyncIterable
+from google.protobuf import json_format
+import json
+import math
+import pytest
+from google.api_core import api_core_version
+from proto.marshal.rules.dates import DurationRule, TimestampRule
+from proto.marshal.rules import wrappers
+from requests import Response
+from requests import Request, PreparedRequest
+from requests.sessions import Session
+
+try:
+    from google.auth.aio import credentials as ga_credentials_async
+
+    HAS_GOOGLE_AUTH_AIO = True
+except ImportError:  # pragma: NO COVER
+    HAS_GOOGLE_AUTH_AIO = False
+
+from google.api_core import client_options
+from google.api_core import exceptions as core_exceptions
+from google.api_core import gapic_v1
+from google.api_core import grpc_helpers
+from google.api_core import grpc_helpers_async
+from google.api_core import path_template
+from google.api_core import retry as retries
+from google.auth import credentials as ga_credentials
+from google.auth.exceptions import MutualTLSChannelError
+from google.cloud.spanner_v1.services.spanner import SpannerAsyncClient
+from google.cloud.spanner_v1.services.spanner import SpannerClient
+from google.cloud.spanner_v1.services.spanner import pagers
+from google.cloud.spanner_v1.services.spanner import transports
+from google.cloud.spanner_v1.types import commit_response
+from google.cloud.spanner_v1.types import keys
+from google.cloud.spanner_v1.types import mutation
+from google.cloud.spanner_v1.types import result_set
+from google.cloud.spanner_v1.types import spanner
+from google.cloud.spanner_v1.types import transaction
+from google.cloud.spanner_v1.types import type as gs_type
+from google.oauth2 import service_account
+from google.protobuf import duration_pb2  # type: ignore
+from google.protobuf import struct_pb2  # type: ignore
+from google.protobuf import timestamp_pb2  # type: ignore
+from google.rpc import status_pb2  # type: ignore
+import google.auth
+
+
+CRED_INFO_JSON = {
+    "credential_source": "/path/to/file",
+    "credential_type": "service account credentials",
+    "principal": "service-account@example.com",
+}
+CRED_INFO_STRING = json.dumps(CRED_INFO_JSON)
+
+
+async def mock_async_gen(data, chunk_size=1):
+    # Step by chunk_size so successive chunks do not overlap.
+    for i in range(0, len(data), chunk_size):  # pragma: NO COVER
+        chunk = data[i : i + chunk_size]
+        yield chunk.encode("utf-8")
+
+
+def client_cert_source_callback():
+    return b"cert bytes", b"key bytes"
+
+
+# TODO: use async auth anon credentials by default once the minimum version of google-auth is upgraded.
+# See related issue: https://github.com/googleapis/gapic-generator-python/issues/2107.
+def async_anonymous_credentials():
+    if HAS_GOOGLE_AUTH_AIO:
+        return ga_credentials_async.AnonymousCredentials()
+    return ga_credentials.AnonymousCredentials()
+
+
+# If default endpoint is localhost, then default mtls endpoint will be the same.
+# This method modifies the default endpoint so the client can produce a different
+# mtls endpoint for endpoint testing purposes.
+def modify_default_endpoint(client):
+    return (
+        "foo.googleapis.com"
+        if ("localhost" in client.DEFAULT_ENDPOINT)
+        else client.DEFAULT_ENDPOINT
+    )
+
+
+# If default endpoint template is localhost, then default mtls endpoint will be the same.
+# This method modifies the default endpoint template so the client can produce a different +# mtls endpoint for endpoint testing purposes. +def modify_default_endpoint_template(client): + return ( + "test.{UNIVERSE_DOMAIN}" + if ("localhost" in client._DEFAULT_ENDPOINT_TEMPLATE) + else client._DEFAULT_ENDPOINT_TEMPLATE + ) + + +def test__get_default_mtls_endpoint(): + api_endpoint = "example.googleapis.com" + api_mtls_endpoint = "example.mtls.googleapis.com" + sandbox_endpoint = "example.sandbox.googleapis.com" + sandbox_mtls_endpoint = "example.mtls.sandbox.googleapis.com" + non_googleapi = "api.example.com" + + assert SpannerClient._get_default_mtls_endpoint(None) is None + assert SpannerClient._get_default_mtls_endpoint(api_endpoint) == api_mtls_endpoint + assert ( + SpannerClient._get_default_mtls_endpoint(api_mtls_endpoint) == api_mtls_endpoint + ) + assert ( + SpannerClient._get_default_mtls_endpoint(sandbox_endpoint) + == sandbox_mtls_endpoint + ) + assert ( + SpannerClient._get_default_mtls_endpoint(sandbox_mtls_endpoint) + == sandbox_mtls_endpoint + ) + assert SpannerClient._get_default_mtls_endpoint(non_googleapi) == non_googleapi + + +def test__read_environment_variables(): + assert SpannerClient._read_environment_variables() == (False, "auto", None) + + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "true"}): + assert SpannerClient._read_environment_variables() == (True, "auto", None) + + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "false"}): + assert SpannerClient._read_environment_variables() == (False, "auto", None) + + with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"} + ): + with pytest.raises(ValueError) as excinfo: + SpannerClient._read_environment_variables() + assert ( + str(excinfo.value) + == "Environment variable `GOOGLE_API_USE_CLIENT_CERTIFICATE` must be either `true` or `false`" + ) + + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}): + assert SpannerClient._read_environment_variables() == (False, "never", None) + + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}): + assert SpannerClient._read_environment_variables() == (False, "always", None) + + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "auto"}): + assert SpannerClient._read_environment_variables() == (False, "auto", None) + + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}): + with pytest.raises(MutualTLSChannelError) as excinfo: + SpannerClient._read_environment_variables() + assert ( + str(excinfo.value) + == "Environment variable `GOOGLE_API_USE_MTLS_ENDPOINT` must be `never`, `auto` or `always`" + ) + + with mock.patch.dict(os.environ, {"GOOGLE_CLOUD_UNIVERSE_DOMAIN": "foo.com"}): + assert SpannerClient._read_environment_variables() == (False, "auto", "foo.com") + + +def test__get_client_cert_source(): + mock_provided_cert_source = mock.Mock() + mock_default_cert_source = mock.Mock() + + assert SpannerClient._get_client_cert_source(None, False) is None + assert ( + SpannerClient._get_client_cert_source(mock_provided_cert_source, False) is None + ) + assert ( + SpannerClient._get_client_cert_source(mock_provided_cert_source, True) + == mock_provided_cert_source + ) + + with mock.patch( + "google.auth.transport.mtls.has_default_client_cert_source", return_value=True + ): + with mock.patch( + "google.auth.transport.mtls.default_client_cert_source", + return_value=mock_default_cert_source, + ): + assert ( + 
SpannerClient._get_client_cert_source(None, True) + is mock_default_cert_source + ) + assert ( + SpannerClient._get_client_cert_source(mock_provided_cert_source, "true") + is mock_provided_cert_source + ) + + +@mock.patch.object( + SpannerClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(SpannerClient), +) +@mock.patch.object( + SpannerAsyncClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(SpannerAsyncClient), +) +def test__get_api_endpoint(): + api_override = "foo.com" + mock_client_cert_source = mock.Mock() + default_universe = SpannerClient._DEFAULT_UNIVERSE + default_endpoint = SpannerClient._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=default_universe + ) + mock_universe = "bar.com" + mock_endpoint = SpannerClient._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=mock_universe + ) + + assert ( + SpannerClient._get_api_endpoint( + api_override, mock_client_cert_source, default_universe, "always" + ) + == api_override + ) + assert ( + SpannerClient._get_api_endpoint( + None, mock_client_cert_source, default_universe, "auto" + ) + == SpannerClient.DEFAULT_MTLS_ENDPOINT + ) + assert ( + SpannerClient._get_api_endpoint(None, None, default_universe, "auto") + == default_endpoint + ) + assert ( + SpannerClient._get_api_endpoint(None, None, default_universe, "always") + == SpannerClient.DEFAULT_MTLS_ENDPOINT + ) + assert ( + SpannerClient._get_api_endpoint( + None, mock_client_cert_source, default_universe, "always" + ) + == SpannerClient.DEFAULT_MTLS_ENDPOINT + ) + assert ( + SpannerClient._get_api_endpoint(None, None, mock_universe, "never") + == mock_endpoint + ) + assert ( + SpannerClient._get_api_endpoint(None, None, default_universe, "never") + == default_endpoint + ) + + with pytest.raises(MutualTLSChannelError) as excinfo: + SpannerClient._get_api_endpoint( + None, mock_client_cert_source, mock_universe, "auto" + ) + assert ( + str(excinfo.value) + == "mTLS is not supported in any universe other than googleapis.com." + ) + + +def test__get_universe_domain(): + client_universe_domain = "foo.com" + universe_domain_env = "bar.com" + + assert ( + SpannerClient._get_universe_domain(client_universe_domain, universe_domain_env) + == client_universe_domain + ) + assert ( + SpannerClient._get_universe_domain(None, universe_domain_env) + == universe_domain_env + ) + assert ( + SpannerClient._get_universe_domain(None, None) + == SpannerClient._DEFAULT_UNIVERSE + ) + + with pytest.raises(ValueError) as excinfo: + SpannerClient._get_universe_domain("", None) + assert str(excinfo.value) == "Universe Domain cannot be an empty string." 
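+
+
+# Editor's note: the sketch below is illustrative, not generator output. It
+# shows how the precedence verified above surfaces through the public client:
+# a universe_domain passed via ClientOptions overrides the default universe
+# ("googleapis.com") and is exposed as client.universe_domain. It assumes a
+# google-api-core version whose ClientOptions supports the universe_domain
+# field, hence the skip guard.
+def test__get_universe_domain_client_options_sketch():
+    if not hasattr(client_options.ClientOptions(), "universe_domain"):
+        pytest.skip("ClientOptions does not support universe_domain")
+    options = client_options.ClientOptions(universe_domain="bar.com")
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(), client_options=options
+    )
+    assert client.universe_domain == "bar.com"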
+ + +@pytest.mark.parametrize( + "error_code,cred_info_json,show_cred_info", + [ + (401, CRED_INFO_JSON, True), + (403, CRED_INFO_JSON, True), + (404, CRED_INFO_JSON, True), + (500, CRED_INFO_JSON, False), + (401, None, False), + (403, None, False), + (404, None, False), + (500, None, False), + ], +) +def test__add_cred_info_for_auth_errors(error_code, cred_info_json, show_cred_info): + cred = mock.Mock(["get_cred_info"]) + cred.get_cred_info = mock.Mock(return_value=cred_info_json) + client = SpannerClient(credentials=cred) + client._transport._credentials = cred + + error = core_exceptions.GoogleAPICallError("message", details=["foo"]) + error.code = error_code + + client._add_cred_info_for_auth_errors(error) + if show_cred_info: + assert error.details == ["foo", CRED_INFO_STRING] + else: + assert error.details == ["foo"] + + +@pytest.mark.parametrize("error_code", [401, 403, 404, 500]) +def test__add_cred_info_for_auth_errors_no_get_cred_info(error_code): + cred = mock.Mock([]) + assert not hasattr(cred, "get_cred_info") + client = SpannerClient(credentials=cred) + client._transport._credentials = cred + + error = core_exceptions.GoogleAPICallError("message", details=[]) + error.code = error_code + + client._add_cred_info_for_auth_errors(error) + assert error.details == [] + + +@pytest.mark.parametrize( + "client_class,transport_name", + [ + (SpannerClient, "grpc"), + (SpannerAsyncClient, "grpc_asyncio"), + (SpannerClient, "rest"), + ], +) +def test_spanner_client_from_service_account_info(client_class, transport_name): + creds = ga_credentials.AnonymousCredentials() + with mock.patch.object( + service_account.Credentials, "from_service_account_info" + ) as factory: + factory.return_value = creds + info = {"valid": True} + client = client_class.from_service_account_info(info, transport=transport_name) + assert client.transport._credentials == creds + assert isinstance(client, client_class) + + assert client.transport._host == ( + "spanner.googleapis.com:443" + if transport_name in ["grpc", "grpc_asyncio"] + else "https://spanner.googleapis.com" + ) + + +@pytest.mark.parametrize( + "transport_class,transport_name", + [ + (transports.SpannerGrpcTransport, "grpc"), + (transports.SpannerGrpcAsyncIOTransport, "grpc_asyncio"), + (transports.SpannerRestTransport, "rest"), + ], +) +def test_spanner_client_service_account_always_use_jwt(transport_class, transport_name): + with mock.patch.object( + service_account.Credentials, "with_always_use_jwt_access", create=True + ) as use_jwt: + creds = service_account.Credentials(None, None, None) + transport = transport_class(credentials=creds, always_use_jwt_access=True) + use_jwt.assert_called_once_with(True) + + with mock.patch.object( + service_account.Credentials, "with_always_use_jwt_access", create=True + ) as use_jwt: + creds = service_account.Credentials(None, None, None) + transport = transport_class(credentials=creds, always_use_jwt_access=False) + use_jwt.assert_not_called() + + +@pytest.mark.parametrize( + "client_class,transport_name", + [ + (SpannerClient, "grpc"), + (SpannerAsyncClient, "grpc_asyncio"), + (SpannerClient, "rest"), + ], +) +def test_spanner_client_from_service_account_file(client_class, transport_name): + creds = ga_credentials.AnonymousCredentials() + with mock.patch.object( + service_account.Credentials, "from_service_account_file" + ) as factory: + factory.return_value = creds + client = client_class.from_service_account_file( + "dummy/file/path.json", transport=transport_name + ) + assert client.transport._credentials 
== creds + assert isinstance(client, client_class) + + client = client_class.from_service_account_json( + "dummy/file/path.json", transport=transport_name + ) + assert client.transport._credentials == creds + assert isinstance(client, client_class) + + assert client.transport._host == ( + "spanner.googleapis.com:443" + if transport_name in ["grpc", "grpc_asyncio"] + else "https://spanner.googleapis.com" + ) + + +def test_spanner_client_get_transport_class(): + transport = SpannerClient.get_transport_class() + available_transports = [ + transports.SpannerGrpcTransport, + transports.SpannerRestTransport, + ] + assert transport in available_transports + + transport = SpannerClient.get_transport_class("grpc") + assert transport == transports.SpannerGrpcTransport + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + (SpannerClient, transports.SpannerGrpcTransport, "grpc"), + (SpannerAsyncClient, transports.SpannerGrpcAsyncIOTransport, "grpc_asyncio"), + (SpannerClient, transports.SpannerRestTransport, "rest"), + ], +) +@mock.patch.object( + SpannerClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(SpannerClient), +) +@mock.patch.object( + SpannerAsyncClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(SpannerAsyncClient), +) +def test_spanner_client_client_options(client_class, transport_class, transport_name): + # Check that if channel is provided we won't create a new one. + with mock.patch.object(SpannerClient, "get_transport_class") as gtc: + transport = transport_class(credentials=ga_credentials.AnonymousCredentials()) + client = client_class(transport=transport) + gtc.assert_not_called() + + # Check that if channel is provided via str we will create a new one. + with mock.patch.object(SpannerClient, "get_transport_class") as gtc: + client = client_class(transport=transport_name) + gtc.assert_called() + + # Check the case api_endpoint is provided. + options = client_options.ClientOptions(api_endpoint="squid.clam.whelk") + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(transport=transport_name, client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host="squid.clam.whelk", + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + metrics_interceptor=mock.ANY, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is + # "never". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}): + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + metrics_interceptor=mock.ANY, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is + # "always". 
+ with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}): + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_MTLS_ENDPOINT, + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + metrics_interceptor=mock.ANY, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT has + # unsupported value. + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}): + with pytest.raises(MutualTLSChannelError) as excinfo: + client = client_class(transport=transport_name) + assert ( + str(excinfo.value) + == "Environment variable `GOOGLE_API_USE_MTLS_ENDPOINT` must be `never`, `auto` or `always`" + ) + + # Check the case GOOGLE_API_USE_CLIENT_CERTIFICATE has unsupported value. + with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"} + ): + with pytest.raises(ValueError) as excinfo: + client = client_class(transport=transport_name) + assert ( + str(excinfo.value) + == "Environment variable `GOOGLE_API_USE_CLIENT_CERTIFICATE` must be either `true` or `false`" + ) + + # Check the case quota_project_id is provided + options = client_options.ClientOptions(quota_project_id="octopus") + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options, transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id="octopus", + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + metrics_interceptor=mock.ANY, + ) + # Check the case api_endpoint is provided + options = client_options.ClientOptions( + api_audience="https://language.googleapis.com" + ) + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options, transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience="https://language.googleapis.com", + metrics_interceptor=mock.ANY, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name,use_client_cert_env", + [ + (SpannerClient, transports.SpannerGrpcTransport, "grpc", "true"), + ( + SpannerAsyncClient, + transports.SpannerGrpcAsyncIOTransport, + "grpc_asyncio", + "true", + ), + (SpannerClient, transports.SpannerGrpcTransport, "grpc", "false"), + ( + SpannerAsyncClient, + transports.SpannerGrpcAsyncIOTransport, + "grpc_asyncio", + "false", + ), + (SpannerClient, transports.SpannerRestTransport, "rest", "true"), + (SpannerClient, transports.SpannerRestTransport, "rest", "false"), + ], +) +@mock.patch.object( + SpannerClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(SpannerClient), +) +@mock.patch.object( + 
SpannerAsyncClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(SpannerAsyncClient), +) +@mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "auto"}) +def test_spanner_client_mtls_env_auto( + client_class, transport_class, transport_name, use_client_cert_env +): + # This tests the endpoint autoswitch behavior. Endpoint is autoswitched to the default + # mtls endpoint, if GOOGLE_API_USE_CLIENT_CERTIFICATE is "true" and client cert exists. + + # Check the case client_cert_source is provided. Whether client cert is used depends on + # GOOGLE_API_USE_CLIENT_CERTIFICATE value. + with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + options = client_options.ClientOptions( + client_cert_source=client_cert_source_callback + ) + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options, transport=transport_name) + + if use_client_cert_env == "false": + expected_client_cert_source = None + expected_host = client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ) + else: + expected_client_cert_source = client_cert_source_callback + expected_host = client.DEFAULT_MTLS_ENDPOINT + + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=expected_host, + scopes=None, + client_cert_source_for_mtls=expected_client_cert_source, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + metrics_interceptor=mock.ANY, + ) + + # Check the case ADC client cert is provided. Whether client cert is used depends on + # GOOGLE_API_USE_CLIENT_CERTIFICATE value. + with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + with mock.patch.object(transport_class, "__init__") as patched: + with mock.patch( + "google.auth.transport.mtls.has_default_client_cert_source", + return_value=True, + ): + with mock.patch( + "google.auth.transport.mtls.default_client_cert_source", + return_value=client_cert_source_callback, + ): + if use_client_cert_env == "false": + expected_host = client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ) + expected_client_cert_source = None + else: + expected_host = client.DEFAULT_MTLS_ENDPOINT + expected_client_cert_source = client_cert_source_callback + + patched.return_value = None + client = client_class(transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=expected_host, + scopes=None, + client_cert_source_for_mtls=expected_client_cert_source, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + metrics_interceptor=mock.ANY, + ) + + # Check the case client_cert_source and ADC client cert are not provided. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + with mock.patch.object(transport_class, "__init__") as patched: + with mock.patch( + "google.auth.transport.mtls.has_default_client_cert_source", + return_value=False, + ): + patched.return_value = None + client = client_class(transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + metrics_interceptor=mock.ANY, + ) + + +@pytest.mark.parametrize("client_class", [SpannerClient, SpannerAsyncClient]) +@mock.patch.object( + SpannerClient, "DEFAULT_ENDPOINT", modify_default_endpoint(SpannerClient) +) +@mock.patch.object( + SpannerAsyncClient, "DEFAULT_ENDPOINT", modify_default_endpoint(SpannerAsyncClient) +) +def test_spanner_client_get_mtls_endpoint_and_cert_source(client_class): + mock_client_cert_source = mock.Mock() + + # Test the case GOOGLE_API_USE_CLIENT_CERTIFICATE is "true". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "true"}): + mock_api_endpoint = "foo" + options = client_options.ClientOptions( + client_cert_source=mock_client_cert_source, api_endpoint=mock_api_endpoint + ) + api_endpoint, cert_source = client_class.get_mtls_endpoint_and_cert_source( + options + ) + assert api_endpoint == mock_api_endpoint + assert cert_source == mock_client_cert_source + + # Test the case GOOGLE_API_USE_CLIENT_CERTIFICATE is "false". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "false"}): + mock_client_cert_source = mock.Mock() + mock_api_endpoint = "foo" + options = client_options.ClientOptions( + client_cert_source=mock_client_cert_source, api_endpoint=mock_api_endpoint + ) + api_endpoint, cert_source = client_class.get_mtls_endpoint_and_cert_source( + options + ) + assert api_endpoint == mock_api_endpoint + assert cert_source is None + + # Test the case GOOGLE_API_USE_MTLS_ENDPOINT is "never". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}): + api_endpoint, cert_source = client_class.get_mtls_endpoint_and_cert_source() + assert api_endpoint == client_class.DEFAULT_ENDPOINT + assert cert_source is None + + # Test the case GOOGLE_API_USE_MTLS_ENDPOINT is "always". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}): + api_endpoint, cert_source = client_class.get_mtls_endpoint_and_cert_source() + assert api_endpoint == client_class.DEFAULT_MTLS_ENDPOINT + assert cert_source is None + + # Test the case GOOGLE_API_USE_MTLS_ENDPOINT is "auto" and default cert doesn't exist. + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "true"}): + with mock.patch( + "google.auth.transport.mtls.has_default_client_cert_source", + return_value=False, + ): + api_endpoint, cert_source = client_class.get_mtls_endpoint_and_cert_source() + assert api_endpoint == client_class.DEFAULT_ENDPOINT + assert cert_source is None + + # Test the case GOOGLE_API_USE_MTLS_ENDPOINT is "auto" and default cert exists. 
+ with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "true"}): + with mock.patch( + "google.auth.transport.mtls.has_default_client_cert_source", + return_value=True, + ): + with mock.patch( + "google.auth.transport.mtls.default_client_cert_source", + return_value=mock_client_cert_source, + ): + ( + api_endpoint, + cert_source, + ) = client_class.get_mtls_endpoint_and_cert_source() + assert api_endpoint == client_class.DEFAULT_MTLS_ENDPOINT + assert cert_source == mock_client_cert_source + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT has + # unsupported value. + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}): + with pytest.raises(MutualTLSChannelError) as excinfo: + client_class.get_mtls_endpoint_and_cert_source() + + assert ( + str(excinfo.value) + == "Environment variable `GOOGLE_API_USE_MTLS_ENDPOINT` must be `never`, `auto` or `always`" + ) + + # Check the case GOOGLE_API_USE_CLIENT_CERTIFICATE has unsupported value. + with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"} + ): + with pytest.raises(ValueError) as excinfo: + client_class.get_mtls_endpoint_and_cert_source() + + assert ( + str(excinfo.value) + == "Environment variable `GOOGLE_API_USE_CLIENT_CERTIFICATE` must be either `true` or `false`" + ) + + +@pytest.mark.parametrize("client_class", [SpannerClient, SpannerAsyncClient]) +@mock.patch.object( + SpannerClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(SpannerClient), +) +@mock.patch.object( + SpannerAsyncClient, + "_DEFAULT_ENDPOINT_TEMPLATE", + modify_default_endpoint_template(SpannerAsyncClient), +) +def test_spanner_client_client_api_endpoint(client_class): + mock_client_cert_source = client_cert_source_callback + api_override = "foo.com" + default_universe = SpannerClient._DEFAULT_UNIVERSE + default_endpoint = SpannerClient._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=default_universe + ) + mock_universe = "bar.com" + mock_endpoint = SpannerClient._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=mock_universe + ) + + # If ClientOptions.api_endpoint is set and GOOGLE_API_USE_CLIENT_CERTIFICATE="true", + # use ClientOptions.api_endpoint as the api endpoint regardless. + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "true"}): + with mock.patch( + "google.auth.transport.requests.AuthorizedSession.configure_mtls_channel" + ): + options = client_options.ClientOptions( + client_cert_source=mock_client_cert_source, api_endpoint=api_override + ) + client = client_class( + client_options=options, + credentials=ga_credentials.AnonymousCredentials(), + ) + assert client.api_endpoint == api_override + + # If ClientOptions.api_endpoint is not set and GOOGLE_API_USE_MTLS_ENDPOINT="never", + # use the _DEFAULT_ENDPOINT_TEMPLATE populated with GDU as the api endpoint. + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}): + client = client_class(credentials=ga_credentials.AnonymousCredentials()) + assert client.api_endpoint == default_endpoint + + # If ClientOptions.api_endpoint is not set and GOOGLE_API_USE_MTLS_ENDPOINT="always", + # use the DEFAULT_MTLS_ENDPOINT as the api endpoint. 
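+    # "always" forces the mTLS endpoint even when no client certificate is
+    # configured. A caller would opt in the same way, e.g. (sketch, assuming
+    # ambient credentials are available):
+    #
+    #     os.environ["GOOGLE_API_USE_MTLS_ENDPOINT"] = "always"
+    #     client = SpannerClient()  # api_endpoint == DEFAULT_MTLS_ENDPOINT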
+ with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}): + client = client_class(credentials=ga_credentials.AnonymousCredentials()) + assert client.api_endpoint == client_class.DEFAULT_MTLS_ENDPOINT + + # If ClientOptions.api_endpoint is not set, GOOGLE_API_USE_MTLS_ENDPOINT="auto" (default), + # GOOGLE_API_USE_CLIENT_CERTIFICATE="false" (default), default cert source doesn't exist, + # and ClientOptions.universe_domain="bar.com", + # use the _DEFAULT_ENDPOINT_TEMPLATE populated with universe domain as the api endpoint. + options = client_options.ClientOptions() + universe_exists = hasattr(options, "universe_domain") + if universe_exists: + options = client_options.ClientOptions(universe_domain=mock_universe) + client = client_class( + client_options=options, credentials=ga_credentials.AnonymousCredentials() + ) + else: + client = client_class( + client_options=options, credentials=ga_credentials.AnonymousCredentials() + ) + assert client.api_endpoint == ( + mock_endpoint if universe_exists else default_endpoint + ) + assert client.universe_domain == ( + mock_universe if universe_exists else default_universe + ) + + # If ClientOptions does not have a universe domain attribute and GOOGLE_API_USE_MTLS_ENDPOINT="never", + # use the _DEFAULT_ENDPOINT_TEMPLATE populated with GDU as the api endpoint. + options = client_options.ClientOptions() + if hasattr(options, "universe_domain"): + delattr(options, "universe_domain") + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}): + client = client_class( + client_options=options, credentials=ga_credentials.AnonymousCredentials() + ) + assert client.api_endpoint == default_endpoint + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + (SpannerClient, transports.SpannerGrpcTransport, "grpc"), + (SpannerAsyncClient, transports.SpannerGrpcAsyncIOTransport, "grpc_asyncio"), + (SpannerClient, transports.SpannerRestTransport, "rest"), + ], +) +def test_spanner_client_client_options_scopes( + client_class, transport_class, transport_name +): + # Check the case scopes are provided. + options = client_options.ClientOptions( + scopes=["1", "2"], + ) + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options, transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=["1", "2"], + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + metrics_interceptor=mock.ANY, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name,grpc_helpers", + [ + (SpannerClient, transports.SpannerGrpcTransport, "grpc", grpc_helpers), + ( + SpannerAsyncClient, + transports.SpannerGrpcAsyncIOTransport, + "grpc_asyncio", + grpc_helpers_async, + ), + (SpannerClient, transports.SpannerRestTransport, "rest", None), + ], +) +def test_spanner_client_client_options_credentials_file( + client_class, transport_class, transport_name, grpc_helpers +): + # Check the case credentials file is provided. 
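+    # "credentials.json" is just a placeholder path: the transport is mocked
+    # out below, so the file never has to exist. The test only checks that
+    # the path is forwarded to the transport verbatim. A real caller would
+    # point at an actual service-account key, e.g. (sketch):
+    #
+    #     options = client_options.ClientOptions(
+    #         credentials_file="/path/to/key.json"  # hypothetical path
+    #     )
+    #     client = SpannerClient(client_options=options)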
+ options = client_options.ClientOptions(credentials_file="credentials.json") + + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options, transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file="credentials.json", + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + metrics_interceptor=mock.ANY, + ) + + +def test_spanner_client_client_options_from_dict(): + with mock.patch( + "google.cloud.spanner_v1.services.spanner.transports.SpannerGrpcTransport.__init__" + ) as grpc_transport: + grpc_transport.return_value = None + client = SpannerClient(client_options={"api_endpoint": "squid.clam.whelk"}) + grpc_transport.assert_called_once_with( + credentials=None, + credentials_file=None, + host="squid.clam.whelk", + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + metrics_interceptor=mock.ANY, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name,grpc_helpers", + [ + (SpannerClient, transports.SpannerGrpcTransport, "grpc", grpc_helpers), + ( + SpannerAsyncClient, + transports.SpannerGrpcAsyncIOTransport, + "grpc_asyncio", + grpc_helpers_async, + ), + ], +) +def test_spanner_client_create_channel_credentials_file( + client_class, transport_class, transport_name, grpc_helpers +): + # Check the case credentials file is provided. + options = client_options.ClientOptions(credentials_file="credentials.json") + + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options, transport=transport_name) + patched.assert_called_once_with( + credentials=None, + credentials_file="credentials.json", + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + metrics_interceptor=mock.ANY, + ) + + # test that the credentials from file are saved and used as the credentials. 
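+    # The chain below stubs out google.auth.load_credentials_from_file and
+    # google.auth.default separately, so the assertion can prove that
+    # create_channel received the file-based credentials (file_creds) rather
+    # than whatever ADC would have returned (creds).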
+ with mock.patch.object( + google.auth, "load_credentials_from_file", autospec=True + ) as load_creds, mock.patch.object( + google.auth, "default", autospec=True + ) as adc, mock.patch.object( + grpc_helpers, "create_channel" + ) as create_channel: + creds = ga_credentials.AnonymousCredentials() + file_creds = ga_credentials.AnonymousCredentials() + load_creds.return_value = (file_creds, None) + adc.return_value = (creds, None) + client = client_class(client_options=options, transport=transport_name) + create_channel.assert_called_with( + "spanner.googleapis.com:443", + credentials=file_creds, + credentials_file=None, + quota_project_id=None, + default_scopes=( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.data", + ), + scopes=None, + default_host="spanner.googleapis.com", + ssl_credentials=None, + options=[ + ("grpc.max_send_message_length", -1), + ("grpc.max_receive_message_length", -1), + ], + ) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.CreateSessionRequest, + dict, + ], +) +def test_create_session(request_type, transport: str = "grpc"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_session), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = spanner.Session( + name="name_value", + creator_role="creator_role_value", + multiplexed=True, + ) + response = client.create_session(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner.CreateSessionRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, spanner.Session) + assert response.name == "name_value" + assert response.creator_role == "creator_role_value" + assert response.multiplexed is True + + +def test_create_session_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner.CreateSessionRequest( + database="database_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_session), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. 
+ ) + client.create_session(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner.CreateSessionRequest( + database="database_value", + ) + + +def test_create_session_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.create_session in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.create_session] = mock_rpc + request = {} + client.create_session(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.create_session(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_create_session_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.create_session + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.create_session + ] = mock_rpc + + request = {} + await client.create_session(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.create_session(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_create_session_async( + transport: str = "grpc_asyncio", request_type=spanner.CreateSessionRequest +): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_session), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner.Session( + name="name_value", + creator_role="creator_role_value", + multiplexed=True, + ) + ) + response = await client.create_session(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner.CreateSessionRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, spanner.Session) + assert response.name == "name_value" + assert response.creator_role == "creator_role_value" + assert response.multiplexed is True + + +@pytest.mark.asyncio +async def test_create_session_async_from_dict(): + await test_create_session_async(request_type=dict) + + +def test_create_session_field_headers(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.CreateSessionRequest() + + request.database = "database_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_session), "__call__") as call: + call.return_value = spanner.Session() + client.create_session(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "database=database_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_create_session_field_headers_async(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.CreateSessionRequest() + + request.database = "database_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_session), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(spanner.Session()) + await client.create_session(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "database=database_value", + ) in kw["metadata"] + + +def test_create_session_flattened(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.create_session), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = spanner.Session() + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.create_session( + database="database_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + arg = args[0].database + mock_val = "database_value" + assert arg == mock_val + + +def test_create_session_flattened_error(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
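+    # i.e. passing a request object *and* keyword fields mixes the two
+    # calling conventions, so the generated client raises ValueError rather
+    # than guessing which value should win.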
+    with pytest.raises(ValueError):
+        client.create_session(
+            spanner.CreateSessionRequest(),
+            database="database_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_create_session_flattened_async():
+    client = SpannerAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.create_session), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(spanner.Session())
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.create_session(
+            database="database_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].database
+        mock_val = "database_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_create_session_flattened_error_async():
+    client = SpannerAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.create_session(
+            spanner.CreateSessionRequest(),
+            database="database_value",
+        )
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner.BatchCreateSessionsRequest,
+        dict,
+    ],
+)
+def test_batch_create_sessions(request_type, transport: str = "grpc"):
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.batch_create_sessions), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = spanner.BatchCreateSessionsResponse()
+        response = client.batch_create_sessions(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        request = spanner.BatchCreateSessionsRequest()
+        assert args[0] == request
+
+    # Establish that the response is the type that we expect.
+    assert isinstance(response, spanner.BatchCreateSessionsResponse)
+
+
+def test_batch_create_sessions_non_empty_request_with_auto_populated_field():
+    # This test is a coverage failsafe to make sure that UUID4 fields are
+    # automatically populated, according to AIP-4235, with non-empty requests.
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Populate all string fields in the request which are not UUID4
+    # since we want to check that UUID4 are populated automatically
+    # if they meet the requirements of AIP 4235.
+    request = spanner.BatchCreateSessionsRequest(
+        database="database_value",
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.batch_create_sessions), "__call__"
+    ) as call:
+        call.return_value.name = (
+            "foo"  # operation_request.operation in compute client(s) expect a string.
+ ) + client.batch_create_sessions(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner.BatchCreateSessionsRequest( + database="database_value", + ) + + +def test_batch_create_sessions_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.batch_create_sessions + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.batch_create_sessions + ] = mock_rpc + request = {} + client.batch_create_sessions(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.batch_create_sessions(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_batch_create_sessions_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.batch_create_sessions + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.batch_create_sessions + ] = mock_rpc + + request = {} + await client.batch_create_sessions(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.batch_create_sessions(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_batch_create_sessions_async( + transport: str = "grpc_asyncio", request_type=spanner.BatchCreateSessionsRequest +): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.batch_create_sessions), "__call__" + ) as call: + # Designate an appropriate return value for the call. 
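+        # FakeUnaryUnaryCall wraps the response so that the mocked stub can
+        # be awaited like a real gRPC unary-unary call.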
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner.BatchCreateSessionsResponse() + ) + response = await client.batch_create_sessions(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner.BatchCreateSessionsRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, spanner.BatchCreateSessionsResponse) + + +@pytest.mark.asyncio +async def test_batch_create_sessions_async_from_dict(): + await test_batch_create_sessions_async(request_type=dict) + + +def test_batch_create_sessions_field_headers(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.BatchCreateSessionsRequest() + + request.database = "database_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.batch_create_sessions), "__call__" + ) as call: + call.return_value = spanner.BatchCreateSessionsResponse() + client.batch_create_sessions(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "database=database_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_batch_create_sessions_field_headers_async(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.BatchCreateSessionsRequest() + + request.database = "database_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.batch_create_sessions), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner.BatchCreateSessionsResponse() + ) + await client.batch_create_sessions(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "database=database_value", + ) in kw["metadata"] + + +def test_batch_create_sessions_flattened(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.batch_create_sessions), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = spanner.BatchCreateSessionsResponse() + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.batch_create_sessions( + database="database_value", + session_count=1420, + ) + + # Establish that the underlying call was made with the expected + # request object values. 
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].database
+        mock_val = "database_value"
+        assert arg == mock_val
+        arg = args[0].session_count
+        mock_val = 1420
+        assert arg == mock_val
+
+
+def test_batch_create_sessions_flattened_error():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.batch_create_sessions(
+            spanner.BatchCreateSessionsRequest(),
+            database="database_value",
+            session_count=1420,
+        )
+
+
+@pytest.mark.asyncio
+async def test_batch_create_sessions_flattened_async():
+    client = SpannerAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(
+        type(client.transport.batch_create_sessions), "__call__"
+    ) as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            spanner.BatchCreateSessionsResponse()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.batch_create_sessions(
+            database="database_value",
+            session_count=1420,
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].database
+        mock_val = "database_value"
+        assert arg == mock_val
+        arg = args[0].session_count
+        mock_val = 1420
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_batch_create_sessions_flattened_error_async():
+    client = SpannerAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.batch_create_sessions(
+            spanner.BatchCreateSessionsRequest(),
+            database="database_value",
+            session_count=1420,
+        )
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner.GetSessionRequest,
+        dict,
+    ],
+)
+def test_get_session(request_type, transport: str = "grpc"):
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.get_session), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = spanner.Session(
+            name="name_value",
+            creator_role="creator_role_value",
+            multiplexed=True,
+        )
+        response = client.get_session(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        request = spanner.GetSessionRequest()
+        assert args[0] == request
+
+    # Establish that the response is the type that we expect.
+ assert isinstance(response, spanner.Session) + assert response.name == "name_value" + assert response.creator_role == "creator_role_value" + assert response.multiplexed is True + + +def test_get_session_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner.GetSessionRequest( + name="name_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_session), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.get_session(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner.GetSessionRequest( + name="name_value", + ) + + +def test_get_session_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.get_session in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.get_session] = mock_rpc + request = {} + client.get_session(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.get_session(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_get_session_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.get_session + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.get_session + ] = mock_rpc + + request = {} + await client.get_session(request) + + # Establish that the underlying gRPC stub method was called. 
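+        # The second invocation below must reuse the cached wrapper: the
+        # mocked rpc's call count climbs while wrap_method stays untouched.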
+ assert mock_rpc.call_count == 1 + + await client.get_session(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_get_session_async( + transport: str = "grpc_asyncio", request_type=spanner.GetSessionRequest +): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_session), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner.Session( + name="name_value", + creator_role="creator_role_value", + multiplexed=True, + ) + ) + response = await client.get_session(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner.GetSessionRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, spanner.Session) + assert response.name == "name_value" + assert response.creator_role == "creator_role_value" + assert response.multiplexed is True + + +@pytest.mark.asyncio +async def test_get_session_async_from_dict(): + await test_get_session_async(request_type=dict) + + +def test_get_session_field_headers(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.GetSessionRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_session), "__call__") as call: + call.return_value = spanner.Session() + client.get_session(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_get_session_field_headers_async(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.GetSessionRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.get_session), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(spanner.Session()) + await client.get_session(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. 
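+    # The x-goog-request-params entry mirrors the resource name from the
+    # request so the backend can route the call without parsing the body.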
+    _, _, kw = call.mock_calls[0]
+    assert (
+        "x-goog-request-params",
+        "name=name_value",
+    ) in kw["metadata"]
+
+
+def test_get_session_flattened():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.get_session), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = spanner.Session()
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        client.get_session(
+            name="name_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].name
+        mock_val = "name_value"
+        assert arg == mock_val
+
+
+def test_get_session_flattened_error():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.get_session(
+            spanner.GetSessionRequest(),
+            name="name_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_get_session_flattened_async():
+    client = SpannerAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.get_session), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(spanner.Session())
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.get_session(
+            name="name_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].name
+        mock_val = "name_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_get_session_flattened_error_async():
+    client = SpannerAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.get_session(
+            spanner.GetSessionRequest(),
+            name="name_value",
+        )
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner.ListSessionsRequest,
+        dict,
+    ],
+)
+def test_list_sessions(request_type, transport: str = "grpc"):
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.list_sessions), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = spanner.ListSessionsResponse(
+            next_page_token="next_page_token_value",
+        )
+        response = client.list_sessions(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        request = spanner.ListSessionsRequest()
+        assert args[0] == request
+
+    # Establish that the response is the type that we expect.
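+    # Note that list_sessions wraps the raw ListSessionsResponse in a pager,
+    # so the type check is against pagers.ListSessionsPager, not the proto.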
+ assert isinstance(response, pagers.ListSessionsPager) + assert response.next_page_token == "next_page_token_value" + + +def test_list_sessions_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner.ListSessionsRequest( + database="database_value", + page_token="page_token_value", + filter="filter_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_sessions), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.list_sessions(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner.ListSessionsRequest( + database="database_value", + page_token="page_token_value", + filter="filter_value", + ) + + +def test_list_sessions_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.list_sessions in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.list_sessions] = mock_rpc + request = {} + client.list_sessions(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.list_sessions(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_sessions_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.list_sessions + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.list_sessions + ] = mock_rpc + + request = {} + await client.list_sessions(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + await client.list_sessions(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_list_sessions_async( + transport: str = "grpc_asyncio", request_type=spanner.ListSessionsRequest +): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_sessions), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner.ListSessionsResponse( + next_page_token="next_page_token_value", + ) + ) + response = await client.list_sessions(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner.ListSessionsRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListSessionsAsyncPager) + assert response.next_page_token == "next_page_token_value" + + +@pytest.mark.asyncio +async def test_list_sessions_async_from_dict(): + await test_list_sessions_async(request_type=dict) + + +def test_list_sessions_field_headers(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.ListSessionsRequest() + + request.database = "database_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_sessions), "__call__") as call: + call.return_value = spanner.ListSessionsResponse() + client.list_sessions(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "database=database_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_list_sessions_field_headers_async(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.ListSessionsRequest() + + request.database = "database_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_sessions), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner.ListSessionsResponse() + ) + await client.list_sessions(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. 
+    _, _, kw = call.mock_calls[0]
+    assert (
+        "x-goog-request-params",
+        "database=database_value",
+    ) in kw["metadata"]
+
+
+def test_list_sessions_flattened():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.list_sessions), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = spanner.ListSessionsResponse()
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        client.list_sessions(
+            database="database_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].database
+        mock_val = "database_value"
+        assert arg == mock_val
+
+
+def test_list_sessions_flattened_error():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.list_sessions(
+            spanner.ListSessionsRequest(),
+            database="database_value",
+        )
+
+
+@pytest.mark.asyncio
+async def test_list_sessions_flattened_async():
+    client = SpannerAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.list_sessions), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            spanner.ListSessionsResponse()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.list_sessions(
+            database="database_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].database
+        mock_val = "database_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_list_sessions_flattened_error_async():
+    client = SpannerAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.list_sessions(
+            spanner.ListSessionsRequest(),
+            database="database_value",
+        )
+
+
+def test_list_sessions_pager(transport_name: str = "grpc"):
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport_name,
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.list_sessions), "__call__") as call:
+        # Set the response to a series of pages.
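+        # side_effect returns one element per call: four pages (3 + 0 + 1 + 2
+        # sessions) and then RuntimeError, which would only surface if the
+        # pager requested a page past the final, token-less response.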
+ call.side_effect = ( + spanner.ListSessionsResponse( + sessions=[ + spanner.Session(), + spanner.Session(), + spanner.Session(), + ], + next_page_token="abc", + ), + spanner.ListSessionsResponse( + sessions=[], + next_page_token="def", + ), + spanner.ListSessionsResponse( + sessions=[ + spanner.Session(), + ], + next_page_token="ghi", + ), + spanner.ListSessionsResponse( + sessions=[ + spanner.Session(), + spanner.Session(), + ], + ), + RuntimeError, + ) + + expected_metadata = () + retry = retries.Retry() + timeout = 5 + expected_metadata = tuple(expected_metadata) + ( + gapic_v1.routing_header.to_grpc_metadata((("database", ""),)), + ) + pager = client.list_sessions(request={}, retry=retry, timeout=timeout) + + assert pager._metadata == expected_metadata + assert pager._retry == retry + assert pager._timeout == timeout + + results = list(pager) + assert len(results) == 6 + assert all(isinstance(i, spanner.Session) for i in results) + + +def test_list_sessions_pages(transport_name: str = "grpc"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport_name, + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.list_sessions), "__call__") as call: + # Set the response to a series of pages. + call.side_effect = ( + spanner.ListSessionsResponse( + sessions=[ + spanner.Session(), + spanner.Session(), + spanner.Session(), + ], + next_page_token="abc", + ), + spanner.ListSessionsResponse( + sessions=[], + next_page_token="def", + ), + spanner.ListSessionsResponse( + sessions=[ + spanner.Session(), + ], + next_page_token="ghi", + ), + spanner.ListSessionsResponse( + sessions=[ + spanner.Session(), + spanner.Session(), + ], + ), + RuntimeError, + ) + pages = list(client.list_sessions(request={}).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.asyncio +async def test_list_sessions_async_pager(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_sessions), "__call__", new_callable=mock.AsyncMock + ) as call: + # Set the response to a series of pages. + call.side_effect = ( + spanner.ListSessionsResponse( + sessions=[ + spanner.Session(), + spanner.Session(), + spanner.Session(), + ], + next_page_token="abc", + ), + spanner.ListSessionsResponse( + sessions=[], + next_page_token="def", + ), + spanner.ListSessionsResponse( + sessions=[ + spanner.Session(), + ], + next_page_token="ghi", + ), + spanner.ListSessionsResponse( + sessions=[ + spanner.Session(), + spanner.Session(), + ], + ), + RuntimeError, + ) + async_pager = await client.list_sessions( + request={}, + ) + assert async_pager.next_page_token == "abc" + responses = [] + async for response in async_pager: # pragma: no branch + responses.append(response) + + assert len(responses) == 6 + assert all(isinstance(i, spanner.Session) for i in responses) + + +@pytest.mark.asyncio +async def test_list_sessions_async_pages(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.list_sessions), "__call__", new_callable=mock.AsyncMock + ) as call: + # Set the response to a series of pages. 
+ call.side_effect = ( + spanner.ListSessionsResponse( + sessions=[ + spanner.Session(), + spanner.Session(), + spanner.Session(), + ], + next_page_token="abc", + ), + spanner.ListSessionsResponse( + sessions=[], + next_page_token="def", + ), + spanner.ListSessionsResponse( + sessions=[ + spanner.Session(), + ], + next_page_token="ghi", + ), + spanner.ListSessionsResponse( + sessions=[ + spanner.Session(), + spanner.Session(), + ], + ), + RuntimeError, + ) + pages = [] + # Workaround issue in python 3.9 related to code coverage by adding `# pragma: no branch` + # See https://github.com/googleapis/gapic-generator-python/pull/1174#issuecomment-1025132372 + async for page_ in ( # pragma: no branch + await client.list_sessions(request={}) + ).pages: + pages.append(page_) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.DeleteSessionRequest, + dict, + ], +) +def test_delete_session(request_type, transport: str = "grpc"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_session), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = None + response = client.delete_session(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner.DeleteSessionRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert response is None + + +def test_delete_session_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner.DeleteSessionRequest( + name="name_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_session), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. 
+ ) + client.delete_session(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner.DeleteSessionRequest( + name="name_value", + ) + + +def test_delete_session_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.delete_session in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.delete_session] = mock_rpc + request = {} + client.delete_session(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.delete_session(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_delete_session_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.delete_session + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.delete_session + ] = mock_rpc + + request = {} + await client.delete_session(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.delete_session(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_delete_session_async( + transport: str = "grpc_asyncio", request_type=spanner.DeleteSessionRequest +): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_session), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + response = await client.delete_session(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner.DeleteSessionRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert response is None + + +@pytest.mark.asyncio +async def test_delete_session_async_from_dict(): + await test_delete_session_async(request_type=dict) + + +def test_delete_session_field_headers(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.DeleteSessionRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_session), "__call__") as call: + call.return_value = None + client.delete_session(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_delete_session_field_headers_async(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.DeleteSessionRequest() + + request.name = "name_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_session), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + await client.delete_session(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "name=name_value", + ) in kw["metadata"] + + +def test_delete_session_flattened(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.delete_session), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = None + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.delete_session( + name="name_value", + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + arg = args[0].name + mock_val = "name_value" + assert arg == mock_val + + +def test_delete_session_flattened_error(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.delete_session( + spanner.DeleteSessionRequest(), + name="name_value", + ) + + +@pytest.mark.asyncio +async def test_delete_session_flattened_async(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. 
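+    # The async transport returns awaitable call objects, so the stubbed
+    # return value below is wrapped in grpc_helpers_async.FakeUnaryUnaryCall.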
+    with mock.patch.object(type(client.transport.delete_session), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.delete_session(
+            name="name_value",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].name
+        mock_val = "name_value"
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_delete_session_flattened_error_async():
+    client = SpannerAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.delete_session(
+            spanner.DeleteSessionRequest(),
+            name="name_value",
+        )
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner.ExecuteSqlRequest,
+        dict,
+    ],
+)
+def test_execute_sql(request_type, transport: str = "grpc"):
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.execute_sql), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = result_set.ResultSet()
+        response = client.execute_sql(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        request = spanner.ExecuteSqlRequest()
+        assert args[0] == request
+
+    # Establish that the response is the type that we expect.
+    assert isinstance(response, result_set.ResultSet)
+
+
+def test_execute_sql_non_empty_request_with_auto_populated_field():
+    # This test is a coverage failsafe to make sure that UUID4 fields are
+    # automatically populated, according to AIP-4235, with non-empty requests.
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Populate all string fields in the request which are not UUID4
+    # since we want to check that UUID4 are populated automatically
+    # if they meet the requirements of AIP 4235.
+    request = spanner.ExecuteSqlRequest(
+        session="session_value",
+        sql="sql_value",
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.execute_sql), "__call__") as call:
+        call.return_value.name = (
+            "foo"  # operation_request.operation in compute client(s) expect a string.
+ ) + client.execute_sql(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner.ExecuteSqlRequest( + session="session_value", + sql="sql_value", + ) + + +def test_execute_sql_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.execute_sql in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.execute_sql] = mock_rpc + request = {} + client.execute_sql(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.execute_sql(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_execute_sql_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.execute_sql + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.execute_sql + ] = mock_rpc + + request = {} + await client.execute_sql(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.execute_sql(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_execute_sql_async( + transport: str = "grpc_asyncio", request_type=spanner.ExecuteSqlRequest +): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.execute_sql), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + result_set.ResultSet() + ) + response = await client.execute_sql(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner.ExecuteSqlRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, result_set.ResultSet) + + +@pytest.mark.asyncio +async def test_execute_sql_async_from_dict(): + await test_execute_sql_async(request_type=dict) + + +def test_execute_sql_field_headers(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.ExecuteSqlRequest() + + request.session = "session_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.execute_sql), "__call__") as call: + call.return_value = result_set.ResultSet() + client.execute_sql(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "session=session_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_execute_sql_field_headers_async(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.ExecuteSqlRequest() + + request.session = "session_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.execute_sql), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + result_set.ResultSet() + ) + await client.execute_sql(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "session=session_value", + ) in kw["metadata"] + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.ExecuteSqlRequest, + dict, + ], +) +def test_execute_streaming_sql(request_type, transport: str = "grpc"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.execute_streaming_sql), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = iter([result_set.PartialResultSet()]) + response = client.execute_streaming_sql(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner.ExecuteSqlRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. 
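+    # execute_streaming_sql is a server-streaming RPC, so the mocked response
+    # is an iterator of PartialResultSet messages rather than a single ResultSet.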
+ for message in response: + assert isinstance(message, result_set.PartialResultSet) + + +def test_execute_streaming_sql_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner.ExecuteSqlRequest( + session="session_value", + sql="sql_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.execute_streaming_sql), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.execute_streaming_sql(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner.ExecuteSqlRequest( + session="session_value", + sql="sql_value", + ) + + +def test_execute_streaming_sql_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.execute_streaming_sql + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.execute_streaming_sql + ] = mock_rpc + request = {} + client.execute_streaming_sql(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.execute_streaming_sql(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_execute_streaming_sql_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.execute_streaming_sql + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.execute_streaming_sql + ] = mock_rpc + + request = {} + await client.execute_streaming_sql(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + await client.execute_streaming_sql(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_execute_streaming_sql_async( + transport: str = "grpc_asyncio", request_type=spanner.ExecuteSqlRequest +): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.execute_streaming_sql), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = mock.Mock(aio.UnaryStreamCall, autospec=True) + call.return_value.read = mock.AsyncMock( + side_effect=[result_set.PartialResultSet()] + ) + response = await client.execute_streaming_sql(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner.ExecuteSqlRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + message = await response.read() + assert isinstance(message, result_set.PartialResultSet) + + +@pytest.mark.asyncio +async def test_execute_streaming_sql_async_from_dict(): + await test_execute_streaming_sql_async(request_type=dict) + + +def test_execute_streaming_sql_field_headers(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.ExecuteSqlRequest() + + request.session = "session_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.execute_streaming_sql), "__call__" + ) as call: + call.return_value = iter([result_set.PartialResultSet()]) + client.execute_streaming_sql(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "session=session_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_execute_streaming_sql_field_headers_async(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.ExecuteSqlRequest() + + request.session = "session_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.execute_streaming_sql), "__call__" + ) as call: + call.return_value = mock.Mock(aio.UnaryStreamCall, autospec=True) + call.return_value.read = mock.AsyncMock( + side_effect=[result_set.PartialResultSet()] + ) + await client.execute_streaming_sql(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. 
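+    # The session name is carried to the server as the "x-goog-request-params"
+    # metadata entry, which holds the request routing parameters.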
+ _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "session=session_value", + ) in kw["metadata"] + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.ExecuteBatchDmlRequest, + dict, + ], +) +def test_execute_batch_dml(request_type, transport: str = "grpc"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.execute_batch_dml), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = spanner.ExecuteBatchDmlResponse() + response = client.execute_batch_dml(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner.ExecuteBatchDmlRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, spanner.ExecuteBatchDmlResponse) + + +def test_execute_batch_dml_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner.ExecuteBatchDmlRequest( + session="session_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.execute_batch_dml), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.execute_batch_dml(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner.ExecuteBatchDmlRequest( + session="session_value", + ) + + +def test_execute_batch_dml_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.execute_batch_dml in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.execute_batch_dml + ] = mock_rpc + request = {} + client.execute_batch_dml(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.execute_batch_dml(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_execute_batch_dml_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.execute_batch_dml + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.execute_batch_dml + ] = mock_rpc + + request = {} + await client.execute_batch_dml(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.execute_batch_dml(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_execute_batch_dml_async( + transport: str = "grpc_asyncio", request_type=spanner.ExecuteBatchDmlRequest +): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.execute_batch_dml), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner.ExecuteBatchDmlResponse() + ) + response = await client.execute_batch_dml(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner.ExecuteBatchDmlRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, spanner.ExecuteBatchDmlResponse) + + +@pytest.mark.asyncio +async def test_execute_batch_dml_async_from_dict(): + await test_execute_batch_dml_async(request_type=dict) + + +def test_execute_batch_dml_field_headers(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.ExecuteBatchDmlRequest() + + request.session = "session_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.execute_batch_dml), "__call__" + ) as call: + call.return_value = spanner.ExecuteBatchDmlResponse() + client.execute_batch_dml(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. 
+ _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "session=session_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_execute_batch_dml_field_headers_async(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.ExecuteBatchDmlRequest() + + request.session = "session_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.execute_batch_dml), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner.ExecuteBatchDmlResponse() + ) + await client.execute_batch_dml(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "session=session_value", + ) in kw["metadata"] + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.ReadRequest, + dict, + ], +) +def test_read(request_type, transport: str = "grpc"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.read), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = result_set.ResultSet() + response = client.read(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner.ReadRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, result_set.ResultSet) + + +def test_read_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner.ReadRequest( + session="session_value", + table="table_value", + index="index_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.read), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. 
+ ) + client.read(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner.ReadRequest( + session="session_value", + table="table_value", + index="index_value", + ) + + +def test_read_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.read in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.read] = mock_rpc + request = {} + client.read(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.read(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_read_async_use_cached_wrapped_rpc(transport: str = "grpc_asyncio"): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.read in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.read + ] = mock_rpc + + request = {} + await client.read(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.read(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_read_async( + transport: str = "grpc_asyncio", request_type=spanner.ReadRequest +): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.read), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + result_set.ResultSet() + ) + response = await client.read(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner.ReadRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, result_set.ResultSet) + + +@pytest.mark.asyncio +async def test_read_async_from_dict(): + await test_read_async(request_type=dict) + + +def test_read_field_headers(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.ReadRequest() + + request.session = "session_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.read), "__call__") as call: + call.return_value = result_set.ResultSet() + client.read(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "session=session_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_read_field_headers_async(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.ReadRequest() + + request.session = "session_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.read), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + result_set.ResultSet() + ) + await client.read(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "session=session_value", + ) in kw["metadata"] + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.ReadRequest, + dict, + ], +) +def test_streaming_read(request_type, transport: str = "grpc"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.streaming_read), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = iter([result_set.PartialResultSet()]) + response = client.streaming_read(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner.ReadRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + for message in response: + assert isinstance(message, result_set.PartialResultSet) + + +def test_streaming_read_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. 
+ request = spanner.ReadRequest( + session="session_value", + table="table_value", + index="index_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.streaming_read), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.streaming_read(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner.ReadRequest( + session="session_value", + table="table_value", + index="index_value", + ) + + +def test_streaming_read_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.streaming_read in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.streaming_read] = mock_rpc + request = {} + client.streaming_read(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.streaming_read(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_streaming_read_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.streaming_read + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.streaming_read + ] = mock_rpc + + request = {} + await client.streaming_read(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.streaming_read(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_streaming_read_async( + transport: str = "grpc_asyncio", request_type=spanner.ReadRequest +): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. 
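+    # Async server-streaming stubs return a UnaryStreamCall, so the mock fakes
+    # its read() coroutine to yield a single PartialResultSet.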
+ with mock.patch.object(type(client.transport.streaming_read), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = mock.Mock(aio.UnaryStreamCall, autospec=True) + call.return_value.read = mock.AsyncMock( + side_effect=[result_set.PartialResultSet()] + ) + response = await client.streaming_read(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner.ReadRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + message = await response.read() + assert isinstance(message, result_set.PartialResultSet) + + +@pytest.mark.asyncio +async def test_streaming_read_async_from_dict(): + await test_streaming_read_async(request_type=dict) + + +def test_streaming_read_field_headers(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.ReadRequest() + + request.session = "session_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.streaming_read), "__call__") as call: + call.return_value = iter([result_set.PartialResultSet()]) + client.streaming_read(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "session=session_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_streaming_read_field_headers_async(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.ReadRequest() + + request.session = "session_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.streaming_read), "__call__") as call: + call.return_value = mock.Mock(aio.UnaryStreamCall, autospec=True) + call.return_value.read = mock.AsyncMock( + side_effect=[result_set.PartialResultSet()] + ) + await client.streaming_read(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "session=session_value", + ) in kw["metadata"] + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.BeginTransactionRequest, + dict, + ], +) +def test_begin_transaction(request_type, transport: str = "grpc"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.begin_transaction), "__call__" + ) as call: + # Designate an appropriate return value for the call. 
+ call.return_value = transaction.Transaction( + id=b"id_blob", + ) + response = client.begin_transaction(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner.BeginTransactionRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, transaction.Transaction) + assert response.id == b"id_blob" + + +def test_begin_transaction_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner.BeginTransactionRequest( + session="session_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.begin_transaction), "__call__" + ) as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.begin_transaction(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner.BeginTransactionRequest( + session="session_value", + ) + + +def test_begin_transaction_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.begin_transaction in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.begin_transaction + ] = mock_rpc + request = {} + client.begin_transaction(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.begin_transaction(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_begin_transaction_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.begin_transaction + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.begin_transaction + ] = mock_rpc + + request = {} + await client.begin_transaction(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.begin_transaction(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_begin_transaction_async( + transport: str = "grpc_asyncio", request_type=spanner.BeginTransactionRequest +): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.begin_transaction), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + transaction.Transaction( + id=b"id_blob", + ) + ) + response = await client.begin_transaction(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner.BeginTransactionRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, transaction.Transaction) + assert response.id == b"id_blob" + + +@pytest.mark.asyncio +async def test_begin_transaction_async_from_dict(): + await test_begin_transaction_async(request_type=dict) + + +def test_begin_transaction_field_headers(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.BeginTransactionRequest() + + request.session = "session_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.begin_transaction), "__call__" + ) as call: + call.return_value = transaction.Transaction() + client.begin_transaction(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "session=session_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_begin_transaction_field_headers_async(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.BeginTransactionRequest() + + request.session = "session_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.begin_transaction), "__call__" + ) as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + transaction.Transaction() + ) + await client.begin_transaction(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "session=session_value", + ) in kw["metadata"] + + +def test_begin_transaction_flattened(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.begin_transaction), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = transaction.Transaction() + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.begin_transaction( + session="session_value", + options=transaction.TransactionOptions( + read_write=transaction.TransactionOptions.ReadWrite( + read_lock_mode=transaction.TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC + ) + ), + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + arg = args[0].session + mock_val = "session_value" + assert arg == mock_val + arg = args[0].options + mock_val = transaction.TransactionOptions( + read_write=transaction.TransactionOptions.ReadWrite( + read_lock_mode=transaction.TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC + ) + ) + assert arg == mock_val + + +def test_begin_transaction_flattened_error(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.begin_transaction( + spanner.BeginTransactionRequest(), + session="session_value", + options=transaction.TransactionOptions( + read_write=transaction.TransactionOptions.ReadWrite( + read_lock_mode=transaction.TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC + ) + ), + ) + + +@pytest.mark.asyncio +async def test_begin_transaction_flattened_async(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.begin_transaction), "__call__" + ) as call: + # Designate an appropriate return value for the call. 
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            transaction.Transaction()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.begin_transaction(
+            session="session_value",
+            options=transaction.TransactionOptions(
+                read_write=transaction.TransactionOptions.ReadWrite(
+                    read_lock_mode=transaction.TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC
+                )
+            ),
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].session
+        mock_val = "session_value"
+        assert arg == mock_val
+        arg = args[0].options
+        mock_val = transaction.TransactionOptions(
+            read_write=transaction.TransactionOptions.ReadWrite(
+                read_lock_mode=transaction.TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC
+            )
+        )
+        assert arg == mock_val
+
+
+@pytest.mark.asyncio
+async def test_begin_transaction_flattened_error_async():
+    client = SpannerAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.begin_transaction(
+            spanner.BeginTransactionRequest(),
+            session="session_value",
+            options=transaction.TransactionOptions(
+                read_write=transaction.TransactionOptions.ReadWrite(
+                    read_lock_mode=transaction.TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC
+                )
+            ),
+        )
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner.CommitRequest,
+        dict,
+    ],
+)
+def test_commit(request_type, transport: str = "grpc"):
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.commit), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = commit_response.CommitResponse()
+        response = client.commit(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        request = spanner.CommitRequest()
+        assert args[0] == request
+
+    # Establish that the response is the type that we expect.
+    assert isinstance(response, commit_response.CommitResponse)
+
+
+def test_commit_non_empty_request_with_auto_populated_field():
+    # This test is a coverage failsafe to make sure that UUID4 fields are
+    # automatically populated, according to AIP-4235, with non-empty requests.
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Populate all string fields in the request which are not UUID4
+    # since we want to check that UUID4 are populated automatically
+    # if they meet the requirements of AIP 4235.
+    request = spanner.CommitRequest(
+        session="session_value",
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.commit), "__call__") as call:
+        call.return_value.name = (
+            "foo"  # operation_request.operation in compute client(s) expect a string.
+ ) + client.commit(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner.CommitRequest( + session="session_value", + ) + + +def test_commit_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.commit in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.commit] = mock_rpc + request = {} + client.commit(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.commit(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_commit_async_use_cached_wrapped_rpc(transport: str = "grpc_asyncio"): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.commit + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.commit + ] = mock_rpc + + request = {} + await client.commit(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.commit(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_commit_async( + transport: str = "grpc_asyncio", request_type=spanner.CommitRequest +): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.commit), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + commit_response.CommitResponse() + ) + response = await client.commit(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner.CommitRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. 
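+    # (grpc_helpers_async.FakeUnaryUnaryCall wraps the message in an awaitable,
+    # so awaiting the mocked RPC above yields the plain CommitResponse.)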
+ assert isinstance(response, commit_response.CommitResponse) + + +@pytest.mark.asyncio +async def test_commit_async_from_dict(): + await test_commit_async(request_type=dict) + + +def test_commit_field_headers(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.CommitRequest() + + request.session = "session_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.commit), "__call__") as call: + call.return_value = commit_response.CommitResponse() + client.commit(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "session=session_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_commit_field_headers_async(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.CommitRequest() + + request.session = "session_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.commit), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + commit_response.CommitResponse() + ) + await client.commit(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "session=session_value", + ) in kw["metadata"] + + +def test_commit_flattened(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.commit), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = commit_response.CommitResponse() + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.commit( + session="session_value", + transaction_id=b"transaction_id_blob", + mutations=[ + mutation.Mutation(insert=mutation.Mutation.Write(table="table_value")) + ], + single_use_transaction=transaction.TransactionOptions( + read_write=transaction.TransactionOptions.ReadWrite( + read_lock_mode=transaction.TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC + ) + ), + ) + + # Establish that the underlying call was made with the expected + # request object values. 
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].session
+        mock_val = "session_value"
+        assert arg == mock_val
+        arg = args[0].mutations
+        mock_val = [
+            mutation.Mutation(insert=mutation.Mutation.Write(table="table_value"))
+        ]
+        assert arg == mock_val
+        assert args[0].single_use_transaction == transaction.TransactionOptions(
+            read_write=transaction.TransactionOptions.ReadWrite(
+                read_lock_mode=transaction.TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC
+            )
+        )
+
+
+def test_commit_flattened_error():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.commit(
+            spanner.CommitRequest(),
+            session="session_value",
+            transaction_id=b"transaction_id_blob",
+            mutations=[
+                mutation.Mutation(insert=mutation.Mutation.Write(table="table_value"))
+            ],
+            single_use_transaction=transaction.TransactionOptions(
+                read_write=transaction.TransactionOptions.ReadWrite(
+                    read_lock_mode=transaction.TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC
+                )
+            ),
+        )
+
+
+@pytest.mark.asyncio
+async def test_commit_flattened_async():
+    client = SpannerAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.commit), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            commit_response.CommitResponse()
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.commit(
+            session="session_value",
+            transaction_id=b"transaction_id_blob",
+            mutations=[
+                mutation.Mutation(insert=mutation.Mutation.Write(table="table_value"))
+            ],
+            single_use_transaction=transaction.TransactionOptions(
+                read_write=transaction.TransactionOptions.ReadWrite(
+                    read_lock_mode=transaction.TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC
+                )
+            ),
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].session
+        mock_val = "session_value"
+        assert arg == mock_val
+        arg = args[0].mutations
+        mock_val = [
+            mutation.Mutation(insert=mutation.Mutation.Write(table="table_value"))
+        ]
+        assert arg == mock_val
+        assert args[0].single_use_transaction == transaction.TransactionOptions(
+            read_write=transaction.TransactionOptions.ReadWrite(
+                read_lock_mode=transaction.TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC
+            )
+        )
+
+
+@pytest.mark.asyncio
+async def test_commit_flattened_error_async():
+    client = SpannerAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
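+    # (This check is purely client-side: the ValueError below is raised
+    # before any RPC is attempted.)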
+ with pytest.raises(ValueError): + await client.commit( + spanner.CommitRequest(), + session="session_value", + transaction_id=b"transaction_id_blob", + mutations=[ + mutation.Mutation(insert=mutation.Mutation.Write(table="table_value")) + ], + single_use_transaction=transaction.TransactionOptions( + read_write=transaction.TransactionOptions.ReadWrite( + read_lock_mode=transaction.TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC + ) + ), + ) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.RollbackRequest, + dict, + ], +) +def test_rollback(request_type, transport: str = "grpc"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.rollback), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = None + response = client.rollback(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner.RollbackRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert response is None + + +def test_rollback_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner.RollbackRequest( + session="session_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.rollback), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.rollback(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner.RollbackRequest( + session="session_value", + ) + + +def test_rollback_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.rollback in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.rollback] = mock_rpc + request = {} + client.rollback(request) + + # Establish that the underlying gRPC stub method was called. 
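+        # (The request above was dispatched through the _wrapped_methods cache,
+        # so the mock standing in for the cached wrapper records the call.)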
+ assert mock_rpc.call_count == 1 + + client.rollback(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_rollback_async_use_cached_wrapped_rpc(transport: str = "grpc_asyncio"): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.rollback + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.rollback + ] = mock_rpc + + request = {} + await client.rollback(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.rollback(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_rollback_async( + transport: str = "grpc_asyncio", request_type=spanner.RollbackRequest +): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.rollback), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + response = await client.rollback(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner.RollbackRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert response is None + + +@pytest.mark.asyncio +async def test_rollback_async_from_dict(): + await test_rollback_async(request_type=dict) + + +def test_rollback_field_headers(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.RollbackRequest() + + request.session = "session_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.rollback), "__call__") as call: + call.return_value = None + client.rollback(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. 
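+    # (The x-goog-request-params metadata entry carries the routing parameters
+    # derived from the request's session field.)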
+    _, _, kw = call.mock_calls[0]
+    assert (
+        "x-goog-request-params",
+        "session=session_value",
+    ) in kw["metadata"]
+
+
+@pytest.mark.asyncio
+async def test_rollback_field_headers_async():
+    client = SpannerAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Any value that is part of the HTTP/1.1 URI should be sent as
+    # a field header. Set these to a non-empty value.
+    request = spanner.RollbackRequest()
+
+    request.session = "session_value"
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.rollback), "__call__") as call:
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
+        await client.rollback(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        assert args[0] == request
+
+    # Establish that the field header was sent.
+    _, _, kw = call.mock_calls[0]
+    assert (
+        "x-goog-request-params",
+        "session=session_value",
+    ) in kw["metadata"]
+
+
+def test_rollback_flattened():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.rollback), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = None
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        client.rollback(
+            session="session_value",
+            transaction_id=b"transaction_id_blob",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].session
+        mock_val = "session_value"
+        assert arg == mock_val
+        arg = args[0].transaction_id
+        mock_val = b"transaction_id_blob"
+        assert arg == mock_val
+
+
+def test_rollback_flattened_error():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.rollback(
+            spanner.RollbackRequest(),
+            session="session_value",
+            transaction_id=b"transaction_id_blob",
+        )
+
+
+@pytest.mark.asyncio
+async def test_rollback_flattened_async():
+    client = SpannerAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.rollback), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None)
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.rollback(
+            session="session_value",
+            transaction_id=b"transaction_id_blob",
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+ assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + arg = args[0].session + mock_val = "session_value" + assert arg == mock_val + arg = args[0].transaction_id + mock_val = b"transaction_id_blob" + assert arg == mock_val + + +@pytest.mark.asyncio +async def test_rollback_flattened_error_async(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + await client.rollback( + spanner.RollbackRequest(), + session="session_value", + transaction_id=b"transaction_id_blob", + ) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.PartitionQueryRequest, + dict, + ], +) +def test_partition_query(request_type, transport: str = "grpc"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.partition_query), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = spanner.PartitionResponse() + response = client.partition_query(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner.PartitionQueryRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, spanner.PartitionResponse) + + +def test_partition_query_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner.PartitionQueryRequest( + session="session_value", + sql="sql_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.partition_query), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. 
+ ) + client.partition_query(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner.PartitionQueryRequest( + session="session_value", + sql="sql_value", + ) + + +def test_partition_query_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.partition_query in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.partition_query] = mock_rpc + request = {} + client.partition_query(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.partition_query(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_partition_query_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.partition_query + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.partition_query + ] = mock_rpc + + request = {} + await client.partition_query(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.partition_query(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_partition_query_async( + transport: str = "grpc_asyncio", request_type=spanner.PartitionQueryRequest +): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.partition_query), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner.PartitionResponse() + ) + response = await client.partition_query(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner.PartitionQueryRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, spanner.PartitionResponse) + + +@pytest.mark.asyncio +async def test_partition_query_async_from_dict(): + await test_partition_query_async(request_type=dict) + + +def test_partition_query_field_headers(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.PartitionQueryRequest() + + request.session = "session_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.partition_query), "__call__") as call: + call.return_value = spanner.PartitionResponse() + client.partition_query(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "session=session_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_partition_query_field_headers_async(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.PartitionQueryRequest() + + request.session = "session_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.partition_query), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner.PartitionResponse() + ) + await client.partition_query(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "session=session_value", + ) in kw["metadata"] + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.PartitionReadRequest, + dict, + ], +) +def test_partition_read(request_type, transport: str = "grpc"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.partition_read), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = spanner.PartitionResponse() + response = client.partition_read(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner.PartitionReadRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, spanner.PartitionResponse) + + +def test_partition_read_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner.PartitionReadRequest( + session="session_value", + table="table_value", + index="index_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.partition_read), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.partition_read(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner.PartitionReadRequest( + session="session_value", + table="table_value", + index="index_value", + ) + + +def test_partition_read_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.partition_read in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.partition_read] = mock_rpc + request = {} + client.partition_read(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.partition_read(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_partition_read_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.partition_read + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.partition_read + ] = mock_rpc + + request = {} + await client.partition_read(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + await client.partition_read(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_partition_read_async( + transport: str = "grpc_asyncio", request_type=spanner.PartitionReadRequest +): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.partition_read), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner.PartitionResponse() + ) + response = await client.partition_read(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner.PartitionReadRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + assert isinstance(response, spanner.PartitionResponse) + + +@pytest.mark.asyncio +async def test_partition_read_async_from_dict(): + await test_partition_read_async(request_type=dict) + + +def test_partition_read_field_headers(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.PartitionReadRequest() + + request.session = "session_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.partition_read), "__call__") as call: + call.return_value = spanner.PartitionResponse() + client.partition_read(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. + _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "session=session_value", + ) in kw["metadata"] + + +@pytest.mark.asyncio +async def test_partition_read_field_headers_async(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.PartitionReadRequest() + + request.session = "session_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.partition_read), "__call__") as call: + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner.PartitionResponse() + ) + await client.partition_read(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. 
+ _, _, kw = call.mock_calls[0] + assert ( + "x-goog-request-params", + "session=session_value", + ) in kw["metadata"] + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.BatchWriteRequest, + dict, + ], +) +def test_batch_write(request_type, transport: str = "grpc"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.batch_write), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = iter([spanner.BatchWriteResponse()]) + response = client.batch_write(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + request = spanner.BatchWriteRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + for message in response: + assert isinstance(message, spanner.BatchWriteResponse) + + +def test_batch_write_non_empty_request_with_auto_populated_field(): + # This test is a coverage failsafe to make sure that UUID4 fields are + # automatically populated, according to AIP-4235, with non-empty requests. + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Populate all string fields in the request which are not UUID4 + # since we want to check that UUID4 are populated automatically + # if they meet the requirements of AIP 4235. + request = spanner.BatchWriteRequest( + session="session_value", + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.batch_write), "__call__") as call: + call.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client.batch_write(request=request) + call.assert_called() + _, args, _ = call.mock_calls[0] + assert args[0] == spanner.BatchWriteRequest( + session="session_value", + ) + + +def test_batch_write_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.batch_write in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.batch_write] = mock_rpc + request = {} + client.batch_write(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.batch_write(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_batch_write_async_use_cached_wrapped_rpc( + transport: str = "grpc_asyncio", +): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method_async.wrap_method") as wrapper_fn: + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._client._transport.batch_write + in client._client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.AsyncMock() + mock_rpc.return_value = mock.Mock() + client._client._transport._wrapped_methods[ + client._client._transport.batch_write + ] = mock_rpc + + request = {} + await client.batch_write(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + await client.batch_write(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +@pytest.mark.asyncio +async def test_batch_write_async( + transport: str = "grpc_asyncio", request_type=spanner.BatchWriteRequest +): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.batch_write), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = mock.Mock(aio.UnaryStreamCall, autospec=True) + call.return_value.read = mock.AsyncMock( + side_effect=[spanner.BatchWriteResponse()] + ) + response = await client.batch_write(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + request = spanner.BatchWriteRequest() + assert args[0] == request + + # Establish that the response is the type that we expect. + message = await response.read() + assert isinstance(message, spanner.BatchWriteResponse) + + +@pytest.mark.asyncio +async def test_batch_write_async_from_dict(): + await test_batch_write_async(request_type=dict) + + +def test_batch_write_field_headers(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Any value that is part of the HTTP/1.1 URI should be sent as + # a field header. Set these to a non-empty value. + request = spanner.BatchWriteRequest() + + request.session = "session_value" + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.batch_write), "__call__") as call: + call.return_value = iter([spanner.BatchWriteResponse()]) + client.batch_write(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + assert args[0] == request + + # Establish that the field header was sent. 
+    _, _, kw = call.mock_calls[0]
+    assert (
+        "x-goog-request-params",
+        "session=session_value",
+    ) in kw["metadata"]
+
+
+@pytest.mark.asyncio
+async def test_batch_write_field_headers_async():
+    client = SpannerAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Any value that is part of the HTTP/1.1 URI should be sent as
+    # a field header. Set these to a non-empty value.
+    request = spanner.BatchWriteRequest()
+
+    request.session = "session_value"
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.batch_write), "__call__") as call:
+        call.return_value = mock.Mock(aio.UnaryStreamCall, autospec=True)
+        call.return_value.read = mock.AsyncMock(
+            side_effect=[spanner.BatchWriteResponse()]
+        )
+        await client.batch_write(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+        assert args[0] == request
+
+    # Establish that the field header was sent.
+    _, _, kw = call.mock_calls[0]
+    assert (
+        "x-goog-request-params",
+        "session=session_value",
+    ) in kw["metadata"]
+
+
+def test_batch_write_flattened():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.batch_write), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = iter([spanner.BatchWriteResponse()])
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        client.batch_write(
+            session="session_value",
+            mutation_groups=[
+                spanner.BatchWriteRequest.MutationGroup(
+                    mutations=[
+                        mutation.Mutation(
+                            insert=mutation.Mutation.Write(table="table_value")
+                        )
+                    ]
+                )
+            ],
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+        arg = args[0].session
+        mock_val = "session_value"
+        assert arg == mock_val
+        arg = args[0].mutation_groups
+        mock_val = [
+            spanner.BatchWriteRequest.MutationGroup(
+                mutations=[
+                    mutation.Mutation(
+                        insert=mutation.Mutation.Write(table="table_value")
+                    )
+                ]
+            )
+        ]
+        assert arg == mock_val
+
+
+def test_batch_write_flattened_error():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.batch_write(
+            spanner.BatchWriteRequest(),
+            session="session_value",
+            mutation_groups=[
+                spanner.BatchWriteRequest.MutationGroup(
+                    mutations=[
+                        mutation.Mutation(
+                            insert=mutation.Mutation.Write(table="table_value")
+                        )
+                    ]
+                )
+            ],
+        )
+
+
+@pytest.mark.asyncio
+async def test_batch_write_flattened_async():
+    client = SpannerAsyncClient(
+        credentials=async_anonymous_credentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.batch_write), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = mock.Mock(aio.UnaryStreamCall, autospec=True)
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+ response = await client.batch_write( + session="session_value", + mutation_groups=[ + spanner.BatchWriteRequest.MutationGroup( + mutations=[ + mutation.Mutation( + insert=mutation.Mutation.Write(table="table_value") + ) + ] + ) + ], + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + arg = args[0].session + mock_val = "session_value" + assert arg == mock_val + arg = args[0].mutation_groups + mock_val = [ + spanner.BatchWriteRequest.MutationGroup( + mutations=[ + mutation.Mutation( + insert=mutation.Mutation.Write(table="table_value") + ) + ] + ) + ] + assert arg == mock_val + + +@pytest.mark.asyncio +async def test_batch_write_flattened_error_async(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + await client.batch_write( + spanner.BatchWriteRequest(), + session="session_value", + mutation_groups=[ + spanner.BatchWriteRequest.MutationGroup( + mutations=[ + mutation.Mutation( + insert=mutation.Mutation.Write(table="table_value") + ) + ] + ) + ], + ) + + +def test_create_session_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.create_session in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.create_session] = mock_rpc + + request = {} + client.create_session(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.create_session(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_create_session_rest_required_fields(request_type=spanner.CreateSessionRequest): + transport_class = transports.SpannerRestTransport + + request_init = {} + request_init["database"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).create_session._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["database"] = "database_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).create_session._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "database" in jsonified_request + assert jsonified_request["database"] == "database_value" + + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner.Session() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. + pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner.Session.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.create_session(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_create_session_rest_unset_required_fields(): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.create_session._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "database", + "session", + ) + ) + ) + + +def test_create_session_rest_flattened(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. 
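+    # (client.transport._session is the transport's underlying HTTP session
+    # object; patching its request method intercepts the outgoing REST call.)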
+ with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner.Session() + + # get arguments that satisfy an http rule for this method + sample_request = { + "database": "projects/sample1/instances/sample2/databases/sample3" + } + + # get truthy value for each flattened field + mock_args = dict( + database="database_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = spanner.Session.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.create_session(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{database=projects/*/instances/*/databases/*}/sessions" + % client.transport._host, + args[1], + ) + + +def test_create_session_rest_flattened_error(transport: str = "rest"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.create_session( + spanner.CreateSessionRequest(), + database="database_value", + ) + + +def test_batch_create_sessions_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.batch_create_sessions + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.batch_create_sessions + ] = mock_rpc + + request = {} + client.batch_create_sessions(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.batch_create_sessions(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_batch_create_sessions_rest_required_fields( + request_type=spanner.BatchCreateSessionsRequest, +): + transport_class = transports.SpannerRestTransport + + request_init = {} + request_init["database"] = "" + request_init["session_count"] = 0 + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).batch_create_sessions._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["database"] = "database_value" + jsonified_request["sessionCount"] = 1420 + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).batch_create_sessions._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "database" in jsonified_request + assert jsonified_request["database"] == "database_value" + assert "sessionCount" in jsonified_request + assert jsonified_request["sessionCount"] == 1420 + + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner.BatchCreateSessionsResponse() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
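+            # (The transcode stub below substitutes a fixed uri/method/query_params
+            # mapping for the real HTTP-rule transcoding.)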
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner.BatchCreateSessionsResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.batch_create_sessions(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_batch_create_sessions_rest_unset_required_fields(): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.batch_create_sessions._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "database", + "sessionCount", + ) + ) + ) + + +def test_batch_create_sessions_rest_flattened(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner.BatchCreateSessionsResponse() + + # get arguments that satisfy an http rule for this method + sample_request = { + "database": "projects/sample1/instances/sample2/databases/sample3" + } + + # get truthy value for each flattened field + mock_args = dict( + database="database_value", + session_count=1420, + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = spanner.BatchCreateSessionsResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.batch_create_sessions(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{database=projects/*/instances/*/databases/*}/sessions:batchCreate" + % client.transport._host, + args[1], + ) + + +def test_batch_create_sessions_rest_flattened_error(transport: str = "rest"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.batch_create_sessions( + spanner.BatchCreateSessionsRequest(), + database="database_value", + session_count=1420, + ) + + +def test_get_session_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.get_session in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.get_session] = mock_rpc + + request = {} + client.get_session(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.get_session(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_get_session_rest_required_fields(request_type=spanner.GetSessionRequest): + transport_class = transports.SpannerRestTransport + + request_init = {} + request_init["name"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).get_session._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["name"] = "name_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).get_session._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "name" in jsonified_request + assert jsonified_request["name"] == "name_value" + + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner.Session() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
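+            # get_session maps to an HTTP GET, so the stubbed transcode result
+            # below carries no body; every request field travels as a query
+            # param.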
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "get", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner.Session.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.get_session(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_get_session_rest_unset_required_fields(): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.get_session._get_unset_required_fields({}) + assert set(unset_fields) == (set(()) & set(("name",))) + + +def test_get_session_rest_flattened(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner.Session() + + # get arguments that satisfy an http rule for this method + sample_request = { + "name": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + + # get truthy value for each flattened field + mock_args = dict( + name="name_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = spanner.Session.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.get_session(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{name=projects/*/instances/*/databases/*/sessions/*}" + % client.transport._host, + args[1], + ) + + +def test_get_session_rest_flattened_error(transport: str = "rest"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.get_session( + spanner.GetSessionRequest(), + name="name_value", + ) + + +def test_list_sessions_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.list_sessions in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.list_sessions] = mock_rpc + + request = {} + client.list_sessions(request) + + # Establish that the underlying gRPC stub method was called. + assert mock_rpc.call_count == 1 + + client.list_sessions(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_list_sessions_rest_required_fields(request_type=spanner.ListSessionsRequest): + transport_class = transports.SpannerRestTransport + + request_init = {} + request_init["database"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_sessions._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["database"] = "database_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).list_sessions._get_unset_required_fields(jsonified_request) + # Check that path parameters and body parameters are not mixing in. + assert not set(unset_fields) - set( + ( + "filter", + "page_size", + "page_token", + ) + ) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "database" in jsonified_request + assert jsonified_request["database"] == "database_value" + + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner.ListSessionsResponse() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
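+            # list_sessions is paginated; its optional filter, page_size and
+            # page_token fields were checked above to be query-only params, and
+            # the stubbed GET transcode result below likewise carries no body.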
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "get", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner.ListSessionsResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.list_sessions(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_list_sessions_rest_unset_required_fields(): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.list_sessions._get_unset_required_fields({}) + assert set(unset_fields) == ( + set( + ( + "filter", + "pageSize", + "pageToken", + ) + ) + & set(("database",)) + ) + + +def test_list_sessions_rest_flattened(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner.ListSessionsResponse() + + # get arguments that satisfy an http rule for this method + sample_request = { + "database": "projects/sample1/instances/sample2/databases/sample3" + } + + # get truthy value for each flattened field + mock_args = dict( + database="database_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = spanner.ListSessionsResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.list_sessions(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{database=projects/*/instances/*/databases/*}/sessions" + % client.transport._host, + args[1], + ) + + +def test_list_sessions_rest_flattened_error(transport: str = "rest"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.list_sessions( + spanner.ListSessionsRequest(), + database="database_value", + ) + + +def test_list_sessions_rest_pager(transport: str = "rest"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # TODO(kbandes): remove this mock unless there's a good reason for it. 
+ # with mock.patch.object(path_template, 'transcode') as transcode: + # Set the response as a series of pages + response = ( + spanner.ListSessionsResponse( + sessions=[ + spanner.Session(), + spanner.Session(), + spanner.Session(), + ], + next_page_token="abc", + ), + spanner.ListSessionsResponse( + sessions=[], + next_page_token="def", + ), + spanner.ListSessionsResponse( + sessions=[ + spanner.Session(), + ], + next_page_token="ghi", + ), + spanner.ListSessionsResponse( + sessions=[ + spanner.Session(), + spanner.Session(), + ], + ), + ) + # Two responses for two calls + response = response + response + + # Wrap the values into proper Response objs + response = tuple(spanner.ListSessionsResponse.to_json(x) for x in response) + return_values = tuple(Response() for i in response) + for return_val, response_val in zip(return_values, response): + return_val._content = response_val.encode("UTF-8") + return_val.status_code = 200 + req.side_effect = return_values + + sample_request = { + "database": "projects/sample1/instances/sample2/databases/sample3" + } + + pager = client.list_sessions(request=sample_request) + + results = list(pager) + assert len(results) == 6 + assert all(isinstance(i, spanner.Session) for i in results) + + pages = list(client.list_sessions(request=sample_request).pages) + for page_, token in zip(pages, ["abc", "def", "ghi", ""]): + assert page_.raw_page.next_page_token == token + + +def test_delete_session_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.delete_session in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.delete_session] = mock_rpc + + request = {} + client.delete_session(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.delete_session(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_delete_session_rest_required_fields(request_type=spanner.DeleteSessionRequest): + transport_class = transports.SpannerRestTransport + + request_init = {} + request_init["name"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).delete_session._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["name"] = "name_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).delete_session._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "name" in jsonified_request + assert jsonified_request["name"] == "name_value" + + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = None + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. + pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "delete", + "query_params": pb_request, + } + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = "" + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.delete_session(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_delete_session_rest_unset_required_fields(): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.delete_session._get_unset_required_fields({}) + assert set(unset_fields) == (set(()) & set(("name",))) + + +def test_delete_session_rest_flattened(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. 
+ return_value = None + + # get arguments that satisfy an http rule for this method + sample_request = { + "name": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + + # get truthy value for each flattened field + mock_args = dict( + name="name_value", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = "" + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.delete_session(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{name=projects/*/instances/*/databases/*/sessions/*}" + % client.transport._host, + args[1], + ) + + +def test_delete_session_rest_flattened_error(transport: str = "rest"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.delete_session( + spanner.DeleteSessionRequest(), + name="name_value", + ) + + +def test_execute_sql_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.execute_sql in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.execute_sql] = mock_rpc + + request = {} + client.execute_sql(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.execute_sql(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_execute_sql_rest_required_fields(request_type=spanner.ExecuteSqlRequest): + transport_class = transports.SpannerRestTransport + + request_init = {} + request_init["session"] = "" + request_init["sql"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).execute_sql._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["session"] = "session_value" + jsonified_request["sql"] = "sql_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).execute_sql._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "session" in jsonified_request + assert jsonified_request["session"] == "session_value" + assert "sql" in jsonified_request + assert jsonified_request["sql"] == "sql_value" + + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = result_set.ResultSet() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = result_set.ResultSet.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.execute_sql(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_execute_sql_rest_unset_required_fields(): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.execute_sql._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "session", + "sql", + ) + ) + ) + + +def test_execute_streaming_sql_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert ( + client._transport.execute_streaming_sql + in client._transport._wrapped_methods + ) + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.execute_streaming_sql + ] = mock_rpc + + request = {} + client.execute_streaming_sql(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.execute_streaming_sql(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_execute_streaming_sql_rest_required_fields( + request_type=spanner.ExecuteSqlRequest, +): + transport_class = transports.SpannerRestTransport + + request_init = {} + request_init["session"] = "" + request_init["sql"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).execute_streaming_sql._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["session"] = "session_value" + jsonified_request["sql"] = "sql_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).execute_streaming_sql._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "session" in jsonified_request + assert jsonified_request["session"] == "session_value" + assert "sql" in jsonified_request + assert jsonified_request["sql"] == "sql_value" + + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = result_set.PartialResultSet() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
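+            # Streaming methods return a sequence of PartialResultSet messages;
+            # the single serialized message below is wrapped in a JSON array
+            # and fed through a mocked iter_content to emulate the stream.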
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = result_set.PartialResultSet.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + json_return_value = "[{}]".format(json_return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + with mock.patch.object(response_value, "iter_content") as iter_content: + iter_content.return_value = iter(json_return_value) + response = client.execute_streaming_sql(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_execute_streaming_sql_rest_unset_required_fields(): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.execute_streaming_sql._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "session", + "sql", + ) + ) + ) + + +def test_execute_batch_dml_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.execute_batch_dml in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.execute_batch_dml + ] = mock_rpc + + request = {} + client.execute_batch_dml(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.execute_batch_dml(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_execute_batch_dml_rest_required_fields( + request_type=spanner.ExecuteBatchDmlRequest, +): + transport_class = transports.SpannerRestTransport + + request_init = {} + request_init["session"] = "" + request_init["seqno"] = 0 + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).execute_batch_dml._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["session"] = "session_value" + jsonified_request["seqno"] = 550 + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).execute_batch_dml._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "session" in jsonified_request + assert jsonified_request["session"] == "session_value" + assert "seqno" in jsonified_request + assert jsonified_request["seqno"] == 550 + + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner.ExecuteBatchDmlResponse() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner.ExecuteBatchDmlResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.execute_batch_dml(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_execute_batch_dml_rest_unset_required_fields(): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.execute_batch_dml._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "session", + "transaction", + "statements", + "seqno", + ) + ) + ) + + +def test_read_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.read in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.read] = mock_rpc + + request = {} + client.read(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.read(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_read_rest_required_fields(request_type=spanner.ReadRequest): + transport_class = transports.SpannerRestTransport + + request_init = {} + request_init["session"] = "" + request_init["table"] = "" + request_init["columns"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).read._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["session"] = "session_value" + jsonified_request["table"] = "table_value" + jsonified_request["columns"] = "columns_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).read._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "session" in jsonified_request + assert jsonified_request["session"] == "session_value" + assert "table" in jsonified_request + assert jsonified_request["table"] == "table_value" + assert "columns" in jsonified_request + assert jsonified_request["columns"] == "columns_value" + + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = result_set.ResultSet() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = result_set.ResultSet.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.read(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_read_rest_unset_required_fields(): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.read._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "session", + "table", + "columns", + "keySet", + ) + ) + ) + + +def test_streaming_read_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.streaming_read in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.streaming_read] = mock_rpc + + request = {} + client.streaming_read(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.streaming_read(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_streaming_read_rest_required_fields(request_type=spanner.ReadRequest): + transport_class = transports.SpannerRestTransport + + request_init = {} + request_init["session"] = "" + request_init["table"] = "" + request_init["columns"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).streaming_read._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["session"] = "session_value" + jsonified_request["table"] = "table_value" + jsonified_request["columns"] = "columns_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).streaming_read._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "session" in jsonified_request + assert jsonified_request["session"] == "session_value" + assert "table" in jsonified_request + assert jsonified_request["table"] == "table_value" + assert "columns" in jsonified_request + assert jsonified_request["columns"] == "columns_value" + + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = result_set.PartialResultSet() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
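+            # streaming_read shares spanner.ReadRequest with the unary read
+            # method; only the response plumbing differs, reusing the same
+            # JSON-array-plus-iter_content emulation as execute_streaming_sql.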
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = result_set.PartialResultSet.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + json_return_value = "[{}]".format(json_return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + with mock.patch.object(response_value, "iter_content") as iter_content: + iter_content.return_value = iter(json_return_value) + response = client.streaming_read(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_streaming_read_rest_unset_required_fields(): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.streaming_read._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "session", + "table", + "columns", + "keySet", + ) + ) + ) + + +def test_begin_transaction_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.begin_transaction in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[ + client._transport.begin_transaction + ] = mock_rpc + + request = {} + client.begin_transaction(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.begin_transaction(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_begin_transaction_rest_required_fields( + request_type=spanner.BeginTransactionRequest, +): + transport_class = transports.SpannerRestTransport + + request_init = {} + request_init["session"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).begin_transaction._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["session"] = "session_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).begin_transaction._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "session" in jsonified_request + assert jsonified_request["session"] == "session_value" + + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = transaction.Transaction() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. + pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = transaction.Transaction.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.begin_transaction(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_begin_transaction_rest_unset_required_fields(): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.begin_transaction._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "session", + "options", + ) + ) + ) + + +def test_begin_transaction_rest_flattened(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. 
+ with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = transaction.Transaction() + + # get arguments that satisfy an http rule for this method + sample_request = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + + # get truthy value for each flattened field + mock_args = dict( + session="session_value", + options=transaction.TransactionOptions( + read_write=transaction.TransactionOptions.ReadWrite( + read_lock_mode=transaction.TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC + ) + ), + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = transaction.Transaction.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.begin_transaction(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{session=projects/*/instances/*/databases/*/sessions/*}:beginTransaction" + % client.transport._host, + args[1], + ) + + +def test_begin_transaction_rest_flattened_error(transport: str = "rest"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.begin_transaction( + spanner.BeginTransactionRequest(), + session="session_value", + options=transaction.TransactionOptions( + read_write=transaction.TransactionOptions.ReadWrite( + read_lock_mode=transaction.TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC + ) + ), + ) + + +def test_commit_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.commit in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.commit] = mock_rpc + + request = {} + client.commit(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.commit(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_commit_rest_required_fields(request_type=spanner.CommitRequest): + transport_class = transports.SpannerRestTransport + + request_init = {} + request_init["session"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).commit._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["session"] = "session_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).commit._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "session" in jsonified_request + assert jsonified_request["session"] == "session_value" + + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = commit_response.CommitResponse() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. + pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = commit_response.CommitResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.commit(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_commit_rest_unset_required_fields(): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.commit._get_unset_required_fields({}) + assert set(unset_fields) == (set(()) & set(("session",))) + + +def test_commit_rest_flattened(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. 
+ return_value = commit_response.CommitResponse() + + # get arguments that satisfy an http rule for this method + sample_request = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + + # get truthy value for each flattened field + mock_args = dict( + session="session_value", + mutations=[ + mutation.Mutation(insert=mutation.Mutation.Write(table="table_value")) + ], + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = commit_response.CommitResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.commit(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{session=projects/*/instances/*/databases/*/sessions/*}:commit" + % client.transport._host, + args[1], + ) + + +def test_commit_rest_flattened_error(transport: str = "rest"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.commit( + spanner.CommitRequest(), + session="session_value", + transaction_id=b"transaction_id_blob", + mutations=[ + mutation.Mutation(insert=mutation.Mutation.Write(table="table_value")) + ], + single_use_transaction=transaction.TransactionOptions( + read_write=transaction.TransactionOptions.ReadWrite( + read_lock_mode=transaction.TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC + ) + ), + ) + + +def test_rollback_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.rollback in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.rollback] = mock_rpc + + request = {} + client.rollback(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.rollback(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_rollback_rest_required_fields(request_type=spanner.RollbackRequest): + transport_class = transports.SpannerRestTransport + + request_init = {} + request_init["session"] = "" + request_init["transaction_id"] = b"" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).rollback._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["session"] = "session_value" + jsonified_request["transactionId"] = b"transaction_id_blob" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).rollback._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "session" in jsonified_request + assert jsonified_request["session"] == "session_value" + assert "transactionId" in jsonified_request + assert jsonified_request["transactionId"] == b"transaction_id_blob" + + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = None + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. + pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + json_return_value = "" + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.rollback(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_rollback_rest_unset_required_fields(): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.rollback._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "session", + "transactionId", + ) + ) + ) + + +def test_rollback_rest_flattened(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. 
+ with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = None + + # get arguments that satisfy an http rule for this method + sample_request = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + + # get truthy value for each flattened field + mock_args = dict( + session="session_value", + transaction_id=b"transaction_id_blob", + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + json_return_value = "" + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + client.rollback(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{session=projects/*/instances/*/databases/*/sessions/*}:rollback" + % client.transport._host, + args[1], + ) + + +def test_rollback_rest_flattened_error(transport: str = "rest"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + client.rollback( + spanner.RollbackRequest(), + session="session_value", + transaction_id=b"transaction_id_blob", + ) + + +def test_partition_query_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.partition_query in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.partition_query] = mock_rpc + + request = {} + client.partition_query(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.partition_query(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_partition_query_rest_required_fields( + request_type=spanner.PartitionQueryRequest, +): + transport_class = transports.SpannerRestTransport + + request_init = {} + request_init["session"] = "" + request_init["sql"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).partition_query._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["session"] = "session_value" + jsonified_request["sql"] = "sql_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).partition_query._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "session" in jsonified_request + assert jsonified_request["session"] == "session_value" + assert "sql" in jsonified_request + assert jsonified_request["sql"] == "sql_value" + + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner.PartitionResponse() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner.PartitionResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.partition_query(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_partition_query_rest_unset_required_fields(): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.partition_query._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "session", + "sql", + ) + ) + ) + + +def test_partition_read_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.partition_read in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.partition_read] = mock_rpc + + request = {} + client.partition_read(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.partition_read(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_partition_read_rest_required_fields(request_type=spanner.PartitionReadRequest): + transport_class = transports.SpannerRestTransport + + request_init = {} + request_init["session"] = "" + request_init["table"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).partition_read._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["session"] = "session_value" + jsonified_request["table"] = "table_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).partition_read._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "session" in jsonified_request + assert jsonified_request["session"] == "session_value" + assert "table" in jsonified_request + assert jsonified_request["table"] == "table_value" + + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner.PartitionResponse() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner.PartitionResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + response = client.partition_read(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_partition_read_rest_unset_required_fields(): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.partition_read._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "session", + "table", + "keySet", + ) + ) + ) + + +def test_batch_write_rest_use_cached_wrapped_rpc(): + # Clients should use _prep_wrapped_messages to create cached wrapped rpcs, + # instead of constructing them on each call + with mock.patch("google.api_core.gapic_v1.method.wrap_method") as wrapper_fn: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Should wrap all calls on client creation + assert wrapper_fn.call_count > 0 + wrapper_fn.reset_mock() + + # Ensure method has been cached + assert client._transport.batch_write in client._transport._wrapped_methods + + # Replace cached wrapped function with mock + mock_rpc = mock.Mock() + mock_rpc.return_value.name = ( + "foo" # operation_request.operation in compute client(s) expect a string. + ) + client._transport._wrapped_methods[client._transport.batch_write] = mock_rpc + + request = {} + client.batch_write(request) + + # Establish that the underlying gRPC stub method was called. 
+ assert mock_rpc.call_count == 1 + + client.batch_write(request) + + # Establish that a new wrapper was not created for this call + assert wrapper_fn.call_count == 0 + assert mock_rpc.call_count == 2 + + +def test_batch_write_rest_required_fields(request_type=spanner.BatchWriteRequest): + transport_class = transports.SpannerRestTransport + + request_init = {} + request_init["session"] = "" + request = request_type(**request_init) + pb_request = request_type.pb(request) + jsonified_request = json.loads( + json_format.MessageToJson(pb_request, use_integers_for_enums=False) + ) + + # verify fields with default values are dropped + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).batch_write._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with default values are now present + + jsonified_request["session"] = "session_value" + + unset_fields = transport_class( + credentials=ga_credentials.AnonymousCredentials() + ).batch_write._get_unset_required_fields(jsonified_request) + jsonified_request.update(unset_fields) + + # verify required fields with non-default values are left alone + assert "session" in jsonified_request + assert jsonified_request["session"] == "session_value" + + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + request = request_type(**request_init) + + # Designate an appropriate value for the returned response. + return_value = spanner.BatchWriteResponse() + # Mock the http request call within the method and fake a response. + with mock.patch.object(Session, "request") as req: + # We need to mock transcode() because providing default values + # for required fields will fail the real version if the http_options + # expect actual values for those fields. + with mock.patch.object(path_template, "transcode") as transcode: + # A uri without fields and an empty body will force all the + # request fields to show up in the query_params. 
+ pb_request = request_type.pb(request) + transcode_result = { + "uri": "v1/sample_method", + "method": "post", + "query_params": pb_request, + } + transcode_result["body"] = pb_request + transcode.return_value = transcode_result + + response_value = Response() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner.BatchWriteResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + json_return_value = "[{}]".format(json_return_value) + + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + with mock.patch.object(response_value, "iter_content") as iter_content: + iter_content.return_value = iter(json_return_value) + response = client.batch_write(request) + + expected_params = [("$alt", "json;enum-encoding=int")] + actual_params = req.call_args.kwargs["params"] + assert expected_params == actual_params + + +def test_batch_write_rest_unset_required_fields(): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials + ) + + unset_fields = transport.batch_write._get_unset_required_fields({}) + assert set(unset_fields) == ( + set(()) + & set( + ( + "session", + "mutationGroups", + ) + ) + ) + + +def test_batch_write_rest_flattened(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner.BatchWriteResponse() + + # get arguments that satisfy an http rule for this method + sample_request = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + + # get truthy value for each flattened field + mock_args = dict( + session="session_value", + mutation_groups=[ + spanner.BatchWriteRequest.MutationGroup( + mutations=[ + mutation.Mutation( + insert=mutation.Mutation.Write(table="table_value") + ) + ] + ) + ], + ) + mock_args.update(sample_request) + + # Wrap the value into a proper Response obj + response_value = Response() + response_value.status_code = 200 + # Convert return value to protobuf type + return_value = spanner.BatchWriteResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + json_return_value = "[{}]".format(json_return_value) + response_value._content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + with mock.patch.object(response_value, "iter_content") as iter_content: + iter_content.return_value = iter(json_return_value) + client.batch_write(**mock_args) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(req.mock_calls) == 1 + _, args, _ = req.mock_calls[0] + assert path_template.validate( + "%s/v1/{session=projects/*/instances/*/databases/*/sessions/*}:batchWrite" + % client.transport._host, + args[1], + ) + + +def test_batch_write_rest_flattened_error(transport: str = "rest"): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.batch_write( + spanner.BatchWriteRequest(), + session="session_value", + mutation_groups=[ + spanner.BatchWriteRequest.MutationGroup( + mutations=[ + mutation.Mutation( + insert=mutation.Mutation.Write(table="table_value") + ) + ] + ) + ], + ) + + +def test_credentials_transport_error(): + # It is an error to provide credentials and a transport instance. + transport = transports.SpannerGrpcTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport=transport, + ) + + # It is an error to provide a credentials file and a transport instance. + transport = transports.SpannerGrpcTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = SpannerClient( + client_options={"credentials_file": "credentials.json"}, + transport=transport, + ) + + # It is an error to provide an api_key and a transport instance. + transport = transports.SpannerGrpcTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + options = client_options.ClientOptions() + options.api_key = "api_key" + with pytest.raises(ValueError): + client = SpannerClient( + client_options=options, + transport=transport, + ) + + # It is an error to provide an api_key and a credential. + options = client_options.ClientOptions() + options.api_key = "api_key" + with pytest.raises(ValueError): + client = SpannerClient( + client_options=options, credentials=ga_credentials.AnonymousCredentials() + ) + + # It is an error to provide scopes and a transport instance. + transport = transports.SpannerGrpcTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = SpannerClient( + client_options={"scopes": ["1", "2"]}, + transport=transport, + ) + + +def test_transport_instance(): + # A client may be instantiated with a custom transport instance. + transport = transports.SpannerGrpcTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + client = SpannerClient(transport=transport) + assert client.transport is transport + + +def test_transport_get_channel(): + # A client may be instantiated with a custom transport instance. + transport = transports.SpannerGrpcTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + channel = transport.grpc_channel + assert channel + + transport = transports.SpannerGrpcAsyncIOTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + channel = transport.grpc_channel + assert channel + + +@pytest.mark.parametrize( + "transport_class", + [ + transports.SpannerGrpcTransport, + transports.SpannerGrpcAsyncIOTransport, + transports.SpannerRestTransport, + ], +) +def test_transport_adc(transport_class): + # Test default credentials are used if not provided. + with mock.patch.object(google.auth, "default") as adc: + adc.return_value = (ga_credentials.AnonymousCredentials(), None) + transport_class() + adc.assert_called_once() + + +def test_transport_kind_grpc(): + transport = SpannerClient.get_transport_class("grpc")( + credentials=ga_credentials.AnonymousCredentials() + ) + assert transport.kind == "grpc" + + +def test_initialize_client_w_grpc(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="grpc" + ) + assert client is not None + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. 
+# request == None and no flattened fields passed, work.
+def test_create_session_empty_call_grpc():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Mock the actual call, and fake the request.
+    with mock.patch.object(type(client.transport.create_session), "__call__") as call:
+        call.return_value = spanner.Session()
+        client.create_session(request=None)
+
+        # Establish that the underlying stub method was called.
+        call.assert_called()
+        _, args, _ = call.mock_calls[0]
+        request_msg = spanner.CreateSessionRequest()
+
+        assert args[0] == request_msg
+
+
+# This test is a coverage failsafe to make sure that totally empty calls,
+# i.e. request == None and no flattened fields passed, work.
+def test_batch_create_sessions_empty_call_grpc():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Mock the actual call, and fake the request.
+    with mock.patch.object(
+        type(client.transport.batch_create_sessions), "__call__"
+    ) as call:
+        call.return_value = spanner.BatchCreateSessionsResponse()
+        client.batch_create_sessions(request=None)
+
+        # Establish that the underlying stub method was called.
+        call.assert_called()
+        _, args, _ = call.mock_calls[0]
+        request_msg = spanner.BatchCreateSessionsRequest()
+
+        assert args[0] == request_msg
+
+
+# This test is a coverage failsafe to make sure that totally empty calls,
+# i.e. request == None and no flattened fields passed, work.
+def test_get_session_empty_call_grpc():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Mock the actual call, and fake the request.
+    with mock.patch.object(type(client.transport.get_session), "__call__") as call:
+        call.return_value = spanner.Session()
+        client.get_session(request=None)
+
+        # Establish that the underlying stub method was called.
+        call.assert_called()
+        _, args, _ = call.mock_calls[0]
+        request_msg = spanner.GetSessionRequest()
+
+        assert args[0] == request_msg
+
+
+# This test is a coverage failsafe to make sure that totally empty calls,
+# i.e. request == None and no flattened fields passed, work.
+def test_list_sessions_empty_call_grpc():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Mock the actual call, and fake the request.
+    with mock.patch.object(type(client.transport.list_sessions), "__call__") as call:
+        call.return_value = spanner.ListSessionsResponse()
+        client.list_sessions(request=None)
+
+        # Establish that the underlying stub method was called.
+        call.assert_called()
+        _, args, _ = call.mock_calls[0]
+        request_msg = spanner.ListSessionsRequest()
+
+        assert args[0] == request_msg
+
+
+# This test is a coverage failsafe to make sure that totally empty calls,
+# i.e. request == None and no flattened fields passed, work.
+def test_delete_session_empty_call_grpc():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Mock the actual call, and fake the request.
+    with mock.patch.object(type(client.transport.delete_session), "__call__") as call:
+        call.return_value = None
+        client.delete_session(request=None)
+
+        # Establish that the underlying stub method was called.
+        call.assert_called()
+        _, args, _ = call.mock_calls[0]
+        request_msg = spanner.DeleteSessionRequest()
+
+        assert args[0] == request_msg
+
+
+# This test is a coverage failsafe to make sure that totally empty calls,
+# i.e. request == None and no flattened fields passed, work.
+def test_execute_sql_empty_call_grpc():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Mock the actual call, and fake the request.
+    with mock.patch.object(type(client.transport.execute_sql), "__call__") as call:
+        call.return_value = result_set.ResultSet()
+        client.execute_sql(request=None)
+
+        # Establish that the underlying stub method was called.
+        call.assert_called()
+        _, args, _ = call.mock_calls[0]
+        request_msg = spanner.ExecuteSqlRequest()
+
+        assert args[0] == request_msg
+
+
+# This test is a coverage failsafe to make sure that totally empty calls,
+# i.e. request == None and no flattened fields passed, work.
+def test_execute_streaming_sql_empty_call_grpc():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Mock the actual call, and fake the request.
+    with mock.patch.object(
+        type(client.transport.execute_streaming_sql), "__call__"
+    ) as call:
+        call.return_value = iter([result_set.PartialResultSet()])
+        client.execute_streaming_sql(request=None)
+
+        # Establish that the underlying stub method was called.
+        call.assert_called()
+        _, args, _ = call.mock_calls[0]
+        request_msg = spanner.ExecuteSqlRequest()
+
+        assert args[0] == request_msg
+
+
+# This test is a coverage failsafe to make sure that totally empty calls,
+# i.e. request == None and no flattened fields passed, work.
+def test_execute_batch_dml_empty_call_grpc():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Mock the actual call, and fake the request.
+    with mock.patch.object(
+        type(client.transport.execute_batch_dml), "__call__"
+    ) as call:
+        call.return_value = spanner.ExecuteBatchDmlResponse()
+        client.execute_batch_dml(request=None)
+
+        # Establish that the underlying stub method was called.
+        call.assert_called()
+        _, args, _ = call.mock_calls[0]
+        request_msg = spanner.ExecuteBatchDmlRequest()
+
+        assert args[0] == request_msg
+
+
+# This test is a coverage failsafe to make sure that totally empty calls,
+# i.e. request == None and no flattened fields passed, work.
+def test_read_empty_call_grpc():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Mock the actual call, and fake the request.
+    with mock.patch.object(type(client.transport.read), "__call__") as call:
+        call.return_value = result_set.ResultSet()
+        client.read(request=None)
+
+        # Establish that the underlying stub method was called.
+        call.assert_called()
+        _, args, _ = call.mock_calls[0]
+        request_msg = spanner.ReadRequest()
+
+        assert args[0] == request_msg
+
+
+# This test is a coverage failsafe to make sure that totally empty calls,
+# i.e. request == None and no flattened fields passed, work.
+def test_streaming_read_empty_call_grpc():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Mock the actual call, and fake the request.
+    with mock.patch.object(type(client.transport.streaming_read), "__call__") as call:
+        call.return_value = iter([result_set.PartialResultSet()])
+        client.streaming_read(request=None)
+
+        # Establish that the underlying stub method was called.
+        call.assert_called()
+        _, args, _ = call.mock_calls[0]
+        request_msg = spanner.ReadRequest()
+
+        assert args[0] == request_msg
+
+
+# This test is a coverage failsafe to make sure that totally empty calls,
+# i.e. request == None and no flattened fields passed, work.
+def test_begin_transaction_empty_call_grpc():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Mock the actual call, and fake the request.
+    with mock.patch.object(
+        type(client.transport.begin_transaction), "__call__"
+    ) as call:
+        call.return_value = transaction.Transaction()
+        client.begin_transaction(request=None)
+
+        # Establish that the underlying stub method was called.
+        call.assert_called()
+        _, args, _ = call.mock_calls[0]
+        request_msg = spanner.BeginTransactionRequest()
+
+        assert args[0] == request_msg
+
+
+# This test is a coverage failsafe to make sure that totally empty calls,
+# i.e. request == None and no flattened fields passed, work.
+def test_commit_empty_call_grpc():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Mock the actual call, and fake the request.
+    with mock.patch.object(type(client.transport.commit), "__call__") as call:
+        call.return_value = commit_response.CommitResponse()
+        client.commit(request=None)
+
+        # Establish that the underlying stub method was called.
+        call.assert_called()
+        _, args, _ = call.mock_calls[0]
+        request_msg = spanner.CommitRequest()
+
+        assert args[0] == request_msg
+
+
+# This test is a coverage failsafe to make sure that totally empty calls,
+# i.e. request == None and no flattened fields passed, work.
+def test_rollback_empty_call_grpc():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Mock the actual call, and fake the request.
+    with mock.patch.object(type(client.transport.rollback), "__call__") as call:
+        call.return_value = None
+        client.rollback(request=None)
+
+        # Establish that the underlying stub method was called.
+        call.assert_called()
+        _, args, _ = call.mock_calls[0]
+        request_msg = spanner.RollbackRequest()
+
+        assert args[0] == request_msg
+
+
+# This test is a coverage failsafe to make sure that totally empty calls,
+# i.e. request == None and no flattened fields passed, work.
+def test_partition_query_empty_call_grpc():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Mock the actual call, and fake the request.
+    with mock.patch.object(type(client.transport.partition_query), "__call__") as call:
+        call.return_value = spanner.PartitionResponse()
+        client.partition_query(request=None)
+
+        # Establish that the underlying stub method was called.
+        call.assert_called()
+        _, args, _ = call.mock_calls[0]
+        request_msg = spanner.PartitionQueryRequest()
+
+        assert args[0] == request_msg
+
+
+# This test is a coverage failsafe to make sure that totally empty calls,
+# i.e. request == None and no flattened fields passed, work.
+def test_partition_read_empty_call_grpc():
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(),
+        transport="grpc",
+    )
+
+    # Mock the actual call, and fake the request.
+    with mock.patch.object(type(client.transport.partition_read), "__call__") as call:
+        call.return_value = spanner.PartitionResponse()
+        client.partition_read(request=None)
+
+        # Establish that the underlying stub method was called.
+        call.assert_called()
+        _, args, _ = call.mock_calls[0]
+        request_msg = spanner.PartitionReadRequest()
+
+        assert args[0] == request_msg
+
+
+# This test is a coverage failsafe to make sure that totally empty calls,
+# i.e. request == None and no flattened fields passed, work.
+def test_batch_write_empty_call_grpc(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="grpc", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.batch_write), "__call__") as call: + call.return_value = iter([spanner.BatchWriteResponse()]) + client.batch_write(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.BatchWriteRequest() + + assert args[0] == request_msg + + +def test_transport_kind_grpc_asyncio(): + transport = SpannerAsyncClient.get_transport_class("grpc_asyncio")( + credentials=async_anonymous_credentials() + ) + assert transport.kind == "grpc_asyncio" + + +def test_initialize_client_w_grpc_asyncio(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), transport="grpc_asyncio" + ) + assert client is not None + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_create_session_empty_call_grpc_asyncio(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.create_session), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner.Session( + name="name_value", + creator_role="creator_role_value", + multiplexed=True, + ) + ) + await client.create_session(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.CreateSessionRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_batch_create_sessions_empty_call_grpc_asyncio(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.batch_create_sessions), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner.BatchCreateSessionsResponse() + ) + await client.batch_create_sessions(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.BatchCreateSessionsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_get_session_empty_call_grpc_asyncio(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.get_session), "__call__") as call: + # Designate an appropriate return value for the call. 
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner.Session( + name="name_value", + creator_role="creator_role_value", + multiplexed=True, + ) + ) + await client.get_session(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.GetSessionRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_list_sessions_empty_call_grpc_asyncio(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.list_sessions), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner.ListSessionsResponse( + next_page_token="next_page_token_value", + ) + ) + await client.list_sessions(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.ListSessionsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_delete_session_empty_call_grpc_asyncio(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.delete_session), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + await client.delete_session(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.DeleteSessionRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_execute_sql_empty_call_grpc_asyncio(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.execute_sql), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + result_set.ResultSet() + ) + await client.execute_sql(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.ExecuteSqlRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_execute_streaming_sql_empty_call_grpc_asyncio(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.execute_streaming_sql), "__call__" + ) as call: + # Designate an appropriate return value for the call. 
+ call.return_value = mock.Mock(aio.UnaryStreamCall, autospec=True) + call.return_value.read = mock.AsyncMock( + side_effect=[result_set.PartialResultSet()] + ) + await client.execute_streaming_sql(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.ExecuteSqlRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_execute_batch_dml_empty_call_grpc_asyncio(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.execute_batch_dml), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner.ExecuteBatchDmlResponse() + ) + await client.execute_batch_dml(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.ExecuteBatchDmlRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_read_empty_call_grpc_asyncio(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.read), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + result_set.ResultSet() + ) + await client.read(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.ReadRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_streaming_read_empty_call_grpc_asyncio(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.streaming_read), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = mock.Mock(aio.UnaryStreamCall, autospec=True) + call.return_value.read = mock.AsyncMock( + side_effect=[result_set.PartialResultSet()] + ) + await client.streaming_read(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.ReadRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_begin_transaction_empty_call_grpc_asyncio(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.begin_transaction), "__call__" + ) as call: + # Designate an appropriate return value for the call. 
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + transaction.Transaction( + id=b"id_blob", + ) + ) + await client.begin_transaction(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.BeginTransactionRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_commit_empty_call_grpc_asyncio(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.commit), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + commit_response.CommitResponse() + ) + await client.commit(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.CommitRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_rollback_empty_call_grpc_asyncio(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.rollback), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(None) + await client.rollback(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.RollbackRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_partition_query_empty_call_grpc_asyncio(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.partition_query), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner.PartitionResponse() + ) + await client.partition_query(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.PartitionQueryRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_partition_read_empty_call_grpc_asyncio(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.partition_read), "__call__") as call: + # Designate an appropriate return value for the call. 
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + spanner.PartitionResponse() + ) + await client.partition_read(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.PartitionReadRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +@pytest.mark.asyncio +async def test_batch_write_empty_call_grpc_asyncio(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), + transport="grpc_asyncio", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.batch_write), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = mock.Mock(aio.UnaryStreamCall, autospec=True) + call.return_value.read = mock.AsyncMock( + side_effect=[spanner.BatchWriteResponse()] + ) + await client.batch_write(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.BatchWriteRequest() + + assert args[0] == request_msg + + +def test_transport_kind_rest(): + transport = SpannerClient.get_transport_class("rest")( + credentials=ga_credentials.AnonymousCredentials() + ) + assert transport.kind == "rest" + + +def test_create_session_rest_bad_request(request_type=spanner.CreateSessionRequest): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"database": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.create_session(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.CreateSessionRequest, + dict, + ], +) +def test_create_session_rest_call_success(request_type): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"database": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. 
+ return_value = spanner.Session( + name="name_value", + creator_role="creator_role_value", + multiplexed=True, + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner.Session.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.create_session(request) + + # Establish that the response is the type that we expect. + assert isinstance(response, spanner.Session) + assert response.name == "name_value" + assert response.creator_role == "creator_role_value" + assert response.multiplexed is True + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_create_session_rest_interceptors(null_interceptor): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None if null_interceptor else transports.SpannerRestInterceptor(), + ) + client = SpannerClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.SpannerRestInterceptor, "post_create_session" + ) as post, mock.patch.object( + transports.SpannerRestInterceptor, "post_create_session_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.SpannerRestInterceptor, "pre_create_session" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner.CreateSessionRequest.pb(spanner.CreateSessionRequest()) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = spanner.Session.to_json(spanner.Session()) + req.return_value.content = return_value + + request = spanner.CreateSessionRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = spanner.Session() + post_with_metadata.return_value = spanner.Session(), metadata + + client.create_session( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_batch_create_sessions_rest_bad_request( + request_type=spanner.BatchCreateSessionsRequest, +): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"database": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+    with mock.patch.object(Session, "request") as req, pytest.raises(
+        core_exceptions.BadRequest
+    ):
+        # Wrap the value into a proper Response obj
+        response_value = mock.Mock()
+        json_return_value = ""
+        response_value.json = mock.Mock(return_value={})
+        response_value.status_code = 400
+        response_value.request = mock.Mock()
+        req.return_value = response_value
+        req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+        client.batch_create_sessions(request)
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner.BatchCreateSessionsRequest,
+        dict,
+    ],
+)
+def test_batch_create_sessions_rest_call_success(request_type):
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(), transport="rest"
+    )
+
+    # send a request that will satisfy transcoding
+    request_init = {"database": "projects/sample1/instances/sample2/databases/sample3"}
+    request = request_type(**request_init)
+
+    # Mock the http request call within the method and fake a response.
+    with mock.patch.object(type(client.transport._session), "request") as req:
+        # Designate an appropriate value for the returned response.
+        return_value = spanner.BatchCreateSessionsResponse()
+
+        # Wrap the value into a proper Response obj
+        response_value = mock.Mock()
+        response_value.status_code = 200
+
+        # Convert return value to protobuf type
+        return_value = spanner.BatchCreateSessionsResponse.pb(return_value)
+        json_return_value = json_format.MessageToJson(return_value)
+        response_value.content = json_return_value.encode("UTF-8")
+        req.return_value = response_value
+        req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+        response = client.batch_create_sessions(request)
+
+    # Establish that the response is the type that we expect.
+    assert isinstance(response, spanner.BatchCreateSessionsResponse)
+
+
+@pytest.mark.parametrize("null_interceptor", [True, False])
+def test_batch_create_sessions_rest_interceptors(null_interceptor):
+    transport = transports.SpannerRestTransport(
+        credentials=ga_credentials.AnonymousCredentials(),
+        interceptor=None if null_interceptor else transports.SpannerRestInterceptor(),
+    )
+    client = SpannerClient(transport=transport)
+
+    with mock.patch.object(
+        type(client.transport._session), "request"
+    ) as req, mock.patch.object(
+        path_template, "transcode"
+    ) as transcode, mock.patch.object(
+        transports.SpannerRestInterceptor, "post_batch_create_sessions"
+    ) as post, mock.patch.object(
+        transports.SpannerRestInterceptor, "post_batch_create_sessions_with_metadata"
+    ) as post_with_metadata, mock.patch.object(
+        transports.SpannerRestInterceptor, "pre_batch_create_sessions"
+    ) as pre:
+        pre.assert_not_called()
+        post.assert_not_called()
+        post_with_metadata.assert_not_called()
+        pb_message = spanner.BatchCreateSessionsRequest.pb(
+            spanner.BatchCreateSessionsRequest()
+        )
+        transcode.return_value = {
+            "method": "post",
+            "uri": "my_uri",
+            "body": pb_message,
+            "query_params": pb_message,
+        }
+
+        req.return_value = mock.Mock()
+        req.return_value.status_code = 200
+        req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+        return_value = spanner.BatchCreateSessionsResponse.to_json(
+            spanner.BatchCreateSessionsResponse()
+        )
+        req.return_value.content = return_value
+
+        request = spanner.BatchCreateSessionsRequest()
+        metadata = [
+            ("key", "val"),
+            ("cephalopod", "squid"),
+        ]
+        pre.return_value = request, metadata
+        post.return_value = spanner.BatchCreateSessionsResponse()
+        post_with_metadata.return_value = (
+            spanner.BatchCreateSessionsResponse(),
+            metadata,
+        )
+
+        client.batch_create_sessions(
+            request,
+            metadata=[
+                ("key", "val"),
+                ("cephalopod", "squid"),
+            ],
+        )
+
+        pre.assert_called_once()
+        post.assert_called_once()
+        post_with_metadata.assert_called_once()
+
+
+def test_get_session_rest_bad_request(request_type=spanner.GetSessionRequest):
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(), transport="rest"
+    )
+    # send a request that will satisfy transcoding
+    request_init = {
+        "name": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4"
+    }
+    request = request_type(**request_init)
+
+    # Mock the http request call within the method and fake a BadRequest error.
+    with mock.patch.object(Session, "request") as req, pytest.raises(
+        core_exceptions.BadRequest
+    ):
+        # Wrap the value into a proper Response obj
+        response_value = mock.Mock()
+        json_return_value = ""
+        response_value.json = mock.Mock(return_value={})
+        response_value.status_code = 400
+        response_value.request = mock.Mock()
+        req.return_value = response_value
+        req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+        client.get_session(request)
+
+
+@pytest.mark.parametrize(
+    "request_type",
+    [
+        spanner.GetSessionRequest,
+        dict,
+    ],
+)
+def test_get_session_rest_call_success(request_type):
+    client = SpannerClient(
+        credentials=ga_credentials.AnonymousCredentials(), transport="rest"
+    )
+
+    # send a request that will satisfy transcoding
+    request_init = {
+        "name": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4"
+    }
+    request = request_type(**request_init)
+
+    # Mock the http request call within the method and fake a response.
+    with mock.patch.object(type(client.transport._session), "request") as req:
+        # Designate an appropriate value for the returned response.
+        return_value = spanner.Session(
+            name="name_value",
+            creator_role="creator_role_value",
+            multiplexed=True,
+        )
+
+        # Wrap the value into a proper Response obj
+        response_value = mock.Mock()
+        response_value.status_code = 200
+
+        # Convert return value to protobuf type
+        return_value = spanner.Session.pb(return_value)
+        json_return_value = json_format.MessageToJson(return_value)
+        response_value.content = json_return_value.encode("UTF-8")
+        req.return_value = response_value
+        req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"}
+        response = client.get_session(request)
+
+    # Establish that the response is the type that we expect.
+ assert isinstance(response, spanner.Session) + assert response.name == "name_value" + assert response.creator_role == "creator_role_value" + assert response.multiplexed is True + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_get_session_rest_interceptors(null_interceptor): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None if null_interceptor else transports.SpannerRestInterceptor(), + ) + client = SpannerClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.SpannerRestInterceptor, "post_get_session" + ) as post, mock.patch.object( + transports.SpannerRestInterceptor, "post_get_session_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.SpannerRestInterceptor, "pre_get_session" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner.GetSessionRequest.pb(spanner.GetSessionRequest()) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = spanner.Session.to_json(spanner.Session()) + req.return_value.content = return_value + + request = spanner.GetSessionRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = spanner.Session() + post_with_metadata.return_value = spanner.Session(), metadata + + client.get_session( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_list_sessions_rest_bad_request(request_type=spanner.ListSessionsRequest): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = {"database": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.list_sessions(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.ListSessionsRequest, + dict, + ], +) +def test_list_sessions_rest_call_success(request_type): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = {"database": "projects/sample1/instances/sample2/databases/sample3"} + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. 
+ return_value = spanner.ListSessionsResponse( + next_page_token="next_page_token_value", + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner.ListSessionsResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.list_sessions(request) + + # Establish that the response is the type that we expect. + assert isinstance(response, pagers.ListSessionsPager) + assert response.next_page_token == "next_page_token_value" + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_list_sessions_rest_interceptors(null_interceptor): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None if null_interceptor else transports.SpannerRestInterceptor(), + ) + client = SpannerClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.SpannerRestInterceptor, "post_list_sessions" + ) as post, mock.patch.object( + transports.SpannerRestInterceptor, "post_list_sessions_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.SpannerRestInterceptor, "pre_list_sessions" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner.ListSessionsRequest.pb(spanner.ListSessionsRequest()) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = spanner.ListSessionsResponse.to_json( + spanner.ListSessionsResponse() + ) + req.return_value.content = return_value + + request = spanner.ListSessionsRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = spanner.ListSessionsResponse() + post_with_metadata.return_value = spanner.ListSessionsResponse(), metadata + + client.list_sessions( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_delete_session_rest_bad_request(request_type=spanner.DeleteSessionRequest): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = { + "name": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
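+    # `Session` (presumably requests' Session class, imported at module top)
+    # is patched at class level, so the stubbed 400 response reaches the
+    # transport's error-mapping path and should surface as BadRequest.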
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.delete_session(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.DeleteSessionRequest, + dict, + ], +) +def test_delete_session_rest_call_success(request_type): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = { + "name": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = None + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = "" + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.delete_session(request) + + # Establish that the response is the type that we expect. + assert response is None + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_delete_session_rest_interceptors(null_interceptor): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None if null_interceptor else transports.SpannerRestInterceptor(), + ) + client = SpannerClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.SpannerRestInterceptor, "pre_delete_session" + ) as pre: + pre.assert_not_called() + pb_message = spanner.DeleteSessionRequest.pb(spanner.DeleteSessionRequest()) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + request = spanner.DeleteSessionRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + + client.delete_session( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + + +def test_execute_sql_rest_bad_request(request_type=spanner.ExecuteSqlRequest): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.execute_sql(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.ExecuteSqlRequest, + dict, + ], +) +def test_execute_sql_rest_call_success(request_type): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = result_set.ResultSet() + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = result_set.ResultSet.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.execute_sql(request) + + # Establish that the response is the type that we expect. + assert isinstance(response, result_set.ResultSet) + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_execute_sql_rest_interceptors(null_interceptor): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None if null_interceptor else transports.SpannerRestInterceptor(), + ) + client = SpannerClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.SpannerRestInterceptor, "post_execute_sql" + ) as post, mock.patch.object( + transports.SpannerRestInterceptor, "post_execute_sql_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.SpannerRestInterceptor, "pre_execute_sql" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner.ExecuteSqlRequest.pb(spanner.ExecuteSqlRequest()) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = result_set.ResultSet.to_json(result_set.ResultSet()) + req.return_value.content = return_value + + request = spanner.ExecuteSqlRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = result_set.ResultSet() + post_with_metadata.return_value = result_set.ResultSet(), metadata + + client.execute_sql( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + 
post_with_metadata.assert_called_once() + + +def test_execute_streaming_sql_rest_bad_request(request_type=spanner.ExecuteSqlRequest): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.execute_streaming_sql(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.ExecuteSqlRequest, + dict, + ], +) +def test_execute_streaming_sql_rest_call_success(request_type): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = result_set.PartialResultSet( + chunked_value=True, + resume_token=b"resume_token_blob", + last=True, + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = result_set.PartialResultSet.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + json_return_value = "[{}]".format(json_return_value) + response_value.iter_content = mock.Mock(return_value=iter(json_return_value)) + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.execute_streaming_sql(request) + + assert isinstance(response, Iterable) + response = next(response) + + # Establish that the response is the type that we expect. 
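+    # chunked_value means the final value continues in the next chunk,
+    # resume_token lets an interrupted stream be resumed, and last marks the
+    # final PartialResultSet; all three should survive decoding the first element.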
+ assert isinstance(response, result_set.PartialResultSet) + assert response.chunked_value is True + assert response.resume_token == b"resume_token_blob" + assert response.last is True + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_execute_streaming_sql_rest_interceptors(null_interceptor): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None if null_interceptor else transports.SpannerRestInterceptor(), + ) + client = SpannerClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.SpannerRestInterceptor, "post_execute_streaming_sql" + ) as post, mock.patch.object( + transports.SpannerRestInterceptor, "post_execute_streaming_sql_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.SpannerRestInterceptor, "pre_execute_streaming_sql" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner.ExecuteSqlRequest.pb(spanner.ExecuteSqlRequest()) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = result_set.PartialResultSet.to_json( + result_set.PartialResultSet() + ) + req.return_value.iter_content = mock.Mock(return_value=iter(return_value)) + + request = spanner.ExecuteSqlRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = result_set.PartialResultSet() + post_with_metadata.return_value = result_set.PartialResultSet(), metadata + + client.execute_streaming_sql( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_execute_batch_dml_rest_bad_request( + request_type=spanner.ExecuteBatchDmlRequest, +): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.execute_batch_dml(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.ExecuteBatchDmlRequest, + dict, + ], +) +def test_execute_batch_dml_rest_call_success(request_type): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner.ExecuteBatchDmlResponse() + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner.ExecuteBatchDmlResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.execute_batch_dml(request) + + # Establish that the response is the type that we expect. + assert isinstance(response, spanner.ExecuteBatchDmlResponse) + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_execute_batch_dml_rest_interceptors(null_interceptor): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None if null_interceptor else transports.SpannerRestInterceptor(), + ) + client = SpannerClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.SpannerRestInterceptor, "post_execute_batch_dml" + ) as post, mock.patch.object( + transports.SpannerRestInterceptor, "post_execute_batch_dml_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.SpannerRestInterceptor, "pre_execute_batch_dml" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner.ExecuteBatchDmlRequest.pb(spanner.ExecuteBatchDmlRequest()) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = spanner.ExecuteBatchDmlResponse.to_json( + spanner.ExecuteBatchDmlResponse() + ) + req.return_value.content = return_value + + request = spanner.ExecuteBatchDmlRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = spanner.ExecuteBatchDmlResponse() + post_with_metadata.return_value = spanner.ExecuteBatchDmlResponse(), metadata + + 
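+        # Issuing the call should now exercise the full interceptor chain:
+        # pre_* before the request, then post_* and the *_with_metadata
+        # variant after the response, as asserted below.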
client.execute_batch_dml( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_read_rest_bad_request(request_type=spanner.ReadRequest): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.read(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.ReadRequest, + dict, + ], +) +def test_read_rest_call_success(request_type): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = result_set.ResultSet() + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = result_set.ResultSet.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.read(request) + + # Establish that the response is the type that we expect. 
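+    # Unlike streaming_read, the unary read RPC returns a single ResultSet.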
+ assert isinstance(response, result_set.ResultSet) + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_read_rest_interceptors(null_interceptor): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None if null_interceptor else transports.SpannerRestInterceptor(), + ) + client = SpannerClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.SpannerRestInterceptor, "post_read" + ) as post, mock.patch.object( + transports.SpannerRestInterceptor, "post_read_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.SpannerRestInterceptor, "pre_read" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner.ReadRequest.pb(spanner.ReadRequest()) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = result_set.ResultSet.to_json(result_set.ResultSet()) + req.return_value.content = return_value + + request = spanner.ReadRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = result_set.ResultSet() + post_with_metadata.return_value = result_set.ResultSet(), metadata + + client.read( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_streaming_read_rest_bad_request(request_type=spanner.ReadRequest): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.streaming_read(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.ReadRequest, + dict, + ], +) +def test_streaming_read_rest_call_success(request_type): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. 
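+        # Server-streaming REST responses arrive as a JSON array; the test
+        # wraps the single message in "[...]" and feeds it through
+        # iter_content below to mimic the wire format.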
+ return_value = result_set.PartialResultSet( + chunked_value=True, + resume_token=b"resume_token_blob", + last=True, + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = result_set.PartialResultSet.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + json_return_value = "[{}]".format(json_return_value) + response_value.iter_content = mock.Mock(return_value=iter(json_return_value)) + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.streaming_read(request) + + assert isinstance(response, Iterable) + response = next(response) + + # Establish that the response is the type that we expect. + assert isinstance(response, result_set.PartialResultSet) + assert response.chunked_value is True + assert response.resume_token == b"resume_token_blob" + assert response.last is True + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_streaming_read_rest_interceptors(null_interceptor): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None if null_interceptor else transports.SpannerRestInterceptor(), + ) + client = SpannerClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.SpannerRestInterceptor, "post_streaming_read" + ) as post, mock.patch.object( + transports.SpannerRestInterceptor, "post_streaming_read_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.SpannerRestInterceptor, "pre_streaming_read" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner.ReadRequest.pb(spanner.ReadRequest()) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = result_set.PartialResultSet.to_json( + result_set.PartialResultSet() + ) + req.return_value.iter_content = mock.Mock(return_value=iter(return_value)) + + request = spanner.ReadRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = result_set.PartialResultSet() + post_with_metadata.return_value = result_set.PartialResultSet(), metadata + + client.streaming_read( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_begin_transaction_rest_bad_request( + request_type=spanner.BeginTransactionRequest, +): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.begin_transaction(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.BeginTransactionRequest, + dict, + ], +) +def test_begin_transaction_rest_call_success(request_type): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = transaction.Transaction( + id=b"id_blob", + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = transaction.Transaction.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.begin_transaction(request) + + # Establish that the response is the type that we expect. + assert isinstance(response, transaction.Transaction) + assert response.id == b"id_blob" + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_begin_transaction_rest_interceptors(null_interceptor): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None if null_interceptor else transports.SpannerRestInterceptor(), + ) + client = SpannerClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.SpannerRestInterceptor, "post_begin_transaction" + ) as post, mock.patch.object( + transports.SpannerRestInterceptor, "post_begin_transaction_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.SpannerRestInterceptor, "pre_begin_transaction" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner.BeginTransactionRequest.pb( + spanner.BeginTransactionRequest() + ) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = transaction.Transaction.to_json(transaction.Transaction()) + req.return_value.content = return_value + + request = spanner.BeginTransactionRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = transaction.Transaction() + post_with_metadata.return_value = transaction.Transaction(), metadata + + 
client.begin_transaction( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_commit_rest_bad_request(request_type=spanner.CommitRequest): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.commit(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.CommitRequest, + dict, + ], +) +def test_commit_rest_call_success(request_type): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = commit_response.CommitResponse() + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = commit_response.CommitResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.commit(request) + + # Establish that the response is the type that we expect. 
+ assert isinstance(response, commit_response.CommitResponse) + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_commit_rest_interceptors(null_interceptor): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None if null_interceptor else transports.SpannerRestInterceptor(), + ) + client = SpannerClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.SpannerRestInterceptor, "post_commit" + ) as post, mock.patch.object( + transports.SpannerRestInterceptor, "post_commit_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.SpannerRestInterceptor, "pre_commit" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner.CommitRequest.pb(spanner.CommitRequest()) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = commit_response.CommitResponse.to_json( + commit_response.CommitResponse() + ) + req.return_value.content = return_value + + request = spanner.CommitRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = commit_response.CommitResponse() + post_with_metadata.return_value = commit_response.CommitResponse(), metadata + + client.commit( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_rollback_rest_bad_request(request_type=spanner.RollbackRequest): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.rollback(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.RollbackRequest, + dict, + ], +) +def test_rollback_rest_call_success(request_type): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. 
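+        # Rollback returns google.protobuf.Empty over the wire, so the
+        # client method is expected to return None.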
+ return_value = None + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + json_return_value = "" + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.rollback(request) + + # Establish that the response is the type that we expect. + assert response is None + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_rollback_rest_interceptors(null_interceptor): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None if null_interceptor else transports.SpannerRestInterceptor(), + ) + client = SpannerClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.SpannerRestInterceptor, "pre_rollback" + ) as pre: + pre.assert_not_called() + pb_message = spanner.RollbackRequest.pb(spanner.RollbackRequest()) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + + request = spanner.RollbackRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + + client.rollback( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + + +def test_partition_query_rest_bad_request(request_type=spanner.PartitionQueryRequest): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.partition_query(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.PartitionQueryRequest, + dict, + ], +) +def test_partition_query_rest_call_success(request_type): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. 
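+        # The PartitionResponse carries partition tokens that callers can
+        # use to execute the query in parallel.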
+ return_value = spanner.PartitionResponse() + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner.PartitionResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.partition_query(request) + + # Establish that the response is the type that we expect. + assert isinstance(response, spanner.PartitionResponse) + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_partition_query_rest_interceptors(null_interceptor): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None if null_interceptor else transports.SpannerRestInterceptor(), + ) + client = SpannerClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.SpannerRestInterceptor, "post_partition_query" + ) as post, mock.patch.object( + transports.SpannerRestInterceptor, "post_partition_query_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.SpannerRestInterceptor, "pre_partition_query" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner.PartitionQueryRequest.pb(spanner.PartitionQueryRequest()) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = spanner.PartitionResponse.to_json(spanner.PartitionResponse()) + req.return_value.content = return_value + + request = spanner.PartitionQueryRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = spanner.PartitionResponse() + post_with_metadata.return_value = spanner.PartitionResponse(), metadata + + client.partition_query( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_partition_read_rest_bad_request(request_type=spanner.PartitionReadRequest): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. 
+ with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.partition_read(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.PartitionReadRequest, + dict, + ], +) +def test_partition_read_rest_call_success(request_type): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner.PartitionResponse() + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner.PartitionResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + response_value.content = json_return_value.encode("UTF-8") + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.partition_read(request) + + # Establish that the response is the type that we expect. + assert isinstance(response, spanner.PartitionResponse) + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_partition_read_rest_interceptors(null_interceptor): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None if null_interceptor else transports.SpannerRestInterceptor(), + ) + client = SpannerClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.SpannerRestInterceptor, "post_partition_read" + ) as post, mock.patch.object( + transports.SpannerRestInterceptor, "post_partition_read_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.SpannerRestInterceptor, "pre_partition_read" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner.PartitionReadRequest.pb(spanner.PartitionReadRequest()) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = spanner.PartitionResponse.to_json(spanner.PartitionResponse()) + req.return_value.content = return_value + + request = spanner.PartitionReadRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = spanner.PartitionResponse() + post_with_metadata.return_value = spanner.PartitionResponse(), metadata + + client.partition_read( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + 
) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_batch_write_rest_bad_request(request_type=spanner.BatchWriteRequest): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a BadRequest error. + with mock.patch.object(Session, "request") as req, pytest.raises( + core_exceptions.BadRequest + ): + # Wrap the value into a proper Response obj + response_value = mock.Mock() + json_return_value = "" + response_value.json = mock.Mock(return_value={}) + response_value.status_code = 400 + response_value.request = mock.Mock() + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + client.batch_write(request) + + +@pytest.mark.parametrize( + "request_type", + [ + spanner.BatchWriteRequest, + dict, + ], +) +def test_batch_write_rest_call_success(request_type): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + + # send a request that will satisfy transcoding + request_init = { + "session": "projects/sample1/instances/sample2/databases/sample3/sessions/sample4" + } + request = request_type(**request_init) + + # Mock the http request call within the method and fake a response. + with mock.patch.object(type(client.transport._session), "request") as req: + # Designate an appropriate value for the returned response. + return_value = spanner.BatchWriteResponse( + indexes=[752], + ) + + # Wrap the value into a proper Response obj + response_value = mock.Mock() + response_value.status_code = 200 + + # Convert return value to protobuf type + return_value = spanner.BatchWriteResponse.pb(return_value) + json_return_value = json_format.MessageToJson(return_value) + json_return_value = "[{}]".format(json_return_value) + response_value.iter_content = mock.Mock(return_value=iter(json_return_value)) + req.return_value = response_value + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + response = client.batch_write(request) + + assert isinstance(response, Iterable) + response = next(response) + + # Establish that the response is the type that we expect. 
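+    # indexes should identify which mutation groups from the request were
+    # applied by this streamed response chunk.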
+ assert isinstance(response, spanner.BatchWriteResponse) + assert response.indexes == [752] + + +@pytest.mark.parametrize("null_interceptor", [True, False]) +def test_batch_write_rest_interceptors(null_interceptor): + transport = transports.SpannerRestTransport( + credentials=ga_credentials.AnonymousCredentials(), + interceptor=None if null_interceptor else transports.SpannerRestInterceptor(), + ) + client = SpannerClient(transport=transport) + + with mock.patch.object( + type(client.transport._session), "request" + ) as req, mock.patch.object( + path_template, "transcode" + ) as transcode, mock.patch.object( + transports.SpannerRestInterceptor, "post_batch_write" + ) as post, mock.patch.object( + transports.SpannerRestInterceptor, "post_batch_write_with_metadata" + ) as post_with_metadata, mock.patch.object( + transports.SpannerRestInterceptor, "pre_batch_write" + ) as pre: + pre.assert_not_called() + post.assert_not_called() + post_with_metadata.assert_not_called() + pb_message = spanner.BatchWriteRequest.pb(spanner.BatchWriteRequest()) + transcode.return_value = { + "method": "post", + "uri": "my_uri", + "body": pb_message, + "query_params": pb_message, + } + + req.return_value = mock.Mock() + req.return_value.status_code = 200 + req.return_value.headers = {"header-1": "value-1", "header-2": "value-2"} + return_value = spanner.BatchWriteResponse.to_json(spanner.BatchWriteResponse()) + req.return_value.iter_content = mock.Mock(return_value=iter(return_value)) + + request = spanner.BatchWriteRequest() + metadata = [ + ("key", "val"), + ("cephalopod", "squid"), + ] + pre.return_value = request, metadata + post.return_value = spanner.BatchWriteResponse() + post_with_metadata.return_value = spanner.BatchWriteResponse(), metadata + + client.batch_write( + request, + metadata=[ + ("key", "val"), + ("cephalopod", "squid"), + ], + ) + + pre.assert_called_once() + post.assert_called_once() + post_with_metadata.assert_called_once() + + +def test_initialize_client_w_rest(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + assert client is not None + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_create_session_empty_call_rest(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.create_session), "__call__") as call: + client.create_session(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.CreateSessionRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_batch_create_sessions_empty_call_rest(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.batch_create_sessions), "__call__" + ) as call: + client.batch_create_sessions(request=None) + + # Establish that the underlying stub method was called. 
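+        # With request=None the client should synthesize a default
+        # BatchCreateSessionsRequest, captured as the first positional arg.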
+ call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.BatchCreateSessionsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_get_session_empty_call_rest(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.get_session), "__call__") as call: + client.get_session(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.GetSessionRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_list_sessions_empty_call_rest(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.list_sessions), "__call__") as call: + client.list_sessions(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.ListSessionsRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_delete_session_empty_call_rest(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.delete_session), "__call__") as call: + client.delete_session(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.DeleteSessionRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_execute_sql_empty_call_rest(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.execute_sql), "__call__") as call: + client.execute_sql(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.ExecuteSqlRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_execute_streaming_sql_empty_call_rest(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.execute_streaming_sql), "__call__" + ) as call: + client.execute_streaming_sql(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.ExecuteSqlRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. 
request == None and no flattened fields passed, work. +def test_execute_batch_dml_empty_call_rest(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.execute_batch_dml), "__call__" + ) as call: + client.execute_batch_dml(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.ExecuteBatchDmlRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_read_empty_call_rest(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.read), "__call__") as call: + client.read(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.ReadRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_streaming_read_empty_call_rest(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.streaming_read), "__call__") as call: + client.streaming_read(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.ReadRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_begin_transaction_empty_call_rest(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object( + type(client.transport.begin_transaction), "__call__" + ) as call: + client.begin_transaction(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.BeginTransactionRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_commit_empty_call_rest(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.commit), "__call__") as call: + client.commit(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.CommitRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_rollback_empty_call_rest(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. 
+ with mock.patch.object(type(client.transport.rollback), "__call__") as call: + client.rollback(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.RollbackRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_partition_query_empty_call_rest(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.partition_query), "__call__") as call: + client.partition_query(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.PartitionQueryRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_partition_read_empty_call_rest(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.partition_read), "__call__") as call: + client.partition_read(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.PartitionReadRequest() + + assert args[0] == request_msg + + +# This test is a coverage failsafe to make sure that totally empty calls, +# i.e. request == None and no flattened fields passed, work. +def test_batch_write_empty_call_rest(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + transport="rest", + ) + + # Mock the actual call, and fake the request. + with mock.patch.object(type(client.transport.batch_write), "__call__") as call: + client.batch_write(request=None) + + # Establish that the underlying stub method was called. + call.assert_called() + _, args, _ = call.mock_calls[0] + request_msg = spanner.BatchWriteRequest() + + assert args[0] == request_msg + + +def test_transport_grpc_default(): + # A client should use the gRPC transport by default. + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + ) + assert isinstance( + client.transport, + transports.SpannerGrpcTransport, + ) + + +def test_spanner_base_transport_error(): + # Passing both a credentials object and credentials_file should raise an error + with pytest.raises(core_exceptions.DuplicateCredentialArgs): + transport = transports.SpannerTransport( + credentials=ga_credentials.AnonymousCredentials(), + credentials_file="credentials.json", + ) + + +def test_spanner_base_transport(): + # Instantiate the base transport. + with mock.patch( + "google.cloud.spanner_v1.services.spanner.transports.SpannerTransport.__init__" + ) as Transport: + Transport.return_value = None + transport = transports.SpannerTransport( + credentials=ga_credentials.AnonymousCredentials(), + ) + + # Every method on the transport should just blindly + # raise NotImplementedError. 
+ methods = ( + "create_session", + "batch_create_sessions", + "get_session", + "list_sessions", + "delete_session", + "execute_sql", + "execute_streaming_sql", + "execute_batch_dml", + "read", + "streaming_read", + "begin_transaction", + "commit", + "rollback", + "partition_query", + "partition_read", + "batch_write", + ) + for method in methods: + with pytest.raises(NotImplementedError): + getattr(transport, method)(request=object()) + + with pytest.raises(NotImplementedError): + transport.close() + + # Catch all for all remaining methods and properties + remainder = [ + "kind", + ] + for r in remainder: + with pytest.raises(NotImplementedError): + getattr(transport, r)() + + +def test_spanner_base_transport_with_credentials_file(): + # Instantiate the base transport with a credentials file + with mock.patch.object( + google.auth, "load_credentials_from_file", autospec=True + ) as load_creds, mock.patch( + "google.cloud.spanner_v1.services.spanner.transports.SpannerTransport._prep_wrapped_messages" + ) as Transport: + Transport.return_value = None + load_creds.return_value = (ga_credentials.AnonymousCredentials(), None) + transport = transports.SpannerTransport( + credentials_file="credentials.json", + quota_project_id="octopus", + ) + load_creds.assert_called_once_with( + "credentials.json", + scopes=None, + default_scopes=( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.data", + ), + quota_project_id="octopus", + ) + + +def test_spanner_base_transport_with_adc(): + # Test the default credentials are used if credentials and credentials_file are None. + with mock.patch.object(google.auth, "default", autospec=True) as adc, mock.patch( + "google.cloud.spanner_v1.services.spanner.transports.SpannerTransport._prep_wrapped_messages" + ) as Transport: + Transport.return_value = None + adc.return_value = (ga_credentials.AnonymousCredentials(), None) + transport = transports.SpannerTransport() + adc.assert_called_once() + + +def test_spanner_auth_adc(): + # If no credentials are provided, we should use ADC credentials. + with mock.patch.object(google.auth, "default", autospec=True) as adc: + adc.return_value = (ga_credentials.AnonymousCredentials(), None) + SpannerClient() + adc.assert_called_once_with( + scopes=None, + default_scopes=( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.data", + ), + quota_project_id=None, + ) + + +@pytest.mark.parametrize( + "transport_class", + [ + transports.SpannerGrpcTransport, + transports.SpannerGrpcAsyncIOTransport, + ], +) +def test_spanner_transport_auth_adc(transport_class): + # If credentials and host are not provided, the transport class should use + # ADC credentials. 
+ with mock.patch.object(google.auth, "default", autospec=True) as adc: + adc.return_value = (ga_credentials.AnonymousCredentials(), None) + transport_class(quota_project_id="octopus", scopes=["1", "2"]) + adc.assert_called_once_with( + scopes=["1", "2"], + default_scopes=( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.data", + ), + quota_project_id="octopus", + ) + + +@pytest.mark.parametrize( + "transport_class", + [ + transports.SpannerGrpcTransport, + transports.SpannerGrpcAsyncIOTransport, + transports.SpannerRestTransport, + ], +) +def test_spanner_transport_auth_gdch_credentials(transport_class): + host = "https://language.com" + api_audience_tests = [None, "https://language2.com"] + api_audience_expect = [host, "https://language2.com"] + for t, e in zip(api_audience_tests, api_audience_expect): + with mock.patch.object(google.auth, "default", autospec=True) as adc: + gdch_mock = mock.MagicMock() + type(gdch_mock).with_gdch_audience = mock.PropertyMock( + return_value=gdch_mock + ) + adc.return_value = (gdch_mock, None) + transport_class(host=host, api_audience=t) + gdch_mock.with_gdch_audience.assert_called_once_with(e) + + +@pytest.mark.parametrize( + "transport_class,grpc_helpers", + [ + (transports.SpannerGrpcTransport, grpc_helpers), + (transports.SpannerGrpcAsyncIOTransport, grpc_helpers_async), + ], +) +def test_spanner_transport_create_channel(transport_class, grpc_helpers): + # If credentials and host are not provided, the transport class should use + # ADC credentials. + with mock.patch.object( + google.auth, "default", autospec=True + ) as adc, mock.patch.object( + grpc_helpers, "create_channel", autospec=True + ) as create_channel: + creds = ga_credentials.AnonymousCredentials() + adc.return_value = (creds, None) + transport_class(quota_project_id="octopus", scopes=["1", "2"]) + + create_channel.assert_called_with( + "spanner.googleapis.com:443", + credentials=creds, + credentials_file=None, + quota_project_id="octopus", + default_scopes=( + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/spanner.data", + ), + scopes=["1", "2"], + default_host="spanner.googleapis.com", + ssl_credentials=None, + options=[ + ("grpc.max_send_message_length", -1), + ("grpc.max_receive_message_length", -1), + ], + ) + + +@pytest.mark.parametrize( + "transport_class", + [transports.SpannerGrpcTransport, transports.SpannerGrpcAsyncIOTransport], +) +def test_spanner_grpc_transport_client_cert_source_for_mtls(transport_class): + cred = ga_credentials.AnonymousCredentials() + + # Check ssl_channel_credentials is used if provided. + with mock.patch.object(transport_class, "create_channel") as mock_create_channel: + mock_ssl_channel_creds = mock.Mock() + transport_class( + host="squid.clam.whelk", + credentials=cred, + ssl_channel_credentials=mock_ssl_channel_creds, + ) + mock_create_channel.assert_called_once_with( + "squid.clam.whelk:443", + credentials=cred, + credentials_file=None, + scopes=None, + ssl_credentials=mock_ssl_channel_creds, + quota_project_id=None, + options=[ + ("grpc.max_send_message_length", -1), + ("grpc.max_receive_message_length", -1), + ], + ) + + # Check if ssl_channel_credentials is not provided, then client_cert_source_for_mtls + # is used. 
+ with mock.patch.object(transport_class, "create_channel", return_value=mock.Mock()): + with mock.patch("grpc.ssl_channel_credentials") as mock_ssl_cred: + transport_class( + credentials=cred, + client_cert_source_for_mtls=client_cert_source_callback, + ) + expected_cert, expected_key = client_cert_source_callback() + mock_ssl_cred.assert_called_once_with( + certificate_chain=expected_cert, private_key=expected_key + ) + + +def test_spanner_http_transport_client_cert_source_for_mtls(): + cred = ga_credentials.AnonymousCredentials() + with mock.patch( + "google.auth.transport.requests.AuthorizedSession.configure_mtls_channel" + ) as mock_configure_mtls_channel: + transports.SpannerRestTransport( + credentials=cred, client_cert_source_for_mtls=client_cert_source_callback + ) + mock_configure_mtls_channel.assert_called_once_with(client_cert_source_callback) + + +@pytest.mark.parametrize( + "transport_name", + [ + "grpc", + "grpc_asyncio", + "rest", + ], +) +def test_spanner_host_no_port(transport_name): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + client_options=client_options.ClientOptions( + api_endpoint="spanner.googleapis.com" + ), + transport=transport_name, + ) + assert client.transport._host == ( + "spanner.googleapis.com:443" + if transport_name in ["grpc", "grpc_asyncio"] + else "https://spanner.googleapis.com" + ) + + +@pytest.mark.parametrize( + "transport_name", + [ + "grpc", + "grpc_asyncio", + "rest", + ], +) +def test_spanner_host_with_port(transport_name): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + client_options=client_options.ClientOptions( + api_endpoint="spanner.googleapis.com:8000" + ), + transport=transport_name, + ) + assert client.transport._host == ( + "spanner.googleapis.com:8000" + if transport_name in ["grpc", "grpc_asyncio"] + else "https://spanner.googleapis.com:8000" + ) + + +@pytest.mark.parametrize( + "transport_name", + [ + "rest", + ], +) +def test_spanner_client_transport_session_collision(transport_name): + creds1 = ga_credentials.AnonymousCredentials() + creds2 = ga_credentials.AnonymousCredentials() + client1 = SpannerClient( + credentials=creds1, + transport=transport_name, + ) + client2 = SpannerClient( + credentials=creds2, + transport=transport_name, + ) + session1 = client1.transport.create_session._session + session2 = client2.transport.create_session._session + assert session1 != session2 + session1 = client1.transport.batch_create_sessions._session + session2 = client2.transport.batch_create_sessions._session + assert session1 != session2 + session1 = client1.transport.get_session._session + session2 = client2.transport.get_session._session + assert session1 != session2 + session1 = client1.transport.list_sessions._session + session2 = client2.transport.list_sessions._session + assert session1 != session2 + session1 = client1.transport.delete_session._session + session2 = client2.transport.delete_session._session + assert session1 != session2 + session1 = client1.transport.execute_sql._session + session2 = client2.transport.execute_sql._session + assert session1 != session2 + session1 = client1.transport.execute_streaming_sql._session + session2 = client2.transport.execute_streaming_sql._session + assert session1 != session2 + session1 = client1.transport.execute_batch_dml._session + session2 = client2.transport.execute_batch_dml._session + assert session1 != session2 + session1 = client1.transport.read._session + session2 = client2.transport.read._session + assert 
session1 != session2
+    session1 = client1.transport.streaming_read._session
+    session2 = client2.transport.streaming_read._session
+    assert session1 != session2
+    session1 = client1.transport.begin_transaction._session
+    session2 = client2.transport.begin_transaction._session
+    assert session1 != session2
+    session1 = client1.transport.commit._session
+    session2 = client2.transport.commit._session
+    assert session1 != session2
+    session1 = client1.transport.rollback._session
+    session2 = client2.transport.rollback._session
+    assert session1 != session2
+    session1 = client1.transport.partition_query._session
+    session2 = client2.transport.partition_query._session
+    assert session1 != session2
+    session1 = client1.transport.partition_read._session
+    session2 = client2.transport.partition_read._session
+    assert session1 != session2
+    session1 = client1.transport.batch_write._session
+    session2 = client2.transport.batch_write._session
+    assert session1 != session2
+
+
+def test_spanner_grpc_transport_channel():
+    channel = grpc.secure_channel("http://localhost/", grpc.local_channel_credentials())
+
+    # Check that channel is used if provided.
+    transport = transports.SpannerGrpcTransport(
+        host="squid.clam.whelk",
+        channel=channel,
+    )
+    assert transport.grpc_channel == channel
+    assert transport._host == "squid.clam.whelk:443"
+    assert transport._ssl_channel_credentials is None
+
+
+def test_spanner_grpc_asyncio_transport_channel():
+    channel = aio.secure_channel("http://localhost/", grpc.local_channel_credentials())
+
+    # Check that channel is used if provided.
+    transport = transports.SpannerGrpcAsyncIOTransport(
+        host="squid.clam.whelk",
+        channel=channel,
+    )
+    assert transport.grpc_channel == channel
+    assert transport._host == "squid.clam.whelk:443"
+    assert transport._ssl_channel_credentials is None
+
+
+# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are
+# removed from grpc/grpc_asyncio transport constructor.
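Both channel tests just above rely on the gRPC transports accepting a pre-built channel, in which case no application-default-credential lookup takes place (note that neither test mocks ``google.auth.default``). A minimal standalone sketch of the same pattern, assuming only ``grpc`` and the generated ``transports`` module are importable:

.. code-block:: python

    import grpc

    from google.cloud.spanner_v1.services.spanner import transports

    # Local channel credentials give a typed credential object without any
    # real TLS material, which is enough to construct the transport offline.
    channel = grpc.secure_channel("localhost:443", grpc.local_channel_credentials())

    # Supplying `channel` short-circuits credential resolution; the transport
    # simply adopts the channel and records the (port-normalized) host.
    transport = transports.SpannerGrpcTransport(
        host="squid.clam.whelk",
        channel=channel,
    )
    assert transport.grpc_channel is channel
    assert transport._host == "squid.clam.whelk:443"

The deprecation note above applies to the two parametrized mTLS tests that follow.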
+@pytest.mark.parametrize( + "transport_class", + [transports.SpannerGrpcTransport, transports.SpannerGrpcAsyncIOTransport], +) +def test_spanner_transport_channel_mtls_with_client_cert_source(transport_class): + with mock.patch( + "grpc.ssl_channel_credentials", autospec=True + ) as grpc_ssl_channel_cred: + with mock.patch.object( + transport_class, "create_channel" + ) as grpc_create_channel: + mock_ssl_cred = mock.Mock() + grpc_ssl_channel_cred.return_value = mock_ssl_cred + + mock_grpc_channel = mock.Mock() + grpc_create_channel.return_value = mock_grpc_channel + + cred = ga_credentials.AnonymousCredentials() + with pytest.warns(DeprecationWarning): + with mock.patch.object(google.auth, "default") as adc: + adc.return_value = (cred, None) + transport = transport_class( + host="squid.clam.whelk", + api_mtls_endpoint="mtls.squid.clam.whelk", + client_cert_source=client_cert_source_callback, + ) + adc.assert_called_once() + + grpc_ssl_channel_cred.assert_called_once_with( + certificate_chain=b"cert bytes", private_key=b"key bytes" + ) + grpc_create_channel.assert_called_once_with( + "mtls.squid.clam.whelk:443", + credentials=cred, + credentials_file=None, + scopes=None, + ssl_credentials=mock_ssl_cred, + quota_project_id=None, + options=[ + ("grpc.max_send_message_length", -1), + ("grpc.max_receive_message_length", -1), + ], + ) + assert transport.grpc_channel == mock_grpc_channel + assert transport._ssl_channel_credentials == mock_ssl_cred + + +# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are +# removed from grpc/grpc_asyncio transport constructor. +@pytest.mark.parametrize( + "transport_class", + [transports.SpannerGrpcTransport, transports.SpannerGrpcAsyncIOTransport], +) +def test_spanner_transport_channel_mtls_with_adc(transport_class): + mock_ssl_cred = mock.Mock() + with mock.patch.multiple( + "google.auth.transport.grpc.SslCredentials", + __init__=mock.Mock(return_value=None), + ssl_credentials=mock.PropertyMock(return_value=mock_ssl_cred), + ): + with mock.patch.object( + transport_class, "create_channel" + ) as grpc_create_channel: + mock_grpc_channel = mock.Mock() + grpc_create_channel.return_value = mock_grpc_channel + mock_cred = mock.Mock() + + with pytest.warns(DeprecationWarning): + transport = transport_class( + host="squid.clam.whelk", + credentials=mock_cred, + api_mtls_endpoint="mtls.squid.clam.whelk", + client_cert_source=None, + ) + + grpc_create_channel.assert_called_once_with( + "mtls.squid.clam.whelk:443", + credentials=mock_cred, + credentials_file=None, + scopes=None, + ssl_credentials=mock_ssl_cred, + quota_project_id=None, + options=[ + ("grpc.max_send_message_length", -1), + ("grpc.max_receive_message_length", -1), + ], + ) + assert transport.grpc_channel == mock_grpc_channel + + +def test_database_path(): + project = "squid" + instance = "clam" + database = "whelk" + expected = "projects/{project}/instances/{instance}/databases/{database}".format( + project=project, + instance=instance, + database=database, + ) + actual = SpannerClient.database_path(project, instance, database) + assert expected == actual + + +def test_parse_database_path(): + expected = { + "project": "octopus", + "instance": "oyster", + "database": "nudibranch", + } + path = SpannerClient.database_path(**expected) + + # Check that the path construction is reversible. 
+ actual = SpannerClient.parse_database_path(path) + assert expected == actual + + +def test_session_path(): + project = "cuttlefish" + instance = "mussel" + database = "winkle" + session = "nautilus" + expected = "projects/{project}/instances/{instance}/databases/{database}/sessions/{session}".format( + project=project, + instance=instance, + database=database, + session=session, + ) + actual = SpannerClient.session_path(project, instance, database, session) + assert expected == actual + + +def test_parse_session_path(): + expected = { + "project": "scallop", + "instance": "abalone", + "database": "squid", + "session": "clam", + } + path = SpannerClient.session_path(**expected) + + # Check that the path construction is reversible. + actual = SpannerClient.parse_session_path(path) + assert expected == actual + + +def test_common_billing_account_path(): + billing_account = "whelk" + expected = "billingAccounts/{billing_account}".format( + billing_account=billing_account, + ) + actual = SpannerClient.common_billing_account_path(billing_account) + assert expected == actual + + +def test_parse_common_billing_account_path(): + expected = { + "billing_account": "octopus", + } + path = SpannerClient.common_billing_account_path(**expected) + + # Check that the path construction is reversible. + actual = SpannerClient.parse_common_billing_account_path(path) + assert expected == actual + + +def test_common_folder_path(): + folder = "oyster" + expected = "folders/{folder}".format( + folder=folder, + ) + actual = SpannerClient.common_folder_path(folder) + assert expected == actual + + +def test_parse_common_folder_path(): + expected = { + "folder": "nudibranch", + } + path = SpannerClient.common_folder_path(**expected) + + # Check that the path construction is reversible. + actual = SpannerClient.parse_common_folder_path(path) + assert expected == actual + + +def test_common_organization_path(): + organization = "cuttlefish" + expected = "organizations/{organization}".format( + organization=organization, + ) + actual = SpannerClient.common_organization_path(organization) + assert expected == actual + + +def test_parse_common_organization_path(): + expected = { + "organization": "mussel", + } + path = SpannerClient.common_organization_path(**expected) + + # Check that the path construction is reversible. + actual = SpannerClient.parse_common_organization_path(path) + assert expected == actual + + +def test_common_project_path(): + project = "winkle" + expected = "projects/{project}".format( + project=project, + ) + actual = SpannerClient.common_project_path(project) + assert expected == actual + + +def test_parse_common_project_path(): + expected = { + "project": "nautilus", + } + path = SpannerClient.common_project_path(**expected) + + # Check that the path construction is reversible. + actual = SpannerClient.parse_common_project_path(path) + assert expected == actual + + +def test_common_location_path(): + project = "scallop" + location = "abalone" + expected = "projects/{project}/locations/{location}".format( + project=project, + location=location, + ) + actual = SpannerClient.common_location_path(project, location) + assert expected == actual + + +def test_parse_common_location_path(): + expected = { + "project": "squid", + "location": "clam", + } + path = SpannerClient.common_location_path(**expected) + + # Check that the path construction is reversible. 
+ actual = SpannerClient.parse_common_location_path(path) + assert expected == actual + + +def test_client_with_default_client_info(): + client_info = gapic_v1.client_info.ClientInfo() + + with mock.patch.object( + transports.SpannerTransport, "_prep_wrapped_messages" + ) as prep: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), + client_info=client_info, + ) + prep.assert_called_once_with(client_info) + + with mock.patch.object( + transports.SpannerTransport, "_prep_wrapped_messages" + ) as prep: + transport_class = SpannerClient.get_transport_class() + transport = transport_class( + credentials=ga_credentials.AnonymousCredentials(), + client_info=client_info, + ) + prep.assert_called_once_with(client_info) + + +def test_transport_close_grpc(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="grpc" + ) + with mock.patch.object( + type(getattr(client.transport, "_grpc_channel")), "close" + ) as close: + with client: + close.assert_not_called() + close.assert_called_once() + + +@pytest.mark.asyncio +async def test_transport_close_grpc_asyncio(): + client = SpannerAsyncClient( + credentials=async_anonymous_credentials(), transport="grpc_asyncio" + ) + with mock.patch.object( + type(getattr(client.transport, "_grpc_channel")), "close" + ) as close: + async with client: + close.assert_not_called() + close.assert_called_once() + + +def test_transport_close_rest(): + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport="rest" + ) + with mock.patch.object( + type(getattr(client.transport, "_session")), "close" + ) as close: + with client: + close.assert_not_called() + close.assert_called_once() + + +def test_client_ctx(): + transports = [ + "rest", + "grpc", + ] + for transport in transports: + client = SpannerClient( + credentials=ga_credentials.AnonymousCredentials(), transport=transport + ) + # Test client calls underlying transport. 
+ with mock.patch.object(type(client.transport), "close") as close: + close.assert_not_called() + with client: + pass + close.assert_called() + + +@pytest.mark.parametrize( + "client_class,transport_class", + [ + (SpannerClient, transports.SpannerGrpcTransport), + (SpannerAsyncClient, transports.SpannerGrpcAsyncIOTransport), + ], +) +def test_api_key_credentials(client_class, transport_class): + with mock.patch.object( + google.auth._default, "get_api_key_credentials", create=True + ) as get_api_key_credentials: + mock_cred = mock.Mock() + get_api_key_credentials.return_value = mock_cred + options = client_options.ClientOptions() + options.api_key = "api_key" + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=mock_cred, + credentials_file=None, + host=client._DEFAULT_ENDPOINT_TEMPLATE.format( + UNIVERSE_DOMAIN=client._DEFAULT_UNIVERSE + ), + scopes=None, + client_cert_source_for_mtls=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + always_use_jwt_access=True, + api_audience=None, + metrics_interceptor=mock.ANY, + ) diff --git a/tests/unit/gapic/v1/test_database_admin_client_v1.py b/tests/unit/gapic/v1/test_database_admin_client_v1.py deleted file mode 100644 index dec787ae89..0000000000 --- a/tests/unit/gapic/v1/test_database_admin_client_v1.py +++ /dev/null @@ -1,842 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
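The legacy tests in the two files deleted below predate the microgenerator surface: rather than patching individual transport methods as the tests above do, they stub the gRPC channel itself and let the old GAPIC client run end to end against canned responses. A condensed sketch of that stub pattern (a hypothetical simplification, which also avoids the mutable default argument ``responses=[]`` used in the originals):

.. code-block:: python

    class MultiCallableStub:
        """Records each request and replays the next canned response."""

        def __init__(self, method, channel_stub):
            self.method = method
            self.channel_stub = channel_stub

        def __call__(self, request, timeout=None, metadata=None, credentials=None):
            self.channel_stub.requests.append((self.method, request))
            response = (
                self.channel_stub.responses.pop()
                if self.channel_stub.responses
                else None
            )
            if isinstance(response, Exception):
                raise response
            return response


    class ChannelStub:
        """Stub for the grpc.Channel interface."""

        def __init__(self, responses=None):
            # Defaulting to None (rather than `responses=[]`) keeps separate
            # instances from sharing a single response list.
            self.responses = list(responses or [])
            self.requests = []

        def unary_unary(self, method, request_serializer=None, response_deserializer=None):
            return MultiCallableStub(method, self)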
- -"""Unit tests.""" - -import mock -import pytest - -from google.rpc import status_pb2 - -from google.cloud import spanner_admin_database_v1 -from google.cloud.spanner_admin_database_v1.proto import backup_pb2 -from google.cloud.spanner_admin_database_v1.proto import spanner_database_admin_pb2 -from google.iam.v1 import iam_policy_pb2 -from google.iam.v1 import policy_pb2 -from google.longrunning import operations_pb2 -from google.protobuf import empty_pb2 -from google.protobuf import field_mask_pb2 - - -class MultiCallableStub(object): - """Stub for the grpc.UnaryUnaryMultiCallable interface.""" - - def __init__(self, method, channel_stub): - self.method = method - self.channel_stub = channel_stub - - def __call__(self, request, timeout=None, metadata=None, credentials=None): - self.channel_stub.requests.append((self.method, request)) - - response = None - if self.channel_stub.responses: - response = self.channel_stub.responses.pop() - - if isinstance(response, Exception): - raise response - - if response: - return response - - -class ChannelStub(object): - """Stub for the grpc.Channel interface.""" - - def __init__(self, responses=[]): - self.responses = responses - self.requests = [] - - def unary_unary(self, method, request_serializer=None, response_deserializer=None): - return MultiCallableStub(method, self) - - -class CustomException(Exception): - pass - - -class TestDatabaseAdminClient(object): - def test_create_database(self): - # Setup Expected Response - name = "name3373707" - expected_response = {"name": name} - expected_response = spanner_database_admin_pb2.Database(**expected_response) - operation = operations_pb2.Operation( - name="operations/test_create_database", done=True - ) - operation.response.Pack(expected_response) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - parent = client.instance_path("[PROJECT]", "[INSTANCE]") - create_statement = "createStatement552974828" - - response = client.create_database(parent, create_statement) - result = response.result() - assert expected_response == result - - assert len(channel.requests) == 1 - expected_request = spanner_database_admin_pb2.CreateDatabaseRequest( - parent=parent, create_statement=create_statement - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_create_database_exception(self): - # Setup Response - error = status_pb2.Status() - operation = operations_pb2.Operation( - name="operations/test_create_database_exception", done=True - ) - operation.error.CopyFrom(error) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - parent = client.instance_path("[PROJECT]", "[INSTANCE]") - create_statement = "createStatement552974828" - - response = client.create_database(parent, create_statement) - exception = response.exception() - assert exception.errors[0] == error - - def test_get_database(self): - # Setup Expected Response - name_2 = "name2-1052831874" - expected_response = {"name": name_2} - expected_response = spanner_database_admin_pb2.Database(**expected_response) - - # Mock the API 
response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - name = client.database_path("[PROJECT]", "[INSTANCE]", "[DATABASE]") - - response = client.get_database(name) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = spanner_database_admin_pb2.GetDatabaseRequest(name=name) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_get_database_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup request - name = client.database_path("[PROJECT]", "[INSTANCE]", "[DATABASE]") - - with pytest.raises(CustomException): - client.get_database(name) - - def test_update_database_ddl(self): - # Setup Expected Response - expected_response = {} - expected_response = empty_pb2.Empty(**expected_response) - operation = operations_pb2.Operation( - name="operations/test_update_database_ddl", done=True - ) - operation.response.Pack(expected_response) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - database = client.database_path("[PROJECT]", "[INSTANCE]", "[DATABASE]") - statements = [] - - response = client.update_database_ddl(database, statements) - result = response.result() - assert expected_response == result - - assert len(channel.requests) == 1 - expected_request = spanner_database_admin_pb2.UpdateDatabaseDdlRequest( - database=database, statements=statements - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_update_database_ddl_exception(self): - # Setup Response - error = status_pb2.Status() - operation = operations_pb2.Operation( - name="operations/test_update_database_ddl_exception", done=True - ) - operation.error.CopyFrom(error) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - database = client.database_path("[PROJECT]", "[INSTANCE]", "[DATABASE]") - statements = [] - - response = client.update_database_ddl(database, statements) - exception = response.exception() - assert exception.errors[0] == error - - def test_drop_database(self): - channel = ChannelStub() - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - database = client.database_path("[PROJECT]", "[INSTANCE]", "[DATABASE]") - - client.drop_database(database) - - assert len(channel.requests) == 1 - expected_request = spanner_database_admin_pb2.DropDatabaseRequest( - database=database - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - 
def test_drop_database_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup request - database = client.database_path("[PROJECT]", "[INSTANCE]", "[DATABASE]") - - with pytest.raises(CustomException): - client.drop_database(database) - - def test_get_database_ddl(self): - # Setup Expected Response - expected_response = {} - expected_response = spanner_database_admin_pb2.GetDatabaseDdlResponse( - **expected_response - ) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - database = client.database_path("[PROJECT]", "[INSTANCE]", "[DATABASE]") - - response = client.get_database_ddl(database) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = spanner_database_admin_pb2.GetDatabaseDdlRequest( - database=database - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_get_database_ddl_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup request - database = client.database_path("[PROJECT]", "[INSTANCE]", "[DATABASE]") - - with pytest.raises(CustomException): - client.get_database_ddl(database) - - def test_set_iam_policy(self): - # Setup Expected Response - version = 351608024 - etag = b"21" - expected_response = {"version": version, "etag": etag} - expected_response = policy_pb2.Policy(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - resource = "resource-341064690" - policy = {} - - response = client.set_iam_policy(resource, policy) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = iam_policy_pb2.SetIamPolicyRequest( - resource=resource, policy=policy - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_set_iam_policy_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup request - resource = "resource-341064690" - policy = {} - - with pytest.raises(CustomException): - client.set_iam_policy(resource, policy) - - def test_get_iam_policy(self): - # Setup Expected Response - version = 351608024 - etag = b"21" - expected_response = {"version": version, "etag": etag} - expected_response = policy_pb2.Policy(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = 
mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - resource = "resource-341064690" - - response = client.get_iam_policy(resource) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = iam_policy_pb2.GetIamPolicyRequest(resource=resource) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_get_iam_policy_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup request - resource = "resource-341064690" - - with pytest.raises(CustomException): - client.get_iam_policy(resource) - - def test_test_iam_permissions(self): - # Setup Expected Response - expected_response = {} - expected_response = iam_policy_pb2.TestIamPermissionsResponse( - **expected_response - ) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - resource = "resource-341064690" - permissions = [] - - response = client.test_iam_permissions(resource, permissions) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = iam_policy_pb2.TestIamPermissionsRequest( - resource=resource, permissions=permissions - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_test_iam_permissions_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup request - resource = "resource-341064690" - permissions = [] - - with pytest.raises(CustomException): - client.test_iam_permissions(resource, permissions) - - def test_create_backup(self): - # Setup Expected Response - database = "database1789464955" - name = "name3373707" - size_bytes = 1796325715 - expected_response = { - "database": database, - "name": name, - "size_bytes": size_bytes, - } - expected_response = backup_pb2.Backup(**expected_response) - operation = operations_pb2.Operation( - name="operations/test_create_backup", done=True - ) - operation.response.Pack(expected_response) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - parent = client.instance_path("[PROJECT]", "[INSTANCE]") - backup_id = "backupId1355353272" - backup = {} - - response = client.create_backup(parent, backup_id, backup) - result = response.result() - assert expected_response == result - - assert len(channel.requests) == 1 - expected_request = backup_pb2.CreateBackupRequest( - parent=parent, backup_id=backup_id, backup=backup - ) - actual_request = channel.requests[0][1] - assert expected_request == 
actual_request - - def test_create_backup_exception(self): - # Setup Response - error = status_pb2.Status() - operation = operations_pb2.Operation( - name="operations/test_create_backup_exception", done=True - ) - operation.error.CopyFrom(error) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - parent = client.instance_path("[PROJECT]", "[INSTANCE]") - backup_id = "backupId1355353272" - backup = {} - - response = client.create_backup(parent, backup_id, backup) - exception = response.exception() - assert exception.errors[0] == error - - def test_get_backup(self): - # Setup Expected Response - database = "database1789464955" - name_2 = "name2-1052831874" - size_bytes = 1796325715 - expected_response = { - "database": database, - "name": name_2, - "size_bytes": size_bytes, - } - expected_response = backup_pb2.Backup(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - name = client.backup_path("[PROJECT]", "[INSTANCE]", "[BACKUP]") - - response = client.get_backup(name) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = backup_pb2.GetBackupRequest(name=name) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_get_backup_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup request - name = client.backup_path("[PROJECT]", "[INSTANCE]", "[BACKUP]") - - with pytest.raises(CustomException): - client.get_backup(name) - - def test_update_backup(self): - # Setup Expected Response - database = "database1789464955" - name = "name3373707" - size_bytes = 1796325715 - expected_response = { - "database": database, - "name": name, - "size_bytes": size_bytes, - } - expected_response = backup_pb2.Backup(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - backup = {} - update_mask = {} - - response = client.update_backup(backup, update_mask) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = backup_pb2.UpdateBackupRequest( - backup=backup, update_mask=update_mask - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_update_backup_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup request - backup = {} - update_mask = {} - - with 
pytest.raises(CustomException): - client.update_backup(backup, update_mask) - - def test_delete_backup(self): - channel = ChannelStub() - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - name = client.backup_path("[PROJECT]", "[INSTANCE]", "[BACKUP]") - - client.delete_backup(name) - - assert len(channel.requests) == 1 - expected_request = backup_pb2.DeleteBackupRequest(name=name) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_delete_backup_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup request - name = client.backup_path("[PROJECT]", "[INSTANCE]", "[BACKUP]") - - with pytest.raises(CustomException): - client.delete_backup(name) - - def test_list_backups(self): - # Setup Expected Response - next_page_token = "" - backups_element = {} - backups = [backups_element] - expected_response = {"next_page_token": next_page_token, "backups": backups} - expected_response = backup_pb2.ListBackupsResponse(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - parent = client.instance_path("[PROJECT]", "[INSTANCE]") - - paged_list_response = client.list_backups(parent) - resources = list(paged_list_response) - assert len(resources) == 1 - - assert expected_response.backups[0] == resources[0] - - assert len(channel.requests) == 1 - expected_request = backup_pb2.ListBackupsRequest(parent=parent) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_list_backups_exception(self): - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup request - parent = client.instance_path("[PROJECT]", "[INSTANCE]") - - paged_list_response = client.list_backups(parent) - with pytest.raises(CustomException): - list(paged_list_response) - - def test_restore_database(self): - # Setup Expected Response - name = "name3373707" - expected_response = {"name": name} - expected_response = spanner_database_admin_pb2.Database(**expected_response) - operation = operations_pb2.Operation( - name="operations/test_restore_database", done=True - ) - operation.response.Pack(expected_response) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - parent = client.instance_path("[PROJECT]", "[INSTANCE]") - database_id = "databaseId816491103" - - response = client.restore_database(parent, database_id) - result = response.result() - assert expected_response == result - - assert len(channel.requests) == 1 - 
expected_request = spanner_database_admin_pb2.RestoreDatabaseRequest( - parent=parent, database_id=database_id - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_restore_database_exception(self): - # Setup Response - error = status_pb2.Status() - operation = operations_pb2.Operation( - name="operations/test_restore_database_exception", done=True - ) - operation.error.CopyFrom(error) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - parent = client.instance_path("[PROJECT]", "[INSTANCE]") - database_id = "databaseId816491103" - - response = client.restore_database(parent, database_id) - exception = response.exception() - assert exception.errors[0] == error - - def test_list_database_operations(self): - # Setup Expected Response - next_page_token = "" - operations_element = {} - operations = [operations_element] - expected_response = { - "next_page_token": next_page_token, - "operations": operations, - } - expected_response = spanner_database_admin_pb2.ListDatabaseOperationsResponse( - **expected_response - ) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - parent = client.instance_path("[PROJECT]", "[INSTANCE]") - - paged_list_response = client.list_database_operations(parent) - resources = list(paged_list_response) - assert len(resources) == 1 - - assert expected_response.operations[0] == resources[0] - - assert len(channel.requests) == 1 - expected_request = spanner_database_admin_pb2.ListDatabaseOperationsRequest( - parent=parent - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_list_database_operations_exception(self): - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup request - parent = client.instance_path("[PROJECT]", "[INSTANCE]") - - paged_list_response = client.list_database_operations(parent) - with pytest.raises(CustomException): - list(paged_list_response) - - def test_list_backup_operations(self): - # Setup Expected Response - next_page_token = "" - operations_element = {} - operations = [operations_element] - expected_response = { - "next_page_token": next_page_token, - "operations": operations, - } - expected_response = backup_pb2.ListBackupOperationsResponse(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - parent = client.instance_path("[PROJECT]", "[INSTANCE]") - - paged_list_response = client.list_backup_operations(parent) - resources = list(paged_list_response) - assert len(resources) == 1 - - assert expected_response.operations[0] == resources[0] - - assert len(channel.requests) == 1 - 
expected_request = backup_pb2.ListBackupOperationsRequest(parent=parent) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_list_backup_operations_exception(self): - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup request - parent = client.instance_path("[PROJECT]", "[INSTANCE]") - - paged_list_response = client.list_backup_operations(parent) - with pytest.raises(CustomException): - list(paged_list_response) - - def test_list_databases(self): - # Setup Expected Response - next_page_token = "" - databases_element = {} - databases = [databases_element] - expected_response = {"next_page_token": next_page_token, "databases": databases} - expected_response = spanner_database_admin_pb2.ListDatabasesResponse( - **expected_response - ) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup Request - parent = client.instance_path("[PROJECT]", "[INSTANCE]") - - paged_list_response = client.list_databases(parent) - resources = list(paged_list_response) - assert len(resources) == 1 - - assert expected_response.databases[0] == resources[0] - - assert len(channel.requests) == 1 - expected_request = spanner_database_admin_pb2.ListDatabasesRequest( - parent=parent - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_list_databases_exception(self): - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_database_v1.DatabaseAdminClient() - - # Setup request - parent = client.instance_path("[PROJECT]", "[INSTANCE]") - - paged_list_response = client.list_databases(parent) - with pytest.raises(CustomException): - list(paged_list_response) diff --git a/tests/unit/gapic/v1/test_instance_admin_client_v1.py b/tests/unit/gapic/v1/test_instance_admin_client_v1.py deleted file mode 100644 index da8dfcd8d4..0000000000 --- a/tests/unit/gapic/v1/test_instance_admin_client_v1.py +++ /dev/null @@ -1,538 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
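The instance-admin tests deleted below exercise long-running operations the same way the database-admin tests above do: a completed ``google.longrunning.Operation`` message is queued on the channel stub up front, so ``response.result()`` (or ``response.exception()``) resolves without polling. A minimal sketch of preparing both outcomes (the ``ok_op``/``failed_op`` names are illustrative only):

.. code-block:: python

    from google.longrunning import operations_pb2
    from google.protobuf import empty_pb2
    from google.rpc import status_pb2

    # Success: pack the expected result message into the Any-typed `response`
    # field of an already-done operation.
    ok_op = operations_pb2.Operation(name="operations/sketch_ok", done=True)
    ok_op.response.Pack(empty_pb2.Empty())

    # Failure: copy a google.rpc.Status into the `error` field instead; the
    # deleted tests then assert it surfaces as `response.exception().errors[0]`.
    failed_op = operations_pb2.Operation(name="operations/sketch_err", done=True)
    failed_op.error.CopyFrom(status_pb2.Status())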
- -"""Unit tests.""" - -import mock -import pytest - -from google.rpc import status_pb2 - -from google.cloud import spanner_admin_instance_v1 -from google.cloud.spanner_admin_instance_v1.proto import spanner_instance_admin_pb2 -from google.iam.v1 import iam_policy_pb2 -from google.iam.v1 import policy_pb2 -from google.longrunning import operations_pb2 -from google.protobuf import empty_pb2 -from google.protobuf import field_mask_pb2 - - -class MultiCallableStub(object): - """Stub for the grpc.UnaryUnaryMultiCallable interface.""" - - def __init__(self, method, channel_stub): - self.method = method - self.channel_stub = channel_stub - - def __call__(self, request, timeout=None, metadata=None, credentials=None): - self.channel_stub.requests.append((self.method, request)) - - response = None - if self.channel_stub.responses: - response = self.channel_stub.responses.pop() - - if isinstance(response, Exception): - raise response - - if response: - return response - - -class ChannelStub(object): - """Stub for the grpc.Channel interface.""" - - def __init__(self, responses=[]): - self.responses = responses - self.requests = [] - - def unary_unary(self, method, request_serializer=None, response_deserializer=None): - return MultiCallableStub(method, self) - - -class CustomException(Exception): - pass - - -class TestInstanceAdminClient(object): - def test_list_instance_configs(self): - # Setup Expected Response - next_page_token = "" - instance_configs_element = {} - instance_configs = [instance_configs_element] - expected_response = { - "next_page_token": next_page_token, - "instance_configs": instance_configs, - } - expected_response = spanner_instance_admin_pb2.ListInstanceConfigsResponse( - **expected_response - ) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_instance_v1.InstanceAdminClient() - - # Setup Request - parent = client.project_path("[PROJECT]") - - paged_list_response = client.list_instance_configs(parent) - resources = list(paged_list_response) - assert len(resources) == 1 - - assert expected_response.instance_configs[0] == resources[0] - - assert len(channel.requests) == 1 - expected_request = spanner_instance_admin_pb2.ListInstanceConfigsRequest( - parent=parent - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_list_instance_configs_exception(self): - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_instance_v1.InstanceAdminClient() - - # Setup request - parent = client.project_path("[PROJECT]") - - paged_list_response = client.list_instance_configs(parent) - with pytest.raises(CustomException): - list(paged_list_response) - - def test_get_instance_config(self): - # Setup Expected Response - name_2 = "name2-1052831874" - display_name = "displayName1615086568" - expected_response = {"name": name_2, "display_name": display_name} - expected_response = spanner_instance_admin_pb2.InstanceConfig( - **expected_response - ) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = 
spanner_admin_instance_v1.InstanceAdminClient() - - # Setup Request - name = client.instance_config_path("[PROJECT]", "[INSTANCE_CONFIG]") - - response = client.get_instance_config(name) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = spanner_instance_admin_pb2.GetInstanceConfigRequest( - name=name - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_get_instance_config_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_instance_v1.InstanceAdminClient() - - # Setup request - name = client.instance_config_path("[PROJECT]", "[INSTANCE_CONFIG]") - - with pytest.raises(CustomException): - client.get_instance_config(name) - - def test_list_instances(self): - # Setup Expected Response - next_page_token = "" - instances_element = {} - instances = [instances_element] - expected_response = {"next_page_token": next_page_token, "instances": instances} - expected_response = spanner_instance_admin_pb2.ListInstancesResponse( - **expected_response - ) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_instance_v1.InstanceAdminClient() - - # Setup Request - parent = client.project_path("[PROJECT]") - - paged_list_response = client.list_instances(parent) - resources = list(paged_list_response) - assert len(resources) == 1 - - assert expected_response.instances[0] == resources[0] - - assert len(channel.requests) == 1 - expected_request = spanner_instance_admin_pb2.ListInstancesRequest( - parent=parent - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_list_instances_exception(self): - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_instance_v1.InstanceAdminClient() - - # Setup request - parent = client.project_path("[PROJECT]") - - paged_list_response = client.list_instances(parent) - with pytest.raises(CustomException): - list(paged_list_response) - - def test_get_instance(self): - # Setup Expected Response - name_2 = "name2-1052831874" - config = "config-1354792126" - display_name = "displayName1615086568" - node_count = 1539922066 - expected_response = { - "name": name_2, - "config": config, - "display_name": display_name, - "node_count": node_count, - } - expected_response = spanner_instance_admin_pb2.Instance(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_instance_v1.InstanceAdminClient() - - # Setup Request - name = client.instance_path("[PROJECT]", "[INSTANCE]") - - response = client.get_instance(name) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = spanner_instance_admin_pb2.GetInstanceRequest(name=name) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def 
test_get_instance_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_instance_v1.InstanceAdminClient() - - # Setup request - name = client.instance_path("[PROJECT]", "[INSTANCE]") - - with pytest.raises(CustomException): - client.get_instance(name) - - def test_create_instance(self): - # Setup Expected Response - name = "name3373707" - config = "config-1354792126" - display_name = "displayName1615086568" - node_count = 1539922066 - expected_response = { - "name": name, - "config": config, - "display_name": display_name, - "node_count": node_count, - } - expected_response = spanner_instance_admin_pb2.Instance(**expected_response) - operation = operations_pb2.Operation( - name="operations/test_create_instance", done=True - ) - operation.response.Pack(expected_response) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_instance_v1.InstanceAdminClient() - - # Setup Request - parent = client.project_path("[PROJECT]") - instance_id = "instanceId-2101995259" - instance = {} - - response = client.create_instance(parent, instance_id, instance) - result = response.result() - assert expected_response == result - - assert len(channel.requests) == 1 - expected_request = spanner_instance_admin_pb2.CreateInstanceRequest( - parent=parent, instance_id=instance_id, instance=instance - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_create_instance_exception(self): - # Setup Response - error = status_pb2.Status() - operation = operations_pb2.Operation( - name="operations/test_create_instance_exception", done=True - ) - operation.error.CopyFrom(error) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_instance_v1.InstanceAdminClient() - - # Setup Request - parent = client.project_path("[PROJECT]") - instance_id = "instanceId-2101995259" - instance = {} - - response = client.create_instance(parent, instance_id, instance) - exception = response.exception() - assert exception.errors[0] == error - - def test_update_instance(self): - # Setup Expected Response - name = "name3373707" - config = "config-1354792126" - display_name = "displayName1615086568" - node_count = 1539922066 - expected_response = { - "name": name, - "config": config, - "display_name": display_name, - "node_count": node_count, - } - expected_response = spanner_instance_admin_pb2.Instance(**expected_response) - operation = operations_pb2.Operation( - name="operations/test_update_instance", done=True - ) - operation.response.Pack(expected_response) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_instance_v1.InstanceAdminClient() - - # Setup Request - instance = {} - field_mask = {} - - response = client.update_instance(instance, field_mask) - result = response.result() - assert expected_response == result - - assert 
len(channel.requests) == 1 - expected_request = spanner_instance_admin_pb2.UpdateInstanceRequest( - instance=instance, field_mask=field_mask - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_update_instance_exception(self): - # Setup Response - error = status_pb2.Status() - operation = operations_pb2.Operation( - name="operations/test_update_instance_exception", done=True - ) - operation.error.CopyFrom(error) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_instance_v1.InstanceAdminClient() - - # Setup Request - instance = {} - field_mask = {} - - response = client.update_instance(instance, field_mask) - exception = response.exception() - assert exception.errors[0] == error - - def test_delete_instance(self): - channel = ChannelStub() - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_instance_v1.InstanceAdminClient() - - # Setup Request - name = client.instance_path("[PROJECT]", "[INSTANCE]") - - client.delete_instance(name) - - assert len(channel.requests) == 1 - expected_request = spanner_instance_admin_pb2.DeleteInstanceRequest(name=name) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_delete_instance_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_instance_v1.InstanceAdminClient() - - # Setup request - name = client.instance_path("[PROJECT]", "[INSTANCE]") - - with pytest.raises(CustomException): - client.delete_instance(name) - - def test_set_iam_policy(self): - # Setup Expected Response - version = 351608024 - etag = b"21" - expected_response = {"version": version, "etag": etag} - expected_response = policy_pb2.Policy(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_instance_v1.InstanceAdminClient() - - # Setup Request - resource = "resource-341064690" - policy = {} - - response = client.set_iam_policy(resource, policy) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = iam_policy_pb2.SetIamPolicyRequest( - resource=resource, policy=policy - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_set_iam_policy_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_instance_v1.InstanceAdminClient() - - # Setup request - resource = "resource-341064690" - policy = {} - - with pytest.raises(CustomException): - client.set_iam_policy(resource, policy) - - def test_get_iam_policy(self): - # Setup Expected Response - version = 351608024 - etag = b"21" - expected_response = {"version": version, "etag": etag} - expected_response = policy_pb2.Policy(**expected_response) 
- - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_instance_v1.InstanceAdminClient() - - # Setup Request - resource = "resource-341064690" - - response = client.get_iam_policy(resource) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = iam_policy_pb2.GetIamPolicyRequest(resource=resource) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_get_iam_policy_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_instance_v1.InstanceAdminClient() - - # Setup request - resource = "resource-341064690" - - with pytest.raises(CustomException): - client.get_iam_policy(resource) - - def test_test_iam_permissions(self): - # Setup Expected Response - expected_response = {} - expected_response = iam_policy_pb2.TestIamPermissionsResponse( - **expected_response - ) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_instance_v1.InstanceAdminClient() - - # Setup Request - resource = "resource-341064690" - permissions = [] - - response = client.test_iam_permissions(resource, permissions) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = iam_policy_pb2.TestIamPermissionsRequest( - resource=resource, permissions=permissions - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_test_iam_permissions_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_admin_instance_v1.InstanceAdminClient() - - # Setup request - resource = "resource-341064690" - permissions = [] - - with pytest.raises(CustomException): - client.test_iam_permissions(resource, permissions) diff --git a/tests/unit/gapic/v1/test_spanner_client_v1.py b/tests/unit/gapic/v1/test_spanner_client_v1.py deleted file mode 100644 index a133902658..0000000000 --- a/tests/unit/gapic/v1/test_spanner_client_v1.py +++ /dev/null @@ -1,722 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""Unit tests.""" - -import mock -import pytest - -from google.cloud.spanner_v1.gapic import spanner_client as spanner_v1 -from google.cloud.spanner_v1.proto import keys_pb2 -from google.cloud.spanner_v1.proto import result_set_pb2 -from google.cloud.spanner_v1.proto import spanner_pb2 -from google.cloud.spanner_v1.proto import transaction_pb2 -from google.protobuf import empty_pb2 - - -class MultiCallableStub(object): - """Stub for the grpc.UnaryUnaryMultiCallable interface.""" - - def __init__(self, method, channel_stub): - self.method = method - self.channel_stub = channel_stub - - def __call__(self, request, timeout=None, metadata=None, credentials=None): - self.channel_stub.requests.append((self.method, request)) - - response = None - if self.channel_stub.responses: - response = self.channel_stub.responses.pop() - - if isinstance(response, Exception): - raise response - - if response: - return response - - -class ChannelStub(object): - """Stub for the grpc.Channel interface.""" - - def __init__(self, responses=[]): - self.responses = responses - self.requests = [] - - def unary_unary(self, method, request_serializer=None, response_deserializer=None): - return MultiCallableStub(method, self) - - def unary_stream(self, method, request_serializer=None, response_deserializer=None): - return MultiCallableStub(method, self) - - -class CustomException(Exception): - pass - - -class TestSpannerClient(object): - def test_create_session(self): - # Setup Expected Response - name = "name3373707" - expected_response = {"name": name} - expected_response = spanner_pb2.Session(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup Request - database = client.database_path("[PROJECT]", "[INSTANCE]", "[DATABASE]") - - response = client.create_session(database) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = spanner_pb2.CreateSessionRequest(database=database) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_create_session_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup request - database = client.database_path("[PROJECT]", "[INSTANCE]", "[DATABASE]") - - with pytest.raises(CustomException): - client.create_session(database) - - def test_batch_create_sessions(self): - # Setup Expected Response - expected_response = {} - expected_response = spanner_pb2.BatchCreateSessionsResponse(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup Request - database = client.database_path("[PROJECT]", "[INSTANCE]", "[DATABASE]") - session_count = 185691686 - - response = client.batch_create_sessions(database, session_count) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = spanner_pb2.BatchCreateSessionsRequest( - database=database, session_count=session_count - ) - 
actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_batch_create_sessions_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup request - database = client.database_path("[PROJECT]", "[INSTANCE]", "[DATABASE]") - session_count = 185691686 - - with pytest.raises(CustomException): - client.batch_create_sessions(database, session_count) - - def test_get_session(self): - # Setup Expected Response - name_2 = "name2-1052831874" - expected_response = {"name": name_2} - expected_response = spanner_pb2.Session(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup Request - name = client.session_path("[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]") - - response = client.get_session(name) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = spanner_pb2.GetSessionRequest(name=name) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_get_session_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup request - name = client.session_path("[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]") - - with pytest.raises(CustomException): - client.get_session(name) - - def test_list_sessions(self): - # Setup Expected Response - next_page_token = "" - sessions_element = {} - sessions = [sessions_element] - expected_response = {"next_page_token": next_page_token, "sessions": sessions} - expected_response = spanner_pb2.ListSessionsResponse(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup Request - database = client.database_path("[PROJECT]", "[INSTANCE]", "[DATABASE]") - - paged_list_response = client.list_sessions(database) - resources = list(paged_list_response) - assert len(resources) == 1 - - assert expected_response.sessions[0] == resources[0] - - assert len(channel.requests) == 1 - expected_request = spanner_pb2.ListSessionsRequest(database=database) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_list_sessions_exception(self): - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup request - database = client.database_path("[PROJECT]", "[INSTANCE]", "[DATABASE]") - - paged_list_response = client.list_sessions(database) - with pytest.raises(CustomException): - list(paged_list_response) - - def test_delete_session(self): - channel = ChannelStub() - patch = 
mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup Request - name = client.session_path("[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]") - - client.delete_session(name) - - assert len(channel.requests) == 1 - expected_request = spanner_pb2.DeleteSessionRequest(name=name) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_delete_session_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup request - name = client.session_path("[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]") - - with pytest.raises(CustomException): - client.delete_session(name) - - def test_execute_sql(self): - # Setup Expected Response - expected_response = {} - expected_response = result_set_pb2.ResultSet(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup Request - session = client.session_path( - "[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]" - ) - sql = "sql114126" - - response = client.execute_sql(session, sql) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = spanner_pb2.ExecuteSqlRequest(session=session, sql=sql) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_execute_sql_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup request - session = client.session_path( - "[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]" - ) - sql = "sql114126" - - with pytest.raises(CustomException): - client.execute_sql(session, sql) - - def test_execute_streaming_sql(self): - # Setup Expected Response - chunked_value = True - resume_token = b"103" - expected_response = { - "chunked_value": chunked_value, - "resume_token": resume_token, - } - expected_response = result_set_pb2.PartialResultSet(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[iter([expected_response])]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup Request - session = client.session_path( - "[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]" - ) - sql = "sql114126" - - response = client.execute_streaming_sql(session, sql) - resources = list(response) - assert len(resources) == 1 - assert expected_response == resources[0] - - assert len(channel.requests) == 1 - expected_request = spanner_pb2.ExecuteSqlRequest(session=session, sql=sql) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_execute_streaming_sql_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = 
mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup request - session = client.session_path( - "[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]" - ) - sql = "sql114126" - - with pytest.raises(CustomException): - client.execute_streaming_sql(session, sql) - - def test_execute_batch_dml(self): - # Setup Expected Response - expected_response = {} - expected_response = spanner_pb2.ExecuteBatchDmlResponse(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup Request - session = client.session_path( - "[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]" - ) - transaction = {} - statements = [] - seqno = 109325920 - - response = client.execute_batch_dml(session, transaction, statements, seqno) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = spanner_pb2.ExecuteBatchDmlRequest( - session=session, transaction=transaction, statements=statements, seqno=seqno - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_execute_batch_dml_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup request - session = client.session_path( - "[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]" - ) - transaction = {} - statements = [] - seqno = 109325920 - - with pytest.raises(CustomException): - client.execute_batch_dml(session, transaction, statements, seqno) - - def test_read(self): - # Setup Expected Response - expected_response = {} - expected_response = result_set_pb2.ResultSet(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup Request - session = client.session_path( - "[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]" - ) - table = "table110115790" - columns = [] - key_set = {} - - response = client.read(session, table, columns, key_set) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = spanner_pb2.ReadRequest( - session=session, table=table, columns=columns, key_set=key_set - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_read_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup request - session = client.session_path( - "[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]" - ) - table = "table110115790" - columns = [] - key_set = {} - - with pytest.raises(CustomException): - client.read(session, table, columns, key_set) - - def test_streaming_read(self): - # Setup Expected Response - chunked_value = True - resume_token = b"103" - expected_response = 
{ - "chunked_value": chunked_value, - "resume_token": resume_token, - } - expected_response = result_set_pb2.PartialResultSet(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[iter([expected_response])]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup Request - session = client.session_path( - "[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]" - ) - table = "table110115790" - columns = [] - key_set = {} - - response = client.streaming_read(session, table, columns, key_set) - resources = list(response) - assert len(resources) == 1 - assert expected_response == resources[0] - - assert len(channel.requests) == 1 - expected_request = spanner_pb2.ReadRequest( - session=session, table=table, columns=columns, key_set=key_set - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_streaming_read_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup request - session = client.session_path( - "[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]" - ) - table = "table110115790" - columns = [] - key_set = {} - - with pytest.raises(CustomException): - client.streaming_read(session, table, columns, key_set) - - def test_begin_transaction(self): - # Setup Expected Response - id_ = b"27" - expected_response = {"id": id_} - expected_response = transaction_pb2.Transaction(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup Request - session = client.session_path( - "[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]" - ) - options_ = {} - - response = client.begin_transaction(session, options_) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = spanner_pb2.BeginTransactionRequest( - session=session, options=options_ - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_begin_transaction_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup request - session = client.session_path( - "[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]" - ) - options_ = {} - - with pytest.raises(CustomException): - client.begin_transaction(session, options_) - - def test_commit(self): - # Setup Expected Response - expected_response = {} - expected_response = spanner_pb2.CommitResponse(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup Request - session = client.session_path( - "[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]" - ) - - response = client.commit(session) - assert 
expected_response == response - - assert len(channel.requests) == 1 - expected_request = spanner_pb2.CommitRequest(session=session) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_commit_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup request - session = client.session_path( - "[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]" - ) - - with pytest.raises(CustomException): - client.commit(session) - - def test_rollback(self): - channel = ChannelStub() - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup Request - session = client.session_path( - "[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]" - ) - transaction_id = b"28" - - client.rollback(session, transaction_id) - - assert len(channel.requests) == 1 - expected_request = spanner_pb2.RollbackRequest( - session=session, transaction_id=transaction_id - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_rollback_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup request - session = client.session_path( - "[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]" - ) - transaction_id = b"28" - - with pytest.raises(CustomException): - client.rollback(session, transaction_id) - - def test_partition_query(self): - # Setup Expected Response - expected_response = {} - expected_response = spanner_pb2.PartitionResponse(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup Request - session = client.session_path( - "[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]" - ) - sql = "sql114126" - - response = client.partition_query(session, sql) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = spanner_pb2.PartitionQueryRequest(session=session, sql=sql) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_partition_query_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup request - session = client.session_path( - "[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]" - ) - sql = "sql114126" - - with pytest.raises(CustomException): - client.partition_query(session, sql) - - def test_partition_read(self): - # Setup Expected Response - expected_response = {} - expected_response = spanner_pb2.PartitionResponse(**expected_response) - - # Mock the API response - channel = ChannelStub(responses=[expected_response]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as 
create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup Request - session = client.session_path( - "[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]" - ) - table = "table110115790" - key_set = {} - - response = client.partition_read(session, table, key_set) - assert expected_response == response - - assert len(channel.requests) == 1 - expected_request = spanner_pb2.PartitionReadRequest( - session=session, table=table, key_set=key_set - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_partition_read_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = spanner_v1.SpannerClient() - - # Setup request - session = client.session_path( - "[PROJECT]", "[INSTANCE]", "[DATABASE]", "[SESSION]" - ) - table = "table110115790" - key_set = {} - - with pytest.raises(CustomException): - client.partition_read(session, table, key_set) diff --git a/tests/unit/spanner_dbapi/__init__.py b/tests/unit/spanner_dbapi/__init__.py new file mode 100644 index 0000000000..377df12f71 --- /dev/null +++ b/tests/unit/spanner_dbapi/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2020 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/tests/unit/spanner_dbapi/test__helpers.py b/tests/unit/spanner_dbapi/test__helpers.py new file mode 100644 index 0000000000..01302707b5 --- /dev/null +++ b/tests/unit/spanner_dbapi/test__helpers.py @@ -0,0 +1,65 @@ +# Copyright 2020 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
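+
+# For orientation, a minimal sketch of the interface these tests pin down.
+# Only the constructor signature and the accessors asserted below are taken
+# from the code under test; anything beyond them would be an assumption:
+#
+#     from google.cloud.spanner_dbapi.cursor import ColumnInfo
+#
+#     col = ColumnInfo("col-name", 8, 5, 10, 3, None, False)
+#     col.name    # attribute access
+#     col[0]      # index access, like a DB-API description tuple
+#     col.fields  # the full 7-tuple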
+ +"""Cloud Spanner DB-API Connection class unit tests.""" + +import unittest + + +class TestColumnInfo(unittest.TestCase): + def test_ctor(self): + from google.cloud.spanner_dbapi.cursor import ColumnInfo + + name = "col-name" + type_code = 8 + display_size = 5 + internal_size = 10 + precision = 3 + scale = None + null_ok = False + + cols = ColumnInfo( + name, type_code, display_size, internal_size, precision, scale, null_ok + ) + + self.assertEqual(cols.name, name) + self.assertEqual(cols.type_code, type_code) + self.assertEqual(cols.display_size, display_size) + self.assertEqual(cols.internal_size, internal_size) + self.assertEqual(cols.precision, precision) + self.assertEqual(cols.scale, scale) + self.assertEqual(cols.null_ok, null_ok) + self.assertEqual( + cols.fields, + (name, type_code, display_size, internal_size, precision, scale, null_ok), + ) + + def test___get_item__(self): + from google.cloud.spanner_dbapi.cursor import ColumnInfo + + fields = ("col-name", 8, 5, 10, 3, None, False) + cols = ColumnInfo(*fields) + + for i in range(0, 7): + self.assertEqual(cols[i], fields[i]) + + def test___str__(self): + from google.cloud.spanner_dbapi.cursor import ColumnInfo + + cols = ColumnInfo("col-name", 8, None, 10, 3, None, False) + + self.assertEqual( + str(cols), + "ColumnInfo(name='col-name', type_code=8, internal_size=10, precision='3')", + ) diff --git a/tests/unit/spanner_dbapi/test_batch_dml_executor.py b/tests/unit/spanner_dbapi/test_batch_dml_executor.py new file mode 100644 index 0000000000..3dc387bcb6 --- /dev/null +++ b/tests/unit/spanner_dbapi/test_batch_dml_executor.py @@ -0,0 +1,54 @@ +# Copyright 2023 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import unittest +from unittest import mock + +from google.cloud.spanner_dbapi import ProgrammingError +from google.cloud.spanner_dbapi.batch_dml_executor import BatchDmlExecutor +from google.cloud.spanner_dbapi.parsed_statement import ( + ParsedStatement, + Statement, + StatementType, +) + + +class TestBatchDmlExecutor(unittest.TestCase): + @mock.patch("google.cloud.spanner_dbapi.cursor.Cursor") + def setUp(self, mock_cursor): + self._under_test = BatchDmlExecutor(mock_cursor) + + def test_execute_statement_non_dml_statement_type(self): + parsed_statement = ParsedStatement(StatementType.QUERY, Statement("sql")) + + with self.assertRaises(ProgrammingError): + self._under_test.execute_statement(parsed_statement) + + def test_execute_statement_insert_statement_type(self): + statement = Statement("sql") + + self._under_test.execute_statement( + ParsedStatement(StatementType.INSERT, statement) + ) + + self.assertEqual(self._under_test._statements, [statement]) + + def test_execute_statement_update_statement_type(self): + statement = Statement("sql") + + self._under_test.execute_statement( + ParsedStatement(StatementType.UPDATE, statement) + ) + + self.assertEqual(self._under_test._statements, [statement]) diff --git a/tests/unit/spanner_dbapi/test_checksum.py b/tests/unit/spanner_dbapi/test_checksum.py new file mode 100644 index 0000000000..a90d0da370 --- /dev/null +++ b/tests/unit/spanner_dbapi/test_checksum.py @@ -0,0 +1,71 @@ +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
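+
+# For orientation, the retry-checksum contract these tests pin down (all names
+# are taken from the assertions below):
+#
+#     original = ResultsChecksum()
+#     original.consume_result(5)  # hash each result as it is consumed
+#
+#     retried = ResultsChecksum()
+#     retried.consume_result(5)
+#
+#     # Returns None when the retried results match the original ones;
+#     # raises RetryAborted on missing, extra, or different results.
+#     _compare_checksums(original, retried)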
+ +import unittest + + +class Test_compare_checksums(unittest.TestCase): + def test_equal(self): + from google.cloud.spanner_dbapi.checksum import _compare_checksums + from google.cloud.spanner_dbapi.checksum import ResultsChecksum + + original = ResultsChecksum() + original.consume_result(5) + + retried = ResultsChecksum() + retried.consume_result(5) + + self.assertIsNone(_compare_checksums(original, retried)) + + def test_less_results(self): + from google.cloud.spanner_dbapi.checksum import _compare_checksums + from google.cloud.spanner_dbapi.checksum import ResultsChecksum + from google.cloud.spanner_dbapi.exceptions import RetryAborted + + original = ResultsChecksum() + original.consume_result(5) + + retried = ResultsChecksum() + + with self.assertRaises(RetryAborted): + _compare_checksums(original, retried) + + def test_more_results(self): + from google.cloud.spanner_dbapi.checksum import _compare_checksums + from google.cloud.spanner_dbapi.checksum import ResultsChecksum + from google.cloud.spanner_dbapi.exceptions import RetryAborted + + original = ResultsChecksum() + original.consume_result(5) + + retried = ResultsChecksum() + retried.consume_result(5) + retried.consume_result(2) + + with self.assertRaises(RetryAborted): + _compare_checksums(original, retried) + + def test_mismatch(self): + from google.cloud.spanner_dbapi.checksum import _compare_checksums + from google.cloud.spanner_dbapi.checksum import ResultsChecksum + from google.cloud.spanner_dbapi.exceptions import RetryAborted + + original = ResultsChecksum() + original.consume_result(5) + + retried = ResultsChecksum() + retried.consume_result(2) + + with self.assertRaises(RetryAborted): + _compare_checksums(original, retried) diff --git a/tests/unit/spanner_dbapi/test_client_side_statement_executor.py b/tests/unit/spanner_dbapi/test_client_side_statement_executor.py new file mode 100644 index 0000000000..888f81e830 --- /dev/null +++ b/tests/unit/spanner_dbapi/test_client_side_statement_executor.py @@ -0,0 +1,54 @@ +# Copyright 2025 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
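+
+# For orientation (inputs and expected outputs are taken from the test below):
+# a BEGIN statement is classified first, then mapped to an isolation level:
+#
+#     parsed = classify_statement("begin isolation level repeatable read")
+#     _get_isolation_level(parsed)
+#     # -> TransactionOptions.IsolationLevel.REPEATABLE_READ
+#
+# A bare "begin" maps to None (no explicit isolation level).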
+ +import unittest + +from google.cloud.spanner_dbapi.client_side_statement_executor import ( + _get_isolation_level, +) +from google.cloud.spanner_dbapi.parse_utils import classify_statement +from google.cloud.spanner_v1 import TransactionOptions + + +class TestClientSideStatementExecutor(unittest.TestCase): + def test_get_isolation_level(self): + self.assertIsNone(_get_isolation_level(classify_statement("begin"))) + self.assertEqual( + TransactionOptions.IsolationLevel.SERIALIZABLE, + _get_isolation_level( + classify_statement("begin isolation level serializable") + ), + ) + self.assertEqual( + TransactionOptions.IsolationLevel.SERIALIZABLE, + _get_isolation_level( + classify_statement( + "begin transaction isolation level serializable " + ) + ), + ) + self.assertEqual( + TransactionOptions.IsolationLevel.REPEATABLE_READ, + _get_isolation_level( + classify_statement("begin isolation level repeatable read") + ), + ) + self.assertEqual( + TransactionOptions.IsolationLevel.REPEATABLE_READ, + _get_isolation_level( + classify_statement( + "begin transaction isolation level repeatable read " + ) + ), + ) diff --git a/tests/unit/spanner_dbapi/test_connect.py b/tests/unit/spanner_dbapi/test_connect.py new file mode 100644 index 0000000000..5fd2b74a17 --- /dev/null +++ b/tests/unit/spanner_dbapi/test_connect.py @@ -0,0 +1,153 @@ +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License.
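+
+# For orientation, the connect() call shapes exercised below (all arguments
+# are taken from the tests; no other defaults are assumed):
+#
+#     connection = connect("test-instance", "test-database")
+#
+#     connection = connect(
+#         "test-instance",
+#         "test-database",
+#         "test-project",
+#         credentials,  # a Credentials object, or a service-account JSON path
+#         pool=pool,
+#         database_role="some_role",
+#         user_agent="user-agent",
+#         route_to_leader_enabled=False,
+#     )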
+ +"""connect() module function unit tests.""" + +import unittest +from unittest import mock + +from google.auth.credentials import AnonymousCredentials + +from tests._builders import build_scoped_credentials + +INSTANCE = "test-instance" +DATABASE = "test-database" +PROJECT = "test-project" +USER_AGENT = "user-agent" + + +@mock.patch("google.cloud.spanner_v1.Client") +class Test_connect(unittest.TestCase): + def test_w_implicit(self, mock_client): + from google.cloud.spanner_dbapi import connect + from google.cloud.spanner_dbapi import Connection + + client = mock_client.return_value + instance = client.instance.return_value + database = instance.database.return_value + + connection = connect( + "test-instance", + "test-database", + project="test-project", + credentials=AnonymousCredentials(), + client_options={"api_endpoint": "none"}, + ) + + self.assertIsInstance(connection, Connection) + + self.assertIs(connection.instance, instance) + client.instance.assert_called_once_with(INSTANCE) + mock_client.assert_called_once_with( + project=mock.ANY, + credentials=mock.ANY, + client_info=mock.ANY, + client_options=mock.ANY, + route_to_leader_enabled=True, + ) + + self.assertIs(connection.database, database) + instance.database.assert_called_once_with( + DATABASE, pool=None, database_role=None, logger=None + ) + # Database constructs its own pool + self.assertIsNotNone(connection.database._pool) + self.assertTrue(connection.instance._client.route_to_leader_enabled) + + def test_w_explicit(self, mock_client): + from google.cloud.spanner_v1.pool import AbstractSessionPool + from google.cloud.spanner_dbapi import connect + from google.cloud.spanner_dbapi import Connection + from google.cloud.spanner_dbapi.version import PY_VERSION + + credentials = build_scoped_credentials() + pool = mock.create_autospec(AbstractSessionPool) + client = mock_client.return_value + instance = client.instance.return_value + database = instance.database.return_value + role = "some_role" + + connection = connect( + INSTANCE, + DATABASE, + PROJECT, + credentials, + pool=pool, + database_role=role, + user_agent=USER_AGENT, + route_to_leader_enabled=False, + ) + + self.assertIsInstance(connection, Connection) + + mock_client.assert_called_once_with( + project=PROJECT, + credentials=credentials, + client_info=mock.ANY, + client_options=mock.ANY, + route_to_leader_enabled=False, + ) + client_info = mock_client.call_args_list[0][1]["client_info"] + self.assertEqual(client_info.user_agent, USER_AGENT) + self.assertEqual(client_info.python_version, PY_VERSION) + + self.assertIs(connection.instance, instance) + client.instance.assert_called_once_with(INSTANCE) + + self.assertIs(connection.database, database) + instance.database.assert_called_once_with( + DATABASE, pool=pool, database_role=role, logger=None + ) + + def test_w_credential_file_path(self, mock_client): + from google.cloud.spanner_dbapi import connect + from google.cloud.spanner_dbapi import Connection + from google.cloud.spanner_dbapi.version import PY_VERSION + + credentials_path = "dummy/file/path.json" + + connection = connect( + INSTANCE, + DATABASE, + PROJECT, + credentials=credentials_path, + user_agent=USER_AGENT, + ) + + self.assertIsInstance(connection, Connection) + + factory = mock_client.from_service_account_json + factory.assert_called_once_with( + credentials_path, + project=PROJECT, + client_info=mock.ANY, + route_to_leader_enabled=True, + ) + client_info = factory.call_args_list[0][1]["client_info"] + self.assertEqual(client_info.user_agent, 
USER_AGENT) + self.assertEqual(client_info.python_version, PY_VERSION) + + def test_with_kwargs(self, mock_client): + from google.cloud.spanner_dbapi import connect + from google.cloud.spanner_dbapi import Connection + + client = mock_client.return_value + instance = client.instance.return_value + database = instance.database.return_value + self.assertIsNotNone(database) + + connection = connect(INSTANCE, DATABASE, ignore_transaction_warnings=True) + + self.assertIsInstance(connection, Connection) + self.assertTrue(connection._ignore_transaction_warnings) diff --git a/tests/unit/spanner_dbapi/test_connection.py b/tests/unit/spanner_dbapi/test_connection.py new file mode 100644 index 0000000000..6e8159425f --- /dev/null +++ b/tests/unit/spanner_dbapi/test_connection.py @@ -0,0 +1,909 @@ +# Copyright 2020 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Cloud Spanner DB-API Connection class unit tests.""" + +import datetime +import mock +import unittest +import warnings +import pytest +from google.auth.credentials import AnonymousCredentials + +from google.cloud.spanner_admin_database_v1 import DatabaseDialect +from google.cloud.spanner_dbapi.batch_dml_executor import BatchMode +from google.cloud.spanner_dbapi.exceptions import ( + InterfaceError, + OperationalError, + ProgrammingError, +) +from google.cloud.spanner_dbapi import Connection +from google.cloud.spanner_dbapi.connection import CLIENT_TRANSACTION_NOT_STARTED_WARNING +from google.cloud.spanner_dbapi.parsed_statement import ( + ParsedStatement, + StatementType, + Statement, + ClientSideStatementType, + AutocommitDmlMode, +) +from google.cloud.spanner_v1.database_sessions_manager import TransactionType +from tests._builders import build_connection, build_session + +PROJECT = "test-project" +INSTANCE = "test-instance" +DATABASE = "test-database" +USER_AGENT = "user-agent" + + +class TestConnection(unittest.TestCase): + def setUp(self): + self._under_test = self._make_connection() + + def _get_client_info(self): + from google.api_core.gapic_v1.client_info import ClientInfo + + return ClientInfo(user_agent=USER_AGENT) + + def _make_connection( + self, database_dialect=DatabaseDialect.DATABASE_DIALECT_UNSPECIFIED, **kwargs + ): + from google.cloud.spanner_v1.instance import Instance + from google.cloud.spanner_v1.client import Client + + # We don't need a real Client object to test the constructor + client = Client( + project="test", + credentials=AnonymousCredentials(), + client_options={"api_endpoint": "none"}, + ) + instance = Instance(INSTANCE, client=client) + database = instance.database(DATABASE, database_dialect=database_dialect) + return Connection(instance, database, **kwargs) + + @mock.patch("google.cloud.spanner_dbapi.connection.Connection.commit") + def test_autocommit_setter_transaction_not_started(self, mock_commit): + connection = self._make_connection() + + connection.autocommit = True + + mock_commit.assert_not_called() + self.assertTrue(connection._autocommit) + + connection.autocommit 
= False + mock_commit.assert_not_called() + self.assertFalse(connection._autocommit) + + @mock.patch("google.cloud.spanner_dbapi.connection.Connection.commit") + def test_autocommit_setter_transaction_started(self, mock_commit): + connection = self._make_connection() + connection._spanner_transaction_started = True + + connection.autocommit = True + + mock_commit.assert_called_once() + self.assertTrue(connection._autocommit) + + def test_property_database(self): + from google.cloud.spanner_v1.database import Database + + connection = self._make_connection() + self.assertIsInstance(connection.database, Database) + self.assertEqual(connection.database, connection._database) + + def test_property_instance(self): + from google.cloud.spanner_v1.instance import Instance + + connection = self._make_connection() + self.assertIsInstance(connection.instance, Instance) + self.assertEqual(connection.instance, connection._instance) + + def test_property_current_schema_google_sql_dialect(self): + from google.cloud.spanner_v1.database import Database + + connection = self._make_connection( + database_dialect=DatabaseDialect.GOOGLE_STANDARD_SQL + ) + self.assertIsInstance(connection.database, Database) + self.assertEqual(connection.current_schema, "") + + def test_property_current_schema_postgres_sql_dialect(self): + from google.cloud.spanner_v1.database import Database + + connection = self._make_connection(database_dialect=DatabaseDialect.POSTGRESQL) + self.assertIsInstance(connection.database, Database) + self.assertEqual(connection.current_schema, "public") + + def test_read_only_connection(self): + connection = self._make_connection(read_only=True) + self.assertTrue(connection.read_only) + + connection._spanner_transaction_started = True + with self.assertRaisesRegex( + ValueError, + "Connection read/write mode can't be changed while a transaction is in progress. " + "Commit or rollback the current transaction and try again.", + ): + connection.read_only = False + + # Verify that we can set the value to the same value as it already has. 
+ connection.read_only = True + self.assertTrue(connection.read_only) + + connection._spanner_transaction_started = False + connection.read_only = False + self.assertFalse(connection.read_only) + + def test__session_checkout_read_only(self): + connection = build_connection(read_only=True) + database = connection._database + sessions_manager = database._sessions_manager + + expected_session = build_session(database=database) + sessions_manager.get_session = mock.MagicMock(return_value=expected_session) + + actual_session = connection._session_checkout() + + self.assertEqual(actual_session, expected_session) + sessions_manager.get_session.assert_called_once_with(TransactionType.READ_ONLY) + + def test__session_checkout_read_write(self): + connection = build_connection(read_only=False) + database = connection._database + sessions_manager = database._sessions_manager + + expected_session = build_session(database=database) + sessions_manager.get_session = mock.MagicMock(return_value=expected_session) + + actual_session = connection._session_checkout() + + self.assertEqual(actual_session, expected_session) + sessions_manager.get_session.assert_called_once_with(TransactionType.READ_WRITE) + + def test_session_checkout_database_error(self): + connection = Connection(INSTANCE) + + with pytest.raises(ValueError): + connection._session_checkout() + + def test__release_session(self): + connection = build_connection() + sessions_manager = connection._database._sessions_manager + + session = connection._session = build_session(database=connection._database) + put_session = sessions_manager.put_session = mock.MagicMock() + + connection._release_session() + + put_session.assert_called_once_with(session) + + def test_release_session_database_error(self): + connection = Connection(INSTANCE) + connection._session = "session" + with pytest.raises(ValueError): + connection._release_session() + + def test_transaction_checkout(self): + connection = Connection(INSTANCE, DATABASE) + mock_checkout = mock.MagicMock(autospec=True) + connection._session_checkout = mock_checkout + + connection.transaction_checkout() + + mock_checkout.assert_called_once_with() + + mock_transaction = mock.MagicMock() + connection._transaction = mock_transaction + connection._spanner_transaction_started = True + + self.assertEqual(connection.transaction_checkout(), mock_transaction) + + connection._autocommit = True + self.assertIsNone(connection.transaction_checkout()) + + def test_snapshot_checkout(self): + connection = build_connection(read_only=True) + connection.autocommit = False + + session_checkout = mock.Mock(wraps=connection._session_checkout) + release_session = mock.Mock(wraps=connection._release_session) + connection._session_checkout = session_checkout + connection._release_session = release_session + + snapshot = connection.snapshot_checkout() + session_checkout.assert_called_once() + + self.assertEqual(snapshot, connection.snapshot_checkout()) + + connection.commit() + self.assertIsNone(connection._snapshot) + release_session.assert_called_once() + + connection.snapshot_checkout() + self.assertIsNotNone(connection._snapshot) + + connection.rollback() + self.assertIsNone(connection._snapshot) + self.assertEqual(release_session.call_count, 2) + + connection.autocommit = True + self.assertIsNone(connection.snapshot_checkout()) + + def test_close(self): + from google.cloud.spanner_dbapi import connect + from google.cloud.spanner_dbapi import InterfaceError + + connection = connect( + "test-instance", + "test-database", +
project="test-project", + credentials=AnonymousCredentials(), + client_options={"api_endpoint": "none"}, + ) + + self.assertFalse(connection.is_closed) + + connection.close() + + self.assertTrue(connection.is_closed) + + with self.assertRaises(InterfaceError): + connection.cursor() + + mock_transaction = mock.MagicMock() + connection._transaction = mock_transaction + connection._spanner_transaction_started = True + + mock_rollback = mock.MagicMock() + mock_transaction.rollback = mock_rollback + + connection.close() + + mock_rollback.assert_called_once_with() + + connection._transaction = mock.MagicMock() + connection._own_pool = False + connection.close() + + self.assertTrue(connection.is_closed) + + @mock.patch.object(warnings, "warn") + def test_commit_with_spanner_transaction_not_started(self, mock_warn): + self._under_test._spanner_transaction_started = False + + with mock.patch( + "google.cloud.spanner_dbapi.connection.Connection._release_session" + ) as mock_release: + self._under_test.commit() + + mock_release.assert_called() + + def test_commit(self): + self._under_test._transaction = mock_transaction = mock.MagicMock() + self._under_test._spanner_transaction_started = True + mock_transaction.commit = mock_commit = mock.MagicMock() + transaction_helper = self._under_test._transaction_helper + transaction_helper._statement_result_details_list = [{}, {}] + + with mock.patch( + "google.cloud.spanner_dbapi.connection.Connection._release_session" + ) as mock_release: + self._under_test.commit() + + mock_commit.assert_called_once_with() + mock_release.assert_called_once_with() + self.assertEqual(len(transaction_helper._statement_result_details_list), 0) + + @mock.patch.object(warnings, "warn") + def test_commit_in_autocommit_mode(self, mock_warn): + self._under_test._autocommit = True + + self._under_test.commit() + + mock_warn.assert_called_once_with( + CLIENT_TRANSACTION_NOT_STARTED_WARNING, UserWarning, stacklevel=2 + ) + + @mock.patch.object(warnings, "warn") + def test_commit_in_autocommit_mode_with_ignore_warnings(self, mock_warn): + conn = self._make_connection( + DatabaseDialect.DATABASE_DIALECT_UNSPECIFIED, + ignore_transaction_warnings=True, + ) + assert conn._ignore_transaction_warnings + conn._autocommit = True + + conn.commit() + + assert not mock_warn.warn.called + + def test_commit_database_error(self): + from google.cloud.spanner_dbapi import Connection + + connection = Connection(INSTANCE) + + with pytest.raises(ValueError): + connection.commit() + + @mock.patch.object(warnings, "warn") + def test_rollback_spanner_transaction_not_started(self, mock_warn): + self._under_test._spanner_transaction_started = False + + with mock.patch( + "google.cloud.spanner_dbapi.connection.Connection._release_session" + ) as mock_release: + self._under_test.rollback() + + mock_release.assert_called() + + @mock.patch.object(warnings, "warn") + def test_rollback(self, mock_warn): + mock_transaction = mock.MagicMock() + self._under_test._spanner_transaction_started = True + self._under_test._transaction = mock_transaction + mock_rollback = mock.MagicMock() + mock_transaction.rollback = mock_rollback + transaction_helper = self._under_test._transaction_helper + transaction_helper._statement_result_details_list = [{}, {}] + with mock.patch( + "google.cloud.spanner_dbapi.connection.Connection._release_session" + ) as mock_release: + self._under_test.rollback() + + self.assertEqual(len(transaction_helper._statement_result_details_list), 0) + mock_rollback.assert_called_once_with() + 
mock_release.assert_called_once_with() + + @mock.patch.object(warnings, "warn") + def test_rollback_in_autocommit_mode(self, mock_warn): + self._under_test._autocommit = True + + self._under_test.rollback() + + mock_warn.assert_called_once_with( + CLIENT_TRANSACTION_NOT_STARTED_WARNING, UserWarning, stacklevel=2 + ) + + def test_start_batch_dml_batch_mode_active(self): + self._under_test._batch_mode = BatchMode.DML + cursor = self._under_test.cursor() + + with self.assertRaises(ProgrammingError): + self._under_test.start_batch_dml(cursor) + + def test_start_batch_dml_connection_read_only(self): + self._under_test.read_only = True + cursor = self._under_test.cursor() + + with self.assertRaises(ProgrammingError): + self._under_test.start_batch_dml(cursor) + + def test_start_batch_dml(self): + cursor = self._under_test.cursor() + + self._under_test.start_batch_dml(cursor) + + self.assertEqual(self._under_test._batch_mode, BatchMode.DML) + + def test_execute_batch_dml_batch_mode_inactive(self): + self._under_test._batch_mode = BatchMode.NONE + + with self.assertRaises(ProgrammingError): + self._under_test.execute_batch_dml_statement( + ParsedStatement(StatementType.UPDATE, Statement("sql")) + ) + + @mock.patch( + "google.cloud.spanner_dbapi.batch_dml_executor.BatchDmlExecutor", autospec=True + ) + def test_execute_batch_dml(self, mock_batch_dml_executor): + self._under_test._batch_mode = BatchMode.DML + self._under_test._batch_dml_executor = mock_batch_dml_executor + + parsed_statement = ParsedStatement(StatementType.UPDATE, Statement("sql")) + self._under_test.execute_batch_dml_statement(parsed_statement) + + mock_batch_dml_executor.execute_statement.assert_called_once_with( + parsed_statement + ) + + @mock.patch( + "google.cloud.spanner_dbapi.batch_dml_executor.BatchDmlExecutor", autospec=True + ) + def test_run_batch_batch_mode_inactive(self, mock_batch_dml_executor): + self._under_test._batch_mode = BatchMode.NONE + self._under_test._batch_dml_executor = mock_batch_dml_executor + + with self.assertRaises(ProgrammingError): + self._under_test.run_batch() + + @mock.patch( + "google.cloud.spanner_dbapi.batch_dml_executor.BatchDmlExecutor", autospec=True + ) + def test_run_batch(self, mock_batch_dml_executor): + self._under_test._batch_mode = BatchMode.DML + self._under_test._batch_dml_executor = mock_batch_dml_executor + + self._under_test.run_batch() + + mock_batch_dml_executor.run_batch_dml.assert_called_once_with() + self.assertEqual(self._under_test._batch_mode, BatchMode.NONE) + self.assertEqual(self._under_test._batch_dml_executor, None) + + @mock.patch( + "google.cloud.spanner_dbapi.batch_dml_executor.BatchDmlExecutor", autospec=True + ) + def test_abort_batch_batch_mode_inactive(self, mock_batch_dml_executor): + self._under_test._batch_mode = BatchMode.NONE + self._under_test._batch_dml_executor = mock_batch_dml_executor + + with self.assertRaises(ProgrammingError): + self._under_test.abort_batch() + + @mock.patch( + "google.cloud.spanner_dbapi.batch_dml_executor.BatchDmlExecutor", autospec=True + ) + def test_abort_dml_batch(self, mock_batch_dml_executor): + self._under_test._batch_mode = BatchMode.DML + self._under_test._batch_dml_executor = mock_batch_dml_executor + + self._under_test.abort_batch() + + self.assertEqual(self._under_test._batch_mode, BatchMode.NONE) + self.assertEqual(self._under_test._batch_dml_executor, None) + + def test_set_autocommit_dml_mode_with_autocommit_false(self): + self._under_test.autocommit = False + parsed_statement = ParsedStatement( + 
StatementType.CLIENT_SIDE, + Statement("sql"), + ClientSideStatementType.SET_AUTOCOMMIT_DML_MODE, + ["PARTITIONED_NON_ATOMIC"], + ) + + with self.assertRaises(ProgrammingError): + self._under_test._set_autocommit_dml_mode(parsed_statement) + + def test_set_autocommit_dml_mode_with_readonly(self): + self._under_test.autocommit = True + self._under_test.read_only = True + parsed_statement = ParsedStatement( + StatementType.CLIENT_SIDE, + Statement("sql"), + ClientSideStatementType.SET_AUTOCOMMIT_DML_MODE, + ["PARTITIONED_NON_ATOMIC"], + ) + + with self.assertRaises(ProgrammingError): + self._under_test._set_autocommit_dml_mode(parsed_statement) + + def test_set_autocommit_dml_mode_with_batch_mode(self): + self._under_test.autocommit = True + parsed_statement = ParsedStatement( + StatementType.CLIENT_SIDE, + Statement("sql"), + ClientSideStatementType.SET_AUTOCOMMIT_DML_MODE, + ["PARTITIONED_NON_ATOMIC"], + ) + + self._under_test._set_autocommit_dml_mode(parsed_statement) + + assert ( + self._under_test.autocommit_dml_mode + == AutocommitDmlMode.PARTITIONED_NON_ATOMIC + ) + + def test_set_autocommit_dml_mode(self): + self._under_test.autocommit = True + parsed_statement = ParsedStatement( + StatementType.CLIENT_SIDE, + Statement("sql"), + ClientSideStatementType.SET_AUTOCOMMIT_DML_MODE, + ["PARTITIONED_NON_ATOMIC"], + ) + + self._under_test._set_autocommit_dml_mode(parsed_statement) + assert ( + self._under_test.autocommit_dml_mode + == AutocommitDmlMode.PARTITIONED_NON_ATOMIC + ) + + @mock.patch("google.cloud.spanner_v1.database.Database", autospec=True) + def test_run_prior_DDL_statements(self, mock_database): + from google.cloud.spanner_dbapi import Connection, InterfaceError + + connection = Connection(INSTANCE, mock_database) + + connection.run_prior_DDL_statements() + mock_database.update_ddl.assert_not_called() + + ddl = ["ddl"] + connection._ddl_statements = ddl + + connection.run_prior_DDL_statements() + mock_database.update_ddl.assert_called_once_with(ddl) + + connection.is_closed = True + + with self.assertRaises(InterfaceError): + connection.run_prior_DDL_statements() + + def test_run_prior_DDL_statements_database_error(self): + from google.cloud.spanner_dbapi import Connection + + connection = Connection(INSTANCE) + with pytest.raises(ValueError): + connection.run_prior_DDL_statements() + + def test_as_context_manager(self): + connection = self._make_connection() + with connection as conn: + self.assertEqual(conn, connection) + + self.assertTrue(connection.is_closed) + + def test_begin_cursor_closed(self): + self._under_test.close() + + with self.assertRaises(InterfaceError): + self._under_test.begin() + + self.assertEqual(self._under_test._transaction_begin_marked, False) + + def test_begin_transaction_begin_marked(self): + self._under_test._transaction_begin_marked = True + + with self.assertRaises(OperationalError): + self._under_test.begin() + + def test_begin_transaction_started(self): + self._under_test._spanner_transaction_started = True + + with self.assertRaises(OperationalError): + self._under_test.begin() + + self.assertEqual(self._under_test._transaction_begin_marked, False) + + def test_begin(self): + self._under_test.begin() + + self.assertEqual(self._under_test._transaction_begin_marked, True) + + def test_validate_ok(self): + connection = self._make_connection() + + # mock snapshot context manager + snapshot_obj = mock.Mock() + snapshot_obj.execute_sql = mock.Mock(return_value=[[1]]) + + snapshot_ctx = mock.Mock() + snapshot_ctx.__enter__ = 
mock.Mock(return_value=snapshot_obj) + snapshot_ctx.__exit__ = exit_ctx_func + snapshot_method = mock.Mock(return_value=snapshot_ctx) + + connection.database.snapshot = snapshot_method + + connection.validate() + snapshot_obj.execute_sql.assert_called_once_with("SELECT 1") + + def test_validate_fail(self): + from google.cloud.spanner_dbapi.exceptions import OperationalError + + connection = self._make_connection() + + # mock snapshot context manager + snapshot_obj = mock.Mock() + snapshot_obj.execute_sql = mock.Mock(return_value=[[3]]) + + snapshot_ctx = mock.Mock() + snapshot_ctx.__enter__ = mock.Mock(return_value=snapshot_obj) + snapshot_ctx.__exit__ = exit_ctx_func + snapshot_method = mock.Mock(return_value=snapshot_ctx) + + connection.database.snapshot = snapshot_method + + with self.assertRaises(OperationalError): + connection.validate() + + snapshot_obj.execute_sql.assert_called_once_with("SELECT 1") + + def test_validate_error(self): + from google.cloud.exceptions import NotFound + + connection = self._make_connection() + + # mock snapshot context manager + snapshot_obj = mock.Mock() + snapshot_obj.execute_sql = mock.Mock(side_effect=NotFound("Not found")) + + snapshot_ctx = mock.Mock() + snapshot_ctx.__enter__ = mock.Mock(return_value=snapshot_obj) + snapshot_ctx.__exit__ = exit_ctx_func + snapshot_method = mock.Mock(return_value=snapshot_ctx) + + connection.database.snapshot = snapshot_method + + with self.assertRaises(NotFound): + connection.validate() + + snapshot_obj.execute_sql.assert_called_once_with("SELECT 1") + + def test_validate_database_error(self): + from google.cloud.spanner_dbapi import Connection + + connection = Connection(INSTANCE) + + with pytest.raises(ValueError): + connection.validate() + + def test_validate_closed(self): + from google.cloud.spanner_dbapi.exceptions import InterfaceError + + connection = self._make_connection() + connection.close() + + with self.assertRaises(InterfaceError): + connection.validate() + + def test_staleness_invalid_value(self): + """Check that `staleness` property accepts only correct values.""" + connection = self._make_connection() + + # incorrect staleness type + with self.assertRaises(ValueError): + connection.staleness = {"something": 4} + + # no expected staleness types + with self.assertRaises(ValueError): + connection.staleness = {} + + def test_staleness_inside_transaction(self): + """ + Check that it's impossible to change the `staleness` + option if a transaction is in progress. + """ + connection = self._make_connection() + connection._spanner_transaction_started = True + connection._transaction = mock.Mock() + + with self.assertRaises(ValueError): + connection.staleness = {"read_timestamp": datetime.datetime(2021, 9, 21)} + + def test_staleness_inside_transaction_same_value(self): + """ + Verify that setting `staleness` to the same value in a transaction is allowed. + """ + connection = self._make_connection() + connection.staleness = {"read_timestamp": datetime.datetime(2021, 9, 21)} + connection._spanner_transaction_started = True + connection._transaction = mock.Mock() + + connection.staleness = {"read_timestamp": datetime.datetime(2021, 9, 21)} + self.assertEqual( + connection.staleness, {"read_timestamp": datetime.datetime(2021, 9, 21)} + ) + + def test_staleness_multi_use(self): + """ + Check that `staleness` option is correctly + sent to the `Snapshot()` constructor. 
+ + READ_ONLY, NOT AUTOCOMMIT + """ + timestamp = datetime.datetime(2021, 9, 20) + + connection = self._make_connection() + connection._session = "session" + connection.read_only = True + connection.staleness = {"read_timestamp": timestamp} + + with mock.patch( + "google.cloud.spanner_dbapi.connection.Snapshot" + ) as snapshot_mock: + connection.snapshot_checkout() + + snapshot_mock.assert_called_with( + "session", multi_use=True, read_timestamp=timestamp + ) + + @mock.patch("google.cloud.spanner_dbapi.cursor.PeekIterator") + def test_staleness_single_use_autocommit(self, MockedPeekIterator): + """ + Check that `staleness` option is correctly + sent to the snapshot context manager. + + NOT READ_ONLY, AUTOCOMMIT + """ + timestamp = datetime.datetime(2021, 9, 20) + + connection = self._make_connection() + connection._session_checkout = mock.MagicMock(autospec=True) + + connection.autocommit = True + connection.staleness = {"read_timestamp": timestamp} + + # mock snapshot context manager + snapshot_obj = mock.Mock() + _result_set = mock.Mock() + snapshot_obj.execute_sql.return_value = _result_set + _result_set.stats = None + + snapshot_ctx = mock.Mock() + snapshot_ctx.__enter__ = mock.Mock(return_value=snapshot_obj) + snapshot_ctx.__exit__ = exit_ctx_func + snapshot_method = mock.Mock(return_value=snapshot_ctx) + + connection.database.snapshot = snapshot_method + + cursor = connection.cursor() + cursor.execute("SELECT 1") + + connection.database.snapshot.assert_called_with(read_timestamp=timestamp) + + @mock.patch("google.cloud.spanner_dbapi.cursor.PeekIterator") + def test_staleness_single_use_readonly_autocommit(self, MockedPeekIterator): + """ + Check that `staleness` option is correctly sent to the + snapshot context manager while in `autocommit` mode. 
+ + READ_ONLY, AUTOCOMMIT + """ + timestamp = datetime.datetime(2021, 9, 20) + + connection = self._make_connection() + connection.autocommit = True + connection.read_only = True + connection._session_checkout = mock.MagicMock(autospec=True) + + connection.staleness = {"read_timestamp": timestamp} + + # mock snapshot context manager + snapshot_obj = mock.Mock() + _result_set = mock.Mock() + _result_set.stats = None + + snapshot_obj.execute_sql.return_value = _result_set + + snapshot_ctx = mock.Mock() + snapshot_ctx.__enter__ = mock.Mock(return_value=snapshot_obj) + snapshot_ctx.__exit__ = exit_ctx_func + snapshot_method = mock.Mock(return_value=snapshot_ctx) + + connection.database.snapshot = snapshot_method + + cursor = connection.cursor() + cursor.execute("SELECT 1") + + connection.database.snapshot.assert_called_with(read_timestamp=timestamp) + + def test_request_priority(self): + from google.cloud.spanner_dbapi.parsed_statement import Statement + from google.cloud.spanner_v1 import RequestOptions + + sql = "SELECT 1" + params = [] + param_types = {} + priority = 2 + + connection = self._make_connection() + connection._spanner_transaction_started = True + connection._transaction = mock.Mock() + connection._transaction.execute_sql = mock.Mock() + + connection.request_priority = priority + + req_opts = RequestOptions(priority=priority) + + connection.run_statement(Statement(sql, params, param_types)) + + connection._transaction.execute_sql.assert_called_with( + sql, params, param_types=param_types, request_options=req_opts + ) + assert connection.request_priority is None + + # check that priority is applied for only one request + connection.run_statement(Statement(sql, params, param_types)) + + connection._transaction.execute_sql.assert_called_with( + sql, params, param_types=param_types, request_options=None + ) + + def test_custom_client_connection(self): + from google.cloud.spanner_dbapi import connect + + client = _Client() + connection = connect("test-instance", "test-database", client=client) + self.assertTrue(connection.instance._client == client) + + def test_custom_database_role(self): + from google.cloud.spanner_dbapi import connect + + role = "some_role" + connection = connect( + "test-instance", + "test-database", + project="test-project", + database_role=role, + credentials=AnonymousCredentials(), + client_options={"api_endpoint": "none"}, + ) + self.assertEqual(connection.database.database_role, role) + + def test_invalid_custom_client_connection(self): + from google.cloud.spanner_dbapi import connect + + client = _Client() + with pytest.raises(ValueError): + connect( + "test-instance", + "test-database", + project="invalid_project", + client=client, + ) + + def test_connection_wo_database(self): + from google.cloud.spanner_dbapi import connect + + connection = connect( + "test-instance", + credentials=AnonymousCredentials(), + project="test-project", + client_options={"api_endpoint": "none"}, + ) + self.assertTrue(connection.database is None) + + +def exit_ctx_func(self, exc_type, exc_value, traceback): + """Context __exit__ method mock.""" + pass + + +class _Client(object): + def __init__(self, project="project_id"): + self.project = project + self.project_name = "projects/" + self.project + + def instance(self, instance_id="instance_id"): + return _Instance(name=instance_id, client=self) + + +class _Instance(object): + def __init__(self, name="instance_id", client=None): + self.name = name + self._client = client + + def database( + self, + database_id="database_id", + 
pool=None, + database_dialect=DatabaseDialect.GOOGLE_STANDARD_SQL, + database_role=None, + logger=None, + ): + return _Database(database_id, pool, database_dialect, database_role, logger) + + +class _Database(object): + def __init__( + self, + database_id="database_id", + pool=None, + database_dialect=DatabaseDialect.GOOGLE_STANDARD_SQL, + database_role=None, + logger=None, + ): + self.name = database_id + self.pool = pool + self.database_dialect = database_dialect + self.database_role = database_role + self.logger = logger diff --git a/tests/unit/spanner_dbapi/test_cursor.py b/tests/unit/spanner_dbapi/test_cursor.py new file mode 100644 index 0000000000..b96e8c1444 --- /dev/null +++ b/tests/unit/spanner_dbapi/test_cursor.py @@ -0,0 +1,1248 @@ +# Copyright 2020 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Cursor() class unit tests.""" +from unittest import mock +import sys +import unittest + +from google.auth.credentials import AnonymousCredentials +from google.rpc.code_pb2 import ABORTED + +from google.cloud.spanner_dbapi.parsed_statement import ( + ParsedStatement, + StatementType, + Statement, +) +from google.api_core.exceptions import Aborted +from google.cloud.spanner_dbapi.connection import connect + + +class TestCursor(unittest.TestCase): + INSTANCE = "test-instance" + DATABASE = "test-database" + + def _get_target_class(self): + from google.cloud.spanner_dbapi import Cursor + + return Cursor + + def _make_one(self, *args, **kwargs): + return self._get_target_class()(*args, **kwargs) + + def _make_connection(self, *args, **kwargs): + from google.cloud.spanner_dbapi import Connection + + return Connection(*args, **kwargs) + + def _transaction_mock(self, mock_response=[]): + from google.rpc.code_pb2 import OK + + transaction = mock.Mock() + transaction.batch_update = mock.Mock( + return_value=[mock.Mock(code=OK), mock_response] + ) + return transaction + + def test_property_connection(self): + connection = self._make_connection(self.INSTANCE, self.DATABASE) + cursor = self._make_one(connection) + self.assertEqual(cursor.connection, connection) + + def test_property_description(self): + from google.cloud.spanner_dbapi._helpers import ColumnInfo + + connection = self._make_connection(self.INSTANCE, self.DATABASE) + cursor = self._make_one(connection) + + self.assertIsNone(cursor.description) + cursor._result_set = res_set = mock.MagicMock() + res_set.metadata.row_type.fields = [mock.MagicMock()] + self.assertIsNotNone(cursor.description) + self.assertIsInstance(cursor.description[0], ColumnInfo) + + def test_property_rowcount(self): + connection = self._make_connection(self.INSTANCE, self.DATABASE) + cursor = self._make_one(connection) + + self.assertEqual(cursor.rowcount, None) + + def test_callproc(self): + from google.cloud.spanner_dbapi.exceptions import InterfaceError + + connection = self._make_connection(self.INSTANCE, self.DATABASE) + cursor = self._make_one(connection) + cursor._is_closed = True + with self.assertRaises(InterfaceError): + 
cursor.callproc(procname=None) + + @mock.patch("google.cloud.spanner_v1.Client") + def test_close(self, mock_client): + from google.cloud.spanner_dbapi import connect, InterfaceError + + connection = connect(self.INSTANCE, self.DATABASE) + + cursor = connection.cursor() + self.assertFalse(cursor.is_closed) + + cursor.close() + + self.assertTrue(cursor.is_closed) + + with self.assertRaises(InterfaceError): + cursor.execute("SELECT * FROM database") + + def test_do_execute_update(self): + from google.cloud.spanner_v1 import ResultSetStats + + connection = self._make_connection(self.INSTANCE, self.DATABASE) + cursor = self._make_one(connection) + transaction = mock.MagicMock() + result_set = mock.MagicMock() + result_set.stats = ResultSetStats(row_count_exact=1234) + + transaction.execute_sql.return_value = result_set + cursor._do_execute_update_in_autocommit( + transaction=transaction, + sql="SELECT * WHERE true", + params={}, + ) + + self.assertEqual(cursor._result_set, result_set) + self.assertEqual(cursor.rowcount, 1234) + + def test_do_batch_update(self): + from google.cloud.spanner_dbapi import connect + from google.cloud.spanner_v1.param_types import INT64 + from google.cloud.spanner_v1.types.spanner import Session + + sql = "DELETE FROM table WHERE col1 = %s" + + connection = connect( + "test-instance", + "test-database", + project="test-project", + credentials=AnonymousCredentials(), + client_options={"api_endpoint": "none"}, + ) + + connection.autocommit = True + transaction = self._transaction_mock(mock_response=[1, 1, 1]) + cursor = connection.cursor() + + with mock.patch( + "google.cloud.spanner_v1.services.spanner.client.SpannerClient.create_session", + return_value=Session(), + ): + with mock.patch( + "google.cloud.spanner_v1.session.Session.transaction", + return_value=transaction, + ): + cursor.executemany(sql, [(1,), (2,), (3,)]) + + transaction.batch_update.assert_called_once_with( + [ + ("DELETE FROM table WHERE col1 = @a0", {"a0": 1}, {"a0": INT64}), + ("DELETE FROM table WHERE col1 = @a0", {"a0": 2}, {"a0": INT64}), + ("DELETE FROM table WHERE col1 = @a0", {"a0": 3}, {"a0": INT64}), + ], + last_statement=True, + ) + self.assertEqual(cursor._row_count, 3) + + def test_execute_programming_error(self): + from google.cloud.spanner_dbapi.exceptions import ProgrammingError + + connection = self._make_connection(self.INSTANCE, self.DATABASE) + cursor = self._make_one(connection) + cursor.connection = None + with self.assertRaises(ProgrammingError): + cursor.execute(sql="") + + def test_execute_attribute_error(self): + connection = self._make_connection(self.INSTANCE, self.DATABASE) + cursor = self._make_one(connection) + + with self.assertRaises(AttributeError): + cursor.execute(sql="SELECT 1") + + def test_execute_database_error(self): + connection = self._make_connection(self.INSTANCE) + cursor = self._make_one(connection) + + with self.assertRaises(ValueError): + cursor.execute(sql="SELECT 1") + + def test_execute_autocommit_off(self): + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + cursor.connection._autocommit = False + cursor.connection.transaction_checkout = mock.MagicMock(autospec=True) + + cursor.execute("sql") + self.assertIsInstance(cursor._result_set, mock.MagicMock) + + def test_execute_insert_statement_autocommit_off(self): + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + cursor.connection._autocommit = False + 
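# stub out transaction_checkout() so that no real Spanner API call is made +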
cursor.connection.transaction_checkout = mock.MagicMock(autospec=True) + + sql = "INSERT INTO django_migrations (app, name, applied) VALUES (%s, %s, %s)" + with mock.patch( + "google.cloud.spanner_dbapi.parse_utils.classify_statement", + return_value=ParsedStatement(StatementType.UPDATE, Statement(sql)), + ): + with mock.patch( + "google.cloud.spanner_dbapi.connection.Connection.run_statement", + return_value=(mock.MagicMock()), + ): + cursor.execute(sql) + self.assertIsInstance(cursor._result_set, mock.MagicMock) + + def test_execute_statement(self): + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + + sql = "sql" + with mock.patch( + "google.cloud.spanner_dbapi.parse_utils.classify_statement", + side_effect=[ + ParsedStatement(StatementType.DDL, Statement(sql)), + ParsedStatement(StatementType.UPDATE, Statement(sql)), + ], + ) as mockclassify_statement: + with self.assertRaises(ValueError): + cursor.execute(sql=sql) + mockclassify_statement.assert_called_with(sql) + self.assertEqual(mockclassify_statement.call_count, 2) + self.assertEqual(cursor.connection._ddl_statements, []) + + with mock.patch( + "google.cloud.spanner_dbapi.parse_utils.classify_statement", + return_value=ParsedStatement(StatementType.DDL, Statement(sql)), + ) as mockclassify_statement: + sql = "sql" + cursor.execute(sql=sql) + mockclassify_statement.assert_called_with(sql) + self.assertEqual(mockclassify_statement.call_count, 2) + self.assertEqual(cursor.connection._ddl_statements, [sql]) + + with mock.patch( + "google.cloud.spanner_dbapi.parse_utils.classify_statement", + return_value=ParsedStatement(StatementType.QUERY, Statement(sql)), + ): + with mock.patch( + "google.cloud.spanner_dbapi.cursor.Cursor._handle_DQL", + return_value=ParsedStatement(StatementType.QUERY, Statement(sql)), + ) as mock_handle_ddl: + connection.autocommit = True + sql = "sql" + cursor.execute(sql=sql) + mock_handle_ddl.assert_called_once_with(sql, None) + + with mock.patch( + "google.cloud.spanner_dbapi.parse_utils.classify_statement", + return_value=ParsedStatement(StatementType.UPDATE, Statement(sql)), + ): + cursor.connection._database = mock_db = mock.MagicMock() + mock_db.run_in_transaction = mock_run_in = mock.MagicMock() + cursor.execute(sql="sql") + mock_run_in.assert_called_once_with( + cursor._do_execute_update_in_autocommit, "sql", None + ) + + def test_execute_statement_with_cursor_not_in_retry_mode(self): + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + sql = "sql" + transaction_helper_mock = cursor.transaction_helper = mock.Mock() + + with mock.patch( + "google.cloud.spanner_dbapi.parse_utils.classify_statement", + return_value=ParsedStatement(StatementType.QUERY, Statement(sql)), + ): + cursor.execute(sql=sql) + + transaction_helper_mock.add_execute_statement_for_retry.assert_called_once() + transaction_helper_mock.retry_transaction.assert_not_called() + + def test_executemany_query_statement_with_cursor_not_in_retry_mode(self): + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + sql = "sql" + transaction_helper_mock = cursor.transaction_helper = mock.Mock() + + with mock.patch( + "google.cloud.spanner_dbapi.parse_utils.classify_statement", + return_value=ParsedStatement(StatementType.QUERY, Statement(sql)), + ): + cursor.executemany(operation=sql, seq_of_params=[]) + + transaction_helper_mock.add_execute_statement_for_retry.assert_called_once() + 
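# the statement is recorded for a possible retry, but no retry runs on success +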
transaction_helper_mock.retry_transaction.assert_not_called() + + def test_executemany_dml_statement_with_cursor_not_in_retry_mode(self): + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + sql = "sql" + transaction_helper_mock = cursor.transaction_helper = mock.Mock() + + with mock.patch( + "google.cloud.spanner_dbapi.parse_utils.classify_statement", + return_value=ParsedStatement(StatementType.INSERT, Statement(sql)), + ): + cursor.executemany(operation=sql, seq_of_params=[]) + + transaction_helper_mock.add_execute_statement_for_retry.assert_called_once() + transaction_helper_mock.retry_transaction.assert_not_called() + + def test_execute_statement_with_cursor_in_retry_mode(self): + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + cursor._in_retry_mode = True + sql = "sql" + transaction_helper_mock = cursor.transaction_helper = mock.Mock() + + with mock.patch( + "google.cloud.spanner_dbapi.parse_utils.classify_statement", + return_value=ParsedStatement(StatementType.QUERY, Statement(sql)), + ): + cursor.execute(sql=sql) + + transaction_helper_mock.add_execute_statement_for_retry.assert_not_called() + transaction_helper_mock.retry_transaction.assert_not_called() + + def test_executemany_statement_with_cursor_in_retry_mode(self): + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + cursor._in_retry_mode = True + sql = "sql" + transaction_helper_mock = cursor.transaction_helper = mock.Mock() + + with mock.patch( + "google.cloud.spanner_dbapi.parse_utils.classify_statement", + return_value=ParsedStatement(StatementType.QUERY, Statement(sql)), + ): + cursor.executemany(operation=sql, seq_of_params=[]) + + transaction_helper_mock.add_execute_statement_for_retry.assert_not_called() + transaction_helper_mock.retry_transaction.assert_not_called() + + @mock.patch("google.cloud.spanner_dbapi.cursor.PeekIterator") + def test_execute_statement_aborted_with_cursor_not_in_retry_mode( + self, mock_peek_iterator + ): + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + sql = "sql" + transaction_helper_mock = cursor.transaction_helper = mock.Mock() + + with mock.patch( + "google.cloud.spanner_dbapi.parse_utils.classify_statement", + return_value=ParsedStatement(StatementType.QUERY, Statement(sql)), + ): + connection.run_statement = mock.Mock( + side_effect=(Aborted("Aborted"), None), + ) + cursor.execute(sql=sql) + + transaction_helper_mock.add_execute_statement_for_retry.assert_called_once() + transaction_helper_mock.retry_transaction.assert_called_once() + + def test_execute_statement_aborted_with_cursor_in_retry_mode(self): + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + cursor._in_retry_mode = True + sql = "sql" + transaction_helper_mock = cursor.transaction_helper = mock.Mock() + + with mock.patch( + "google.cloud.spanner_dbapi.parse_utils.classify_statement", + return_value=ParsedStatement(StatementType.QUERY, Statement(sql)), + ): + connection.run_statement = mock.Mock( + side_effect=Aborted("Aborted"), + ) + with self.assertRaises(Aborted): + cursor.execute(sql=sql) + + transaction_helper_mock.add_execute_statement_for_retry.assert_not_called() + transaction_helper_mock.retry_transaction.assert_not_called() + + def test_execute_statement_exception_with_cursor_not_in_retry_mode(self): + connection = 
self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + sql = "sql" + transaction_helper_mock = cursor.transaction_helper = mock.Mock() + + with mock.patch( + "google.cloud.spanner_dbapi.parse_utils.classify_statement", + return_value=ParsedStatement(StatementType.QUERY, Statement(sql)), + ): + connection.run_statement = mock.Mock( + side_effect=(Exception("Exception"), None), + ) + with self.assertRaises(Exception): + cursor.execute(sql=sql) + + transaction_helper_mock.add_execute_statement_for_retry.assert_called_once() + transaction_helper_mock.retry_transaction.assert_not_called() + + def test_execute_integrity_error(self): + from google.api_core import exceptions + from google.cloud.spanner_dbapi.exceptions import IntegrityError + + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + + with mock.patch( + "google.cloud.spanner_dbapi.parse_utils.classify_statement", + side_effect=exceptions.AlreadyExists("message"), + ): + with self.assertRaises(IntegrityError): + cursor.execute(sql="sql") + + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + with mock.patch( + "google.cloud.spanner_dbapi.parse_utils.classify_statement", + side_effect=exceptions.FailedPrecondition("message"), + ): + with self.assertRaises(IntegrityError): + cursor.execute(sql="sql") + + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + with mock.patch( + "google.cloud.spanner_dbapi.parse_utils.classify_statement", + side_effect=exceptions.OutOfRange("message"), + ): + with self.assertRaises(IntegrityError): + cursor.execute("sql") + + def test_execute_invalid_argument(self): + from google.api_core import exceptions + from google.cloud.spanner_dbapi.exceptions import ProgrammingError + + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + + with mock.patch( + "google.cloud.spanner_dbapi.parse_utils.classify_statement", + side_effect=exceptions.InvalidArgument("message"), + ): + with self.assertRaises(ProgrammingError): + cursor.execute(sql="sql") + + def test_execute_internal_server_error(self): + from google.api_core import exceptions + from google.cloud.spanner_dbapi.exceptions import OperationalError + + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + + with mock.patch( + "google.cloud.spanner_dbapi.parse_utils.classify_statement", + side_effect=exceptions.InternalServerError("message"), + ): + with self.assertRaises(OperationalError): + cursor.execute(sql="sql") + + @mock.patch("google.cloud.spanner_v1.Client") + def test_executemany_on_closed_cursor(self, mock_client): + from google.cloud.spanner_dbapi import InterfaceError + from google.cloud.spanner_dbapi import connect + + connection = connect("test-instance", "test-database") + + cursor = connection.cursor() + cursor.close() + + with self.assertRaises(InterfaceError): + cursor.executemany("""SELECT * FROM table1 WHERE "col1" = @a1""", ()) + + @mock.patch("google.cloud.spanner_v1.Client") + def test_executemany_DLL(self, mock_client): + from google.cloud.spanner_dbapi import connect, ProgrammingError + + connection = connect("test-instance", "test-database") + + cursor = connection.cursor() + + with self.assertRaises(ProgrammingError): + cursor.executemany("""DROP DATABASE database_name""", ()) + + def test_executemany_client_statement(self): 
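+ """Client-side statements such as COMMIT must be rejected by executemany()."""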
+ from google.cloud.spanner_dbapi import connect, ProgrammingError + + connection = connect( + "test-instance", + "test-database", + project="test-project", + credentials=AnonymousCredentials(), + client_options={"api_endpoint": "none"}, + ) + + cursor = connection.cursor() + + with self.assertRaises(ProgrammingError) as error: + cursor.executemany("""COMMIT TRANSACTION""", ()) + self.assertEqual( + str(error.exception), + "Executing the following operation: COMMIT TRANSACTION, with executemany() method is not allowed.", + ) + + @mock.patch("google.cloud.spanner_v1.Client") + def test_executemany(self, mock_client): + from google.cloud.spanner_dbapi import connect + + operation = """SELECT * FROM table1 WHERE "col1" = @a1""" + params_seq = ((1,), (2,)) + + connection = connect( + "test-instance", + "test-database", + project="test-project", + credentials=AnonymousCredentials(), + client_options={"api_endpoint": "none"}, + ) + + cursor = connection.cursor() + cursor._result_set = [1, 2, 3] + cursor._itr = iter([1, 2, 3]) + + with mock.patch( + "google.cloud.spanner_dbapi.cursor.Cursor._execute" + ) as execute_mock: + cursor.executemany(operation, params_seq) + + execute_mock.assert_has_calls( + (mock.call(operation, (1,), True), mock.call(operation, (2,), True)) + ) + + def test_executemany_delete_batch_autocommit(self): + from google.cloud.spanner_dbapi import connect + from google.cloud.spanner_v1.param_types import INT64 + from google.cloud.spanner_v1.types.spanner import Session + + sql = "DELETE FROM table WHERE col1 = %s" + + connection = connect( + "test-instance", + "test-database", + project="test-project", + credentials=AnonymousCredentials(), + client_options={"api_endpoint": "none"}, + ) + + connection.autocommit = True + transaction = self._transaction_mock() + cursor = connection.cursor() + + with mock.patch( + "google.cloud.spanner_v1.services.spanner.client.SpannerClient.create_session", + return_value=Session(), + ): + with mock.patch( + "google.cloud.spanner_v1.session.Session.transaction", + return_value=transaction, + ): + cursor.executemany(sql, [(1,), (2,), (3,)]) + + transaction.batch_update.assert_called_once_with( + [ + ("DELETE FROM table WHERE col1 = @a0", {"a0": 1}, {"a0": INT64}), + ("DELETE FROM table WHERE col1 = @a0", {"a0": 2}, {"a0": INT64}), + ("DELETE FROM table WHERE col1 = @a0", {"a0": 3}, {"a0": INT64}), + ], + last_statement=True, + ) + + def test_executemany_update_batch_autocommit(self): + from google.cloud.spanner_dbapi import connect + from google.cloud.spanner_v1.param_types import INT64, STRING + from google.cloud.spanner_v1.types.spanner import Session + + sql = "UPDATE table SET col1 = %s WHERE col2 = %s" + + connection = connect( + "test-instance", + "test-database", + project="test-project", + credentials=AnonymousCredentials(), + client_options={"api_endpoint": "none"}, + ) + + connection.autocommit = True + transaction = self._transaction_mock() + cursor = connection.cursor() + + with mock.patch( + "google.cloud.spanner_v1.services.spanner.client.SpannerClient.create_session", + return_value=Session(), + ): + with mock.patch( + "google.cloud.spanner_v1.session.Session.transaction", + return_value=transaction, + ): + cursor.executemany(sql, [(1, "a"), (2, "b"), (3, "c")]) + + transaction.batch_update.assert_called_once_with( + [ + ( + "UPDATE table SET col1 = @a0 WHERE col2 = @a1", + {"a0": 1, "a1": "a"}, + {"a0": INT64, "a1": STRING}, + ), + ( + "UPDATE table SET col1 = @a0 WHERE col2 = @a1", + {"a0": 2, "a1": "b"}, + {"a0": INT64, "a1": 
STRING}, + ), + ( + "UPDATE table SET col1 = @a0 WHERE col2 = @a1", + {"a0": 3, "a1": "c"}, + {"a0": INT64, "a1": STRING}, + ), + ], + last_statement=True, + ) + + def test_executemany_insert_batch_non_autocommit(self): + from google.cloud.spanner_dbapi import connect + from google.cloud.spanner_v1.param_types import INT64 + from google.cloud.spanner_v1.types.spanner import Session + + sql = """INSERT INTO table (col1, "col2", `col3`, `"col4"`) VALUES (%s, %s, %s, %s)""" + + connection = connect( + "test-instance", + "test-database", + project="test-project", + credentials=AnonymousCredentials(), + client_options={"api_endpoint": "none"}, + ) + + transaction = self._transaction_mock() + + cursor = connection.cursor() + with mock.patch( + "google.cloud.spanner_v1.services.spanner.client.SpannerClient.create_session", + return_value=Session(), + ): + with mock.patch( + "google.cloud.spanner_v1.session.Session.transaction", + return_value=transaction, + ): + cursor.executemany(sql, [(1, 2, 3, 4), (5, 6, 7, 8)]) + + transaction.batch_update.assert_called_once_with( + [ + ( + """INSERT INTO table (col1, "col2", `col3`, `"col4"`) VALUES (@a0, @a1, @a2, @a3)""", + {"a0": 1, "a1": 2, "a2": 3, "a3": 4}, + {"a0": INT64, "a1": INT64, "a2": INT64, "a3": INT64}, + ), + ( + """INSERT INTO table (col1, "col2", `col3`, `"col4"`) VALUES (@a0, @a1, @a2, @a3)""", + {"a0": 5, "a1": 6, "a2": 7, "a3": 8}, + {"a0": INT64, "a1": INT64, "a2": INT64, "a3": INT64}, + ), + ] + ) + + def test_executemany_insert_batch_autocommit(self): + from google.cloud.spanner_dbapi import connect + from google.cloud.spanner_v1.param_types import INT64 + from google.cloud.spanner_v1.types.spanner import Session + + sql = """INSERT INTO table (col1, "col2", `col3`, `"col4"`) VALUES (%s, %s, %s, %s)""" + + connection = connect( + "test-instance", + "test-database", + project="test-project", + credentials=AnonymousCredentials(), + client_options={"api_endpoint": "none"}, + ) + + connection.autocommit = True + + transaction = self._transaction_mock() + transaction.commit = mock.Mock() + + cursor = connection.cursor() + with mock.patch( + "google.cloud.spanner_v1.services.spanner.client.SpannerClient.create_session", + return_value=Session(), + ): + with mock.patch( + "google.cloud.spanner_v1.session.Session.transaction", + return_value=transaction, + ): + cursor.executemany(sql, [(1, 2, 3, 4), (5, 6, 7, 8)]) + + transaction.batch_update.assert_called_once_with( + [ + ( + """INSERT INTO table (col1, "col2", `col3`, `"col4"`) VALUES (@a0, @a1, @a2, @a3)""", + {"a0": 1, "a1": 2, "a2": 3, "a3": 4}, + {"a0": INT64, "a1": INT64, "a2": INT64, "a3": INT64}, + ), + ( + """INSERT INTO table (col1, "col2", `col3`, `"col4"`) VALUES (@a0, @a1, @a2, @a3)""", + {"a0": 5, "a1": 6, "a2": 7, "a3": 8}, + {"a0": INT64, "a1": INT64, "a2": INT64, "a3": INT64}, + ), + ], + last_statement=True, + ) + transaction.commit.assert_called_once() + + def test_executemany_insert_batch_failed(self): + from google.cloud.spanner_dbapi import connect + from google.cloud.spanner_dbapi.exceptions import OperationalError + from google.cloud.spanner_v1.types.spanner import Session + from google.rpc.code_pb2 import UNKNOWN + + sql = """INSERT INTO table (col1, "col2", `col3`, `"col4"`) VALUES (%s, %s, %s, %s)""" + err_details = "Details here" + + connection = connect( + "test-instance", + "test-database", + project="test-project", + credentials=AnonymousCredentials(), + client_options={"api_endpoint": "none"}, + ) + + connection.autocommit = True + cursor = connection.cursor() 
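+ # simulate a failed batch: batch_update reports a non-OK (UNKNOWN) status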
+ + transaction = mock.Mock() + transaction.batch_update = mock.Mock( + return_value=(mock.Mock(code=UNKNOWN, message=err_details), []) + ) + + with mock.patch( + "google.cloud.spanner_v1.services.spanner.client.SpannerClient.create_session", + return_value=Session(), + ): + with mock.patch( + "google.cloud.spanner_v1.session.Session.transaction", + return_value=transaction, + ): + with self.assertRaisesRegex(OperationalError, err_details): + cursor.executemany(sql, [(1, 2, 3, 4), (5, 6, 7, 8)]) + + def test_executemany_insert_batch_aborted(self): + from google.cloud.spanner_dbapi import connect + from google.cloud.spanner_v1.param_types import INT64 + + sql = """INSERT INTO table (col1, "col2", `col3`, `"col4"`) VALUES (%s, %s, %s, %s)""" + args = [(1, 2, 3, 4), (5, 6, 7, 8)] + err_details = "Aborted details here" + + connection = connect( + "test-instance", + "test-database", + project="test-project", + credentials=AnonymousCredentials(), + client_options={"api_endpoint": "none"}, + ) + + transaction1 = mock.Mock() + transaction1.batch_update = mock.Mock( + side_effect=[(mock.Mock(code=ABORTED, message=err_details), [])] + ) + + transaction2 = self._transaction_mock() + + connection.transaction_checkout = mock.Mock( + side_effect=[transaction1, transaction2] + ) + + cursor = connection.cursor() + cursor.executemany(sql, args) + + transaction1.batch_update.assert_called_with( + [ + ( + """INSERT INTO table (col1, "col2", `col3`, `"col4"`) VALUES (@a0, @a1, @a2, @a3)""", + {"a0": 1, "a1": 2, "a2": 3, "a3": 4}, + {"a0": INT64, "a1": INT64, "a2": INT64, "a3": INT64}, + ), + ( + """INSERT INTO table (col1, "col2", `col3`, `"col4"`) VALUES (@a0, @a1, @a2, @a3)""", + {"a0": 5, "a1": 6, "a2": 7, "a3": 8}, + {"a0": INT64, "a1": INT64, "a2": INT64, "a3": INT64}, + ), + ] + ) + transaction2.batch_update.assert_called_with( + [ + ( + """INSERT INTO table (col1, "col2", `col3`, `"col4"`) VALUES (@a0, @a1, @a2, @a3)""", + {"a0": 1, "a1": 2, "a2": 3, "a3": 4}, + {"a0": INT64, "a1": INT64, "a2": INT64, "a3": INT64}, + ), + ( + """INSERT INTO table (col1, "col2", `col3`, `"col4"`) VALUES (@a0, @a1, @a2, @a3)""", + {"a0": 5, "a1": 6, "a2": 7, "a3": 8}, + {"a0": INT64, "a1": INT64, "a2": INT64, "a3": INT64}, + ), + ] + ) + + @mock.patch("google.cloud.spanner_v1.Client") + def test_executemany_database_error(self, mock_client): + from google.cloud.spanner_dbapi import connect + + connection = connect("test-instance") + cursor = connection.cursor() + + with self.assertRaises(ValueError): + cursor.executemany("""SELECT * FROM table1 WHERE "col1" = @a1""", ()) + + @unittest.skipIf( + sys.version_info[0] < 3, "Python 2 has an outdated iterator definition" + ) + def test_fetchone(self): + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + cursor._parsed_statement = mock.Mock() + lst = [1, 2, 3] + cursor._itr = iter(lst) + for i in range(len(lst)): + self.assertEqual(cursor.fetchone(), lst[i]) + self.assertIsNone(cursor.fetchone()) + + @unittest.skipIf( + sys.version_info[0] < 3, "Python 2 has an outdated iterator definition" + ) + def test_fetchone_w_autocommit(self): + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + connection.autocommit = True + cursor = self._make_one(connection) + lst = [1, 2, 3] + cursor._itr = iter(lst) + for i in range(len(lst)): + self.assertEqual(cursor.fetchone(), lst[i]) + self.assertIsNone(cursor.fetchone()) + + def test_fetchmany(self): + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) 
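+ # with no size argument, fetchmany() should return exactly one row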
+ cursor = self._make_one(connection) + cursor._parsed_statement = mock.Mock() + lst = [(1,), (2,), (3,)] + cursor._itr = iter(lst) + + self.assertEqual(cursor.fetchmany(), [lst[0]]) + + result = cursor.fetchmany(len(lst)) + self.assertEqual(result, lst[1:]) + + def test_fetchmany_w_autocommit(self): + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + connection.autocommit = True + cursor = self._make_one(connection) + lst = [(1,), (2,), (3,)] + cursor._itr = iter(lst) + + self.assertEqual(cursor.fetchmany(), [lst[0]]) + + result = cursor.fetchmany(len(lst)) + self.assertEqual(result, lst[1:]) + + def test_fetchall(self): + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + cursor._parsed_statement = mock.Mock() + transaction_helper_mock = cursor.transaction_helper = mock.Mock() + + lst = [(1,), (2,), (3,)] + cursor._itr = iter(lst) + self.assertEqual(cursor.fetchall(), lst) + + transaction_helper_mock.add_fetch_statement_for_retry.assert_called_once() + transaction_helper_mock.retry_transaction.assert_not_called() + + def test_fetchall_w_autocommit(self): + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + connection.autocommit = True + cursor = self._make_one(connection) + lst = [(1,), (2,), (3,)] + cursor._itr = iter(lst) + self.assertEqual(cursor.fetchall(), lst) + + def test_nextset(self): + from google.cloud.spanner_dbapi import exceptions + + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + cursor.close() + with self.assertRaises(exceptions.InterfaceError): + cursor.nextset() + + def test_setinputsizes(self): + from google.cloud.spanner_dbapi import exceptions + + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + cursor.close() + with self.assertRaises(exceptions.InterfaceError): + cursor.setinputsizes(sizes=None) + + def test_setoutputsize(self): + from google.cloud.spanner_dbapi import exceptions + + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + cursor = self._make_one(connection) + cursor.close() + with self.assertRaises(exceptions.InterfaceError): + cursor.setoutputsize(size=None) + + @mock.patch("google.cloud.spanner_dbapi.cursor.PeekIterator") + def test_handle_dql(self, MockedPeekIterator): + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + connection.database.snapshot.return_value.__enter__.return_value = ( + mock_snapshot + ) = mock.MagicMock() + cursor = self._make_one(connection) + + _result_set = mock.Mock() + mock_snapshot.execute_sql.return_value = _result_set + cursor._handle_DQL("sql", params=None) + self.assertEqual(cursor._result_set, _result_set) + self.assertEqual(cursor._itr, MockedPeekIterator()) + self.assertEqual(cursor._row_count, None) + + @mock.patch("google.cloud.spanner_dbapi.cursor.PeekIterator") + def test_handle_dql_priority(self, MockedPeekIterator): + from google.cloud.spanner_v1 import RequestOptions + + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + connection.database.snapshot.return_value.__enter__.return_value = ( + mock_snapshot + ) = mock.MagicMock() + connection.request_priority = 1 + + cursor = self._make_one(connection) + + sql = "sql" + _result_set = mock.Mock() + mock_snapshot.execute_sql.return_value = _result_set + cursor._handle_DQL(sql, params=None) + self.assertEqual(cursor._result_set, _result_set) + self.assertEqual(cursor._itr, MockedPeekIterator()) + 
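# the row count of a streamed query result is unknown +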
self.assertEqual(cursor._row_count, None) + mock_snapshot.execute_sql.assert_called_with( + sql, None, None, request_options=RequestOptions(priority=1) + ) + + def test_handle_dql_database_error(self): + connection = self._make_connection(self.INSTANCE) + cursor = self._make_one(connection) + + with self.assertRaises(ValueError): + cursor._handle_DQL("sql", params=None) + + def test_context(self): + connection = self._make_connection(self.INSTANCE, self.DATABASE) + cursor = self._make_one(connection) + with cursor as c: + self.assertEqual(c, cursor) + + self.assertTrue(c.is_closed) + + def test_next(self): + from google.cloud.spanner_dbapi import exceptions + + connection = self._make_connection(self.INSTANCE, self.DATABASE) + cursor = self._make_one(connection) + with self.assertRaises(exceptions.ProgrammingError): + cursor.__next__() + + lst = [(1,), (2,), (3,)] + cursor._itr = iter(lst) + i = 0 + for c in cursor._itr: + self.assertEqual(c, lst[i]) + i += 1 + + def test_iter(self): + from google.cloud.spanner_dbapi import exceptions + + connection = self._make_connection(self.INSTANCE, self.DATABASE) + cursor = self._make_one(connection) + with self.assertRaises(exceptions.ProgrammingError): + _ = iter(cursor) + + iterator = iter([(1,), (2,), (3,)]) + cursor._itr = iterator + self.assertEqual(iter(cursor), iterator) + + def test_list_tables(self): + from google.cloud.spanner_dbapi import _helpers + from google.cloud.spanner_v1 import param_types + + connection = self._make_connection(self.INSTANCE, self.DATABASE) + cursor = self._make_one(connection) + + table_list = ["table1", "table2", "table3"] + with mock.patch( + "google.cloud.spanner_dbapi.cursor.Cursor.run_sql_in_snapshot", + return_value=table_list, + ) as mock_run_sql: + cursor.list_tables() + mock_run_sql.assert_called_once_with( + sql=_helpers.SQL_LIST_TABLES_AND_VIEWS, + params={"table_schema": ""}, + param_types={"table_schema": param_types.STRING}, + ) + + def test_run_sql_in_snapshot(self): + connection = self._make_connection(self.INSTANCE, mock.MagicMock()) + connection.database.snapshot.return_value.__enter__.return_value = ( + mock_snapshot + ) = mock.MagicMock() + cursor = self._make_one(connection) + + results = 1, 2, 3 + mock_snapshot.execute_sql.return_value = results + self.assertEqual(cursor.run_sql_in_snapshot("sql"), list(results)) + + def test_run_sql_in_snapshot_database_error(self): + connection = self._make_connection(self.INSTANCE) + cursor = self._make_one(connection) + + with self.assertRaises(ValueError): + cursor.run_sql_in_snapshot("sql") + + def test_get_table_column_schema(self): + from google.cloud.spanner_dbapi.cursor import ColumnDetails + from google.cloud.spanner_dbapi import _helpers + from google.cloud.spanner_v1 import param_types + + connection = self._make_connection(self.INSTANCE, self.DATABASE) + cursor = self._make_one(connection) + + column_name = "column1" + is_nullable = "YES" + spanner_type = "spanner_type" + rows = [(column_name, is_nullable, spanner_type)] + expected = {column_name: ColumnDetails(null_ok=True, spanner_type=spanner_type)} + with mock.patch( + "google.cloud.spanner_dbapi.cursor.Cursor.run_sql_in_snapshot", + return_value=rows, + ) as mock_run_sql: + table_name = "table1" + result = cursor.get_table_column_schema(table_name=table_name) + mock_run_sql.assert_called_once_with( + sql=_helpers.SQL_GET_TABLE_COLUMN_SCHEMA, + params={"schema_name": "", "table_name": table_name}, + param_types={ + "schema_name": param_types.STRING, + "table_name": param_types.STRING, + }, + 
) + self.assertEqual(result, expected) + + @mock.patch("google.cloud.spanner_v1.Client") + def test_peek_iterator_aborted(self, mock_client): + """ + Check that the statement is retried if an Aborted exception is + raised while a PeekIterator streams the first element. + """ + from google.api_core.exceptions import Aborted + from google.cloud.spanner_dbapi.connection import connect + + connection = connect("test-instance", "test-database") + cursor = connection.cursor() + with mock.patch( + "google.cloud.spanner_dbapi.utils.PeekIterator.__init__", + side_effect=(Aborted("Aborted"), None), + ): + with mock.patch( + "google.cloud.spanner_dbapi.transaction_helper.TransactionRetryHelper.retry_transaction" + ) as retry_mock: + with mock.patch( + "google.cloud.spanner_dbapi.connection.Connection.run_statement", + return_value=(1, 2, 3), + ): + cursor.execute("SELECT * FROM table_name") + + retry_mock.assert_called_with() + + @mock.patch("google.cloud.spanner_v1.Client") + def test_fetchone_aborted_with_cursor_not_in_retry_mode(self, mock_client): + connection = connect("test-instance", "test-database") + cursor = connection.cursor() + transaction_helper_mock = cursor.transaction_helper = mock.Mock() + + with mock.patch( + "google.cloud.spanner_dbapi.cursor.Cursor.__next__", + side_effect=(Aborted("Aborted"), iter([])), + ): + cursor.fetchone() + + transaction_helper_mock.add_fetch_statement_for_retry.assert_called_once() + transaction_helper_mock.retry_transaction.assert_called_once() + + @mock.patch("google.cloud.spanner_v1.Client") + def test_fetchone_aborted_with_cursor_in_retry_mode(self, mock_client): + connection = connect("test-instance", "test-database") + cursor = connection.cursor() + cursor._in_retry_mode = True + transaction_helper_mock = cursor.transaction_helper = mock.Mock() + + with mock.patch( + "google.cloud.spanner_dbapi.cursor.Cursor.__next__", + side_effect=(Aborted("Aborted"), iter([])), + ): + cursor.fetchone() + + transaction_helper_mock.add_fetch_statement_for_retry.assert_not_called() + transaction_helper_mock.retry_transaction.assert_not_called() + + @mock.patch("google.cloud.spanner_v1.Client") + def test_fetchall_aborted_with_cursor_not_in_retry_mode(self, mock_client): + connection = connect("test-instance", "test-database") + cursor = connection.cursor() + transaction_helper_mock = cursor.transaction_helper = mock.Mock() + + with mock.patch( + "google.cloud.spanner_dbapi.cursor.Cursor.__iter__", + side_effect=(Aborted("Aborted"), iter([])), + ): + cursor.fetchall() + + transaction_helper_mock.add_fetch_statement_for_retry.assert_called_once() + transaction_helper_mock.retry_transaction.assert_called_once() + + @mock.patch("google.cloud.spanner_v1.Client") + def test_fetchall_aborted_with_cursor_in_retry_mode(self, mock_client): + connection = connect("test-instance", "test-database") + cursor = connection.cursor() + cursor._in_retry_mode = True + transaction_helper_mock = cursor.transaction_helper = mock.Mock() + + with mock.patch( + "google.cloud.spanner_dbapi.cursor.Cursor.__iter__", + side_effect=(Aborted("Aborted"), iter([])), + ): + cursor.fetchall() + + transaction_helper_mock.add_fetch_statement_for_retry.assert_not_called() + transaction_helper_mock.retry_transaction.assert_not_called() + + @mock.patch("google.cloud.spanner_v1.Client") + def test_fetchmany_aborted_with_cursor_not_in_retry_mode(self, mock_client): + connection = connect("test-instance", "test-database") + cursor = connection.cursor() + transaction_helper_mock = cursor.transaction_helper = 
mock.Mock() + + with mock.patch( + "google.cloud.spanner_dbapi.cursor.Cursor.__next__", + side_effect=(Aborted("Aborted"), iter([])), + ): + cursor.fetchmany() + + transaction_helper_mock.add_fetch_statement_for_retry.assert_called_once() + transaction_helper_mock.retry_transaction.assert_called_once() + + @mock.patch("google.cloud.spanner_v1.Client") + def test_fetchmany_aborted_with_cursor_in_retry_mode(self, mock_client): + connection = connect("test-instance", "test-database") + cursor = connection.cursor() + cursor._in_retry_mode = True + transaction_helper_mock = cursor.transaction_helper = mock.Mock() + + with mock.patch( + "google.cloud.spanner_dbapi.cursor.Cursor.__next__", + side_effect=(Aborted("Aborted"), iter([])), + ): + cursor.fetchmany() + + transaction_helper_mock.add_fetch_statement_for_retry.assert_not_called() + transaction_helper_mock.retry_transaction.assert_not_called() + + @mock.patch("google.cloud.spanner_v1.Client") + def test_fetch_exception_with_cursor_not_in_retry_mode(self, mock_client): + connection = connect("test-instance", "test-database") + cursor = connection.cursor() + transaction_helper_mock = cursor.transaction_helper = mock.Mock() + + with mock.patch( + "google.cloud.spanner_dbapi.cursor.Cursor.__iter__", + side_effect=Exception("Exception"), + ): + cursor.fetchall() + + transaction_helper_mock.add_fetch_statement_for_retry.assert_called_once() + transaction_helper_mock.retry_transaction.assert_not_called() + + @mock.patch("google.cloud.spanner_v1.Client") + def test_fetch_exception_with_cursor_in_retry_mode(self, mock_client): + connection = connect("test-instance", "test-database") + cursor = connection.cursor() + cursor._in_retry_mode = True + transaction_helper_mock = cursor.transaction_helper = mock.Mock() + + with mock.patch( + "google.cloud.spanner_dbapi.cursor.Cursor.__next__", + side_effect=Exception("Exception"), + ): + cursor.fetchmany() + + transaction_helper_mock.add_fetch_statement_for_retry.assert_not_called() + transaction_helper_mock.retry_transaction.assert_not_called() + + @mock.patch("google.cloud.spanner_v1.Client") + def test_ddls_with_semicolon(self, mock_client): + """ + Check that one script with several DDL statements separated + with semicolons is split into separate DDL statements. + """ + from google.cloud.spanner_dbapi.connection import connect + + EXP_DDLS = [ + "CREATE TABLE table_name (row_id INT64) PRIMARY KEY ()", + "DROP INDEX index_name", + ( + "CREATE TABLE papers (" + "\n id INT64," + "\n authors ARRAY," + '\n author_list STRING(MAX) AS (ARRAY_TO_STRING(authors, ";")) stored' + ") PRIMARY KEY (id)" + ), + "DROP TABLE table_name", + ] + + connection = connect("test-instance", "test-database") + + cursor = connection.cursor() + cursor.execute( + "CREATE TABLE table_name (row_id INT64) PRIMARY KEY ();" + "DROP INDEX index_name;\n" + "CREATE TABLE papers (" + "\n id INT64," + "\n authors ARRAY," + '\n author_list STRING(MAX) AS (ARRAY_TO_STRING(authors, ";")) stored' + ") PRIMARY KEY (id);" + "DROP TABLE table_name;", + ) + + self.assertEqual(connection._ddl_statements, EXP_DDLS) diff --git a/tests/unit/spanner_dbapi/test_globals.py b/tests/unit/spanner_dbapi/test_globals.py new file mode 100644 index 0000000000..2960862ec3 --- /dev/null +++ b/tests/unit/spanner_dbapi/test_globals.py @@ -0,0 +1,28 @@ +# Copyright 2020 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import unittest + + +class TestDBAPIGlobals(unittest.TestCase): + def test_apilevel(self): + from google.cloud.spanner_dbapi import apilevel + from google.cloud.spanner_dbapi import paramstyle + from google.cloud.spanner_dbapi import threadsafety + + self.assertEqual(apilevel, "2.0", "We implement PEP-0249 version 2.0") + self.assertEqual(paramstyle, "format", "Cloud Spanner uses @param") + self.assertEqual( + threadsafety, 1, "Threads may share module but not connections" + ) diff --git a/tests/unit/spanner_dbapi/test_parse_utils.py b/tests/unit/spanner_dbapi/test_parse_utils.py new file mode 100644 index 0000000000..f63dbb78e4 --- /dev/null +++ b/tests/unit/spanner_dbapi/test_parse_utils.py @@ -0,0 +1,388 @@ +# Copyright 2020 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import sys +import unittest + +from google.cloud.spanner_dbapi.parsed_statement import ( + StatementType, + ParsedStatement, + Statement, + ClientSideStatementType, +) +from google.cloud.spanner_v1 import param_types +from google.cloud.spanner_v1 import JsonObject +from google.cloud.spanner_dbapi.parse_utils import classify_statement + + +class TestParseUtils(unittest.TestCase): + skip_condition = sys.version_info[0] < 3 + skip_message = "Subtests are not supported in Python 2" + + def test_classify_stmt(self): + cases = ( + ("SELECT 1", StatementType.QUERY), + ("SELECT s.SongName FROM Songs AS s", StatementType.QUERY), + ("(SELECT s.SongName FROM Songs AS s)", StatementType.QUERY), + ( + "WITH sq AS (SELECT SchoolID FROM Roster) SELECT * from sq", + StatementType.QUERY, + ), + ( + "GRAPH FinGraph MATCH (n) RETURN LABELS(n) AS label, n.id", + StatementType.QUERY, + ), + ("FROM Produce |> WHERE item != 'bananas'", StatementType.QUERY), + ( + "CREATE TABLE django_content_type (id STRING(64) NOT NULL, name STRING(100) " + "NOT NULL, app_label STRING(100) NOT NULL, model STRING(100) NOT NULL) PRIMARY KEY(id)", + StatementType.DDL, + ), + ( + "CREATE INDEX SongsBySingerAlbumSongNameDesc ON " + "Songs(SingerId, AlbumId, SongName DESC), INTERLEAVE IN Albums", + StatementType.DDL, + ), + ("CREATE INDEX SongsBySongName ON Songs(SongName)", StatementType.DDL), + ( + "CREATE INDEX AlbumsByAlbumTitle2 ON Albums(AlbumTitle) STORING (MarketingBudget)", + StatementType.DDL, + ), + ("CREATE ROLE parent", StatementType.DDL), + ("commit", StatementType.CLIENT_SIDE), + ("begin", StatementType.CLIENT_SIDE), + ("start", StatementType.CLIENT_SIDE), + ("begin isolation level serializable", StatementType.CLIENT_SIDE), + ("start isolation level serializable", StatementType.CLIENT_SIDE), + ("begin 
isolation level repeatable read", StatementType.CLIENT_SIDE), + ("start isolation level repeatable read", StatementType.CLIENT_SIDE), + ("begin transaction", StatementType.CLIENT_SIDE), + ("start transaction", StatementType.CLIENT_SIDE), + ( + "begin transaction isolation level serializable", + StatementType.CLIENT_SIDE, + ), + ( + "start transaction isolation level serializable", + StatementType.CLIENT_SIDE, + ), + ( + "begin transaction isolation level repeatable read", + StatementType.CLIENT_SIDE, + ), + ( + "start transaction isolation level repeatable read", + StatementType.CLIENT_SIDE, + ), + ("rollback", StatementType.CLIENT_SIDE), + (" commit TRANSACTION ", StatementType.CLIENT_SIDE), + (" rollback TRANSACTION ", StatementType.CLIENT_SIDE), + (" SHOW VARIABLE COMMIT_TIMESTAMP ", StatementType.CLIENT_SIDE), + ("SHOW VARIABLE READ_TIMESTAMP", StatementType.CLIENT_SIDE), + ("GRANT SELECT ON TABLE Singers TO ROLE parent", StatementType.DDL), + ("REVOKE SELECT ON TABLE Singers TO ROLE parent", StatementType.DDL), + ("GRANT ROLE parent TO ROLE child", StatementType.DDL), + ("INSERT INTO table (col1) VALUES (1)", StatementType.INSERT), + ("INSERT table (col1) VALUES (1)", StatementType.INSERT), + ("INSERT OR UPDATE table (col1) VALUES (1)", StatementType.INSERT), + ("INSERT OR IGNORE table (col1) VALUES (1)", StatementType.INSERT), + ("UPDATE table SET col1 = 1 WHERE col1 = NULL", StatementType.UPDATE), + ("delete from table WHERE col1 = 2", StatementType.UPDATE), + ("delete from table WHERE col1 in (select 1)", StatementType.UPDATE), + ("dlete from table where col1 = 2", StatementType.UNKNOWN), + ("udpate table set col2=1 where col1 = 2", StatementType.UNKNOWN), + ("begin foo", StatementType.UNKNOWN), + ("begin transaction foo", StatementType.UNKNOWN), + ("begin transaction isolation level", StatementType.UNKNOWN), + ("begin transaction repeatable read", StatementType.UNKNOWN), + ( + "begin transaction isolation level repeatable read foo", + StatementType.UNKNOWN, + ), + ( + "begin transaction isolation level unspecified", + StatementType.UNKNOWN, + ), + ("commit foo", StatementType.UNKNOWN), + ("commit transaction foo", StatementType.UNKNOWN), + ("rollback foo", StatementType.UNKNOWN), + ("rollback transaction foo", StatementType.UNKNOWN), + ("show variable", StatementType.UNKNOWN), + ("show variable read_timestamp foo", StatementType.UNKNOWN), + ("INSERTs INTO table (col1) VALUES (1)", StatementType.UNKNOWN), + ("UPDATEs table SET col1 = 1 WHERE col1 = NULL", StatementType.UNKNOWN), + ("DELETEs from table WHERE col1 = 2", StatementType.UNKNOWN), + ) + + for query, want_class in cases: + self.assertEqual( + classify_statement(query).statement_type, want_class, query + ) + + def test_begin_isolation_level(self): + parsed_statement = classify_statement("begin") + self.assertEqual( + parsed_statement, + ParsedStatement( + StatementType.CLIENT_SIDE, + Statement("begin"), + ClientSideStatementType.BEGIN, + [], + ), + ) + parsed_statement = classify_statement("begin isolation level serializable") + self.assertEqual( + parsed_statement, + ParsedStatement( + StatementType.CLIENT_SIDE, + Statement("begin isolation level serializable"), + ClientSideStatementType.BEGIN, + ["serializable"], + ), + ) + parsed_statement = classify_statement("begin isolation level repeatable read") + self.assertEqual( + parsed_statement, + ParsedStatement( + StatementType.CLIENT_SIDE, + Statement("begin isolation level repeatable read"), + ClientSideStatementType.BEGIN, + ["repeatable read"], + ), + ) + 
parsed_statement = classify_statement( + "begin isolation level repeatable read " + ) + self.assertEqual( + parsed_statement, + ParsedStatement( + StatementType.CLIENT_SIDE, + Statement("begin isolation level repeatable read"), + ClientSideStatementType.BEGIN, + ["repeatable read"], + ), + ) + + def test_partition_query_classify_stmt(self): + parsed_statement = classify_statement( + " PARTITION SELECT s.SongName FROM Songs AS s " + ) + self.assertEqual( + parsed_statement, + ParsedStatement( + StatementType.CLIENT_SIDE, + Statement("PARTITION SELECT s.SongName FROM Songs AS s"), + ClientSideStatementType.PARTITION_QUERY, + ["SELECT s.SongName FROM Songs AS s"], + ), + ) + + def test_run_partition_classify_stmt(self): + parsed_statement = classify_statement(" RUN PARTITION bj2bjb2j2bj2ebbh ") + self.assertEqual( + parsed_statement, + ParsedStatement( + StatementType.CLIENT_SIDE, + Statement("RUN PARTITION bj2bjb2j2bj2ebbh"), + ClientSideStatementType.RUN_PARTITION, + ["bj2bjb2j2bj2ebbh"], + ), + ) + + def test_run_partitioned_query_classify_stmt(self): + parsed_statement = classify_statement( + " RUN PARTITIONED QUERY SELECT s.SongName FROM Songs AS s " + ) + self.assertEqual( + parsed_statement, + ParsedStatement( + StatementType.CLIENT_SIDE, + Statement("RUN PARTITIONED QUERY SELECT s.SongName FROM Songs AS s"), + ClientSideStatementType.RUN_PARTITIONED_QUERY, + ["SELECT s.SongName FROM Songs AS s"], + ), + ) + + def test_set_autocommit_dml_mode_stmt(self): + parsed_statement = classify_statement( + " set autocommit_dml_mode = PARTITIONED_NON_ATOMIC " + ) + self.assertEqual( + parsed_statement, + ParsedStatement( + StatementType.CLIENT_SIDE, + Statement("set autocommit_dml_mode = PARTITIONED_NON_ATOMIC"), + ClientSideStatementType.SET_AUTOCOMMIT_DML_MODE, + ["PARTITIONED_NON_ATOMIC"], + ), + ) + + @unittest.skipIf(skip_condition, skip_message) + def test_sql_pyformat_args_to_spanner(self): + from google.cloud.spanner_dbapi.parse_utils import sql_pyformat_args_to_spanner + + cases = [ + ( + ( + "SELECT * from t WHERE f1=%s, f2 = %s, f3=%s", + (10, "abc", "y**$22l3f"), + ), + ( + "SELECT * from t WHERE f1=@a0, f2 = @a1, f3=@a2", + {"a0": 10, "a1": "abc", "a2": "y**$22l3f"}, + ), + ), + ( + ( + "INSERT INTO t (f1, f2, f2) VALUES (%s, %s, %s)", + ("app", "name", "applied"), + ), + ( + "INSERT INTO t (f1, f2, f2) VALUES (@a0, @a1, @a2)", + {"a0": "app", "a1": "name", "a2": "applied"}, + ), + ), + ( + ( + "INSERT INTO t (f1, f2, f2) VALUES (%(f1)s, %(f2)s, %(f3)s)", + {"f1": "app", "f2": "name", "f3": "applied"}, + ), + ( + "INSERT INTO t (f1, f2, f2) VALUES (@a0, @a1, @a2)", + {"a0": "app", "a1": "name", "a2": "applied"}, + ), + ), + ( + # Intentionally using a dict with more keys than will be resolved. + ("SELECT * from t WHERE f1=%(f1)s", {"f1": "app", "f2": "name"}), + ("SELECT * from t WHERE f1=@a0", {"a0": "app"}), + ), + ( + # No args to replace, we MUST return the original params dict + # since it might be useful to pass to the next user. 
+ ("SELECT * from t WHERE id=10", {"f1": "app", "f2": "name"}), + ("SELECT * from t WHERE id=10", {"f1": "app", "f2": "name"}), + ), + ] + for (sql_in, params), sql_want in cases: + with self.subTest(sql=sql_in): + got_sql, got_named_args = sql_pyformat_args_to_spanner(sql_in, params) + want_sql, want_named_args = sql_want + self.assertEqual(got_sql, want_sql, "SQL does not match") + self.assertEqual( + got_named_args, want_named_args, "Named args do not match" + ) + + @unittest.skipIf(skip_condition, skip_message) + def test_sql_pyformat_args_to_spanner_invalid(self): + from google.cloud.spanner_dbapi import exceptions + from google.cloud.spanner_dbapi.parse_utils import sql_pyformat_args_to_spanner + + cases = [ + ( + "SELECT * from t WHERE f1=%s, f2 = %s, f3=%s, extra=%s", + (10, "abc", "y**$22l3f"), + ) + ] + for sql, params in cases: + with self.subTest(sql=sql): + self.assertRaisesRegex( + exceptions.Error, + "pyformat_args mismatch", + lambda: sql_pyformat_args_to_spanner(sql, params), + ) + + @unittest.skipIf(skip_condition, skip_message) + def test_get_param_types(self): + import datetime + import decimal + + from google.cloud.spanner_dbapi.parse_utils import ( + DateStr, + TimestampStr, + get_param_types, + ) + + params = { + "a1": 10, + "b1": "string", + # Note: We only want a value and not a type for this. + # Instead, we let Spanner infer the correct type (FLOAT64 or FLOAT32) + "c1": 10.39, + "d1": TimestampStr("2005-08-30T01:01:01.000001Z"), + "e1": DateStr("2019-12-05"), + "f1": True, + "g1": datetime.datetime(2011, 9, 1, 13, 20, 30), + "h1": datetime.date(2011, 9, 1), + "i1": b"bytes", + "j1": None, + "k1": decimal.Decimal("3.194387483193242e+19"), + "l1": JsonObject({"key": "value"}), + } + want_types = { + "a1": param_types.INT64, + "b1": param_types.STRING, + "d1": param_types.TIMESTAMP, + "e1": param_types.DATE, + "f1": param_types.BOOL, + "g1": param_types.TIMESTAMP, + "h1": param_types.DATE, + "i1": param_types.BYTES, + "k1": param_types.NUMERIC, + "l1": param_types.JSON, + } + got_types = get_param_types(params) + self.assertEqual(got_types, want_types) + + def test_get_param_types_none(self): + from google.cloud.spanner_dbapi.parse_utils import get_param_types + + self.assertEqual(get_param_types(None), None) + + @unittest.skipIf(skip_condition, skip_message) + def test_ensure_where_clause(self): + from google.cloud.spanner_dbapi.parse_utils import ensure_where_clause + + cases = ( + "UPDATE a SET a.b=10 FROM articles a JOIN d c ON a.ai = c.ai WHERE c.ci = 1", + "UPDATE T SET A = 1 WHERE C1 = 1 AND C2 = 2", + "UPDATE T SET r=r*0.9 WHERE id IN (SELECT id FROM items WHERE r / w >= 1.3 AND q > 100)", + ) + err_cases = ( + "UPDATE (SELECT * FROM A JOIN c ON ai.id = c.id WHERE cl.ci = 1) SET d=5", + "DELETE * FROM TABLE", + ) + for sql in cases: + with self.subTest(sql=sql): + ensure_where_clause(sql) + + for sql in err_cases: + with self.subTest(sql=sql): + self.assertEqual(ensure_where_clause(sql), sql + " WHERE 1=1") + + @unittest.skipIf(skip_condition, skip_message) + def test_escape_name(self): + from google.cloud.spanner_dbapi.parse_utils import escape_name + + cases = ( + ("SELECT", "`SELECT`"), + ("dashed-value", "`dashed-value`"), + ("with space", "`with space`"), + ("name", "name"), + ("", ""), + ) + for name, want in cases: + with self.subTest(name=name): + got = escape_name(name) + self.assertEqual(got, want) diff --git a/tests/unit/spanner_dbapi/test_parser.py b/tests/unit/spanner_dbapi/test_parser.py new file mode 100644 index 0000000000..25f51591c2 --- 
/dev/null +++ b/tests/unit/spanner_dbapi/test_parser.py @@ -0,0 +1,277 @@ +# Copyright 2020 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import sys +import unittest + + +class TestParser(unittest.TestCase): + skip_condition = sys.version_info[0] < 3 + skip_message = "Subtests are not supported in Python 2" + + @unittest.skipIf(skip_condition, skip_message) + def test_func(self): + from google.cloud.spanner_dbapi.parser import FUNC + from google.cloud.spanner_dbapi.parser import a_args + from google.cloud.spanner_dbapi.parser import expect + from google.cloud.spanner_dbapi.parser import func + from google.cloud.spanner_dbapi.parser import pyfmt_str + + cases = [ + ("_91())", ")", func("_91", a_args([]))), + ("_a()", "", func("_a", a_args([]))), + ("___()", "", func("___", a_args([]))), + ("abc()", "", func("abc", a_args([]))), + ( + "AF112(%s, LOWER(%s, %s), rand(%s, %s, TAN(%s, %s)))", + "", + func( + "AF112", + a_args( + [ + pyfmt_str, + func("LOWER", a_args([pyfmt_str, pyfmt_str])), + func( + "rand", + a_args( + [ + pyfmt_str, + pyfmt_str, + func("TAN", a_args([pyfmt_str, pyfmt_str])), + ] + ), + ), + ] + ), + ), + ), + ] + + for text, want_unconsumed, want_parsed in cases: + with self.subTest(text=text): + got_unconsumed, got_parsed = expect(text, FUNC) + self.assertEqual(got_parsed, want_parsed) + self.assertEqual(got_unconsumed, want_unconsumed) + + @unittest.skipIf(skip_condition, skip_message) + def test_func_fail(self): + from google.cloud.spanner_dbapi.exceptions import ProgrammingError + from google.cloud.spanner_dbapi.parser import FUNC + from google.cloud.spanner_dbapi.parser import expect + + cases = [ + ("", "FUNC: `` does not begin with `a-zA-z` nor a `_`"), + ("91", "FUNC: `91` does not begin with `a-zA-z` nor a `_`"), + ("_91", "supposed to begin with `\\(`"), + ("_91(", "supposed to end with `\\)`"), + ("_.()", "supposed to begin with `\\(`"), + ("_a.b()", "supposed to begin with `\\(`"), + ] + + for text, wantException in cases: + with self.subTest(text=text): + self.assertRaisesRegex( + ProgrammingError, wantException, lambda: expect(text, FUNC) + ) + + def test_func_eq(self): + from google.cloud.spanner_dbapi.parser import func + + func1 = func("func1", None) + func2 = func("func2", None) + self.assertFalse(func1 == object) + self.assertFalse(func1 == func2) + func2.name = func1.name + func1.args = 0 + func2.args = "0" + self.assertFalse(func1 == func2) + func1.args = [0] + func2.args = [0, 0] + self.assertFalse(func1 == func2) + func2.args = func1.args + self.assertTrue(func1 == func2) + + @unittest.skipIf(skip_condition, skip_message) + def test_a_args(self): + from google.cloud.spanner_dbapi.parser import ARGS + from google.cloud.spanner_dbapi.parser import a_args + from google.cloud.spanner_dbapi.parser import expect + from google.cloud.spanner_dbapi.parser import func + from google.cloud.spanner_dbapi.parser import pyfmt_str + + cases = [ + ("()", "", a_args([])), + ("(%s)", "", a_args([pyfmt_str])), + ("(%s,)", "", 
a_args([pyfmt_str])), + ("(%s),", ",", a_args([pyfmt_str])), + ( + "(%s,%s, f1(%s, %s))", + "", + a_args( + [pyfmt_str, pyfmt_str, func("f1", a_args([pyfmt_str, pyfmt_str]))] + ), + ), + ] + + for text, want_unconsumed, want_parsed in cases: + with self.subTest(text=text): + got_unconsumed, got_parsed = expect(text, ARGS) + self.assertEqual(got_parsed, want_parsed) + self.assertEqual(got_unconsumed, want_unconsumed) + + @unittest.skipIf(skip_condition, skip_message) + def test_a_args_fail(self): + from google.cloud.spanner_dbapi.exceptions import ProgrammingError + from google.cloud.spanner_dbapi.parser import ARGS + from google.cloud.spanner_dbapi.parser import expect + + cases = [ + ("", "ARGS: supposed to begin with `\\(`"), + ("(", "ARGS: supposed to end with `\\)`"), + (")", "ARGS: supposed to begin with `\\(`"), + ("(%s,%s, f1(%s, %s), %s", "ARGS: supposed to end with `\\)`"), + ] + + for text, wantException in cases: + with self.subTest(text=text): + self.assertRaisesRegex( + ProgrammingError, wantException, lambda: expect(text, ARGS) + ) + + def test_a_args_has_expr(self): + from google.cloud.spanner_dbapi.parser import a_args + + self.assertFalse(a_args([]).has_expr()) + self.assertTrue(a_args([[0]]).has_expr()) + + def test_a_args_eq(self): + from google.cloud.spanner_dbapi.parser import a_args + + a1 = a_args([0]) + self.assertFalse(a1 == object()) + a2 = a_args([0, 0]) + self.assertFalse(a1 == a2) + a1.argv = [0, 1] + self.assertFalse(a1 == a2) + a2.argv = [0, 1] + self.assertTrue(a1 == a2) + + def test_a_args_homogeneous(self): + from google.cloud.spanner_dbapi.parser import a_args + from google.cloud.spanner_dbapi.parser import terminal + + a_obj = a_args([a_args([terminal(10**i)]) for i in range(10)]) + self.assertTrue(a_obj.homogenous()) + + a_obj = a_args([a_args([[object()]]) for _ in range(10)]) + self.assertFalse(a_obj.homogenous()) + + def test_a_args__is_equal_length(self): + from google.cloud.spanner_dbapi.parser import a_args + + a_obj = a_args([]) + self.assertTrue(a_obj._is_equal_length()) + + @unittest.skipIf(skip_condition, "Python 2 has an outdated iterator definition") + @unittest.skipIf( + skip_condition, "Python 2 does not support 0-argument super() calls" + ) + def test_values(self): + from google.cloud.spanner_dbapi.parser import a_args + from google.cloud.spanner_dbapi.parser import terminal + from google.cloud.spanner_dbapi.parser import values + + a_obj = a_args([a_args([terminal(10**i)]) for i in range(10)]) + self.assertEqual(str(values(a_obj)), "VALUES%s" % str(a_obj)) + + def test_expect(self): + from google.cloud.spanner_dbapi.parser import ARGS + from google.cloud.spanner_dbapi.parser import expect + from google.cloud.spanner_dbapi import exceptions + + with self.assertRaises(exceptions.ProgrammingError): + expect(word="", token=ARGS) + with self.assertRaises(exceptions.ProgrammingError): + expect(word="ABC", token=ARGS) + with self.assertRaises(exceptions.ProgrammingError): + expect(word="(", token=ARGS) + + with self.assertRaises(exceptions.ProgrammingError): + expect(word="", token="ABC") + + @unittest.skipIf(skip_condition, skip_message) + def test_expect_values(self): + from google.cloud.spanner_dbapi.parser import VALUES + from google.cloud.spanner_dbapi.parser import a_args + from google.cloud.spanner_dbapi.parser import expect + from google.cloud.spanner_dbapi.parser import func + from google.cloud.spanner_dbapi.parser import pyfmt_str + from google.cloud.spanner_dbapi.parser import values + + cases = [ + ("VALUES ()", "", 
values([a_args([])])), + ("VALUES", "", values([])), + ("VALUES(%s)", "", values([a_args([pyfmt_str])])), + (" VALUES (%s) ", "", values([a_args([pyfmt_str])])), + ("VALUES(%s, %s)", "", values([a_args([pyfmt_str, pyfmt_str])])), + ( + "VALUES(%s, %s, LOWER(%s, %s))", + "", + values( + [ + a_args( + [ + pyfmt_str, + pyfmt_str, + func("LOWER", a_args([pyfmt_str, pyfmt_str])), + ] + ) + ] + ), + ), + ( + "VALUES (UPPER(%s)), (%s)", + "", + values( + [a_args([func("UPPER", a_args([pyfmt_str]))]), a_args([pyfmt_str])] + ), + ), + ] + + for text, want_unconsumed, want_parsed in cases: + with self.subTest(text=text): + got_unconsumed, got_parsed = expect(text, VALUES) + self.assertEqual(got_parsed, want_parsed) + self.assertEqual(got_unconsumed, want_unconsumed) + + @unittest.skipIf(skip_condition, skip_message) + def test_expect_values_fail(self): + from google.cloud.spanner_dbapi.exceptions import ProgrammingError + from google.cloud.spanner_dbapi.parser import VALUES + from google.cloud.spanner_dbapi.parser import expect + + cases = [ + ("", "VALUES: `` does not start with VALUES"), + ( + "VALUES(%s, %s, (%s, %s))", + "FUNC: `\\(%s, %s\\)\\)` does not begin with `a-zA-z` nor a `_`", + ), + ("VALUES(%s),,", "ARGS: supposed to begin with `\\(` in `,`"), + ] + + for text, wantException in cases: + with self.subTest(text=text): + self.assertRaisesRegex( + ProgrammingError, wantException, lambda: expect(text, VALUES) + ) diff --git a/tests/unit/spanner_dbapi/test_transaction_helper.py b/tests/unit/spanner_dbapi/test_transaction_helper.py new file mode 100644 index 0000000000..958fca0ce6 --- /dev/null +++ b/tests/unit/spanner_dbapi/test_transaction_helper.py @@ -0,0 +1,621 @@ +# Copyright 2023 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import unittest +from unittest import mock + +from google.cloud.spanner_dbapi.exceptions import ( + RetryAborted, +) +from google.cloud.spanner_dbapi.checksum import ResultsChecksum +from google.cloud.spanner_dbapi.parsed_statement import ParsedStatement, StatementType +from google.api_core.exceptions import Aborted + +from google.cloud.spanner_dbapi.transaction_helper import ( + TransactionRetryHelper, + ExecuteStatement, + CursorStatementType, + FetchStatement, + ResultType, +) + + +def _get_checksum(row): + checksum = ResultsChecksum() + checksum.consume_result(row) + return checksum + + +SQL = "SELECT 1" +ARGS = [] + + +class TestTransactionHelper(unittest.TestCase): + @mock.patch("google.cloud.spanner_dbapi.cursor.Cursor") + @mock.patch("google.cloud.spanner_dbapi.connection.Connection") + def setUp(self, mock_connection, mock_cursor): + self._under_test = TransactionRetryHelper(mock_connection) + self._mock_cursor = mock_cursor + + def test_retry_transaction_execute(self): + """ + Test retrying a transaction with an execute statement works. 
+ """ + execute_statement = ExecuteStatement( + statement_type=CursorStatementType.EXECUTE, + cursor=self._mock_cursor, + sql=SQL, + args=ARGS, + result_type=ResultType.NONE, + result_details=None, + ) + self._under_test._statement_result_details_list.append(execute_statement) + run_mock = self._under_test._connection.cursor().execute = mock.Mock() + + self._under_test.retry_transaction() + + run_mock.assert_called_with(SQL, ARGS) + + def test_retry_transaction_dml_execute(self): + """ + Test retrying a transaction with an execute DML statement works. + """ + update_count = 3 + execute_statement = ExecuteStatement( + statement_type=CursorStatementType.EXECUTE, + cursor=self._mock_cursor, + sql=SQL, + args=ARGS, + result_type=ResultType.ROW_COUNT, + result_details=update_count, + ) + self._under_test._statement_result_details_list.append(execute_statement) + run_mock = self._under_test._connection.cursor = mock.Mock() + run_mock().rowcount = update_count + + self._under_test.retry_transaction() + + run_mock().execute.assert_called_with(SQL, ARGS) + + def test_retry_transaction_dml_execute_exception(self): + """ + Test retrying a transaction with an execute DML statement with different + row update count than original throws RetryAborted exception. + """ + execute_statement = ExecuteStatement( + statement_type=CursorStatementType.EXECUTE, + cursor=self._mock_cursor, + sql=SQL, + args=ARGS, + result_type=ResultType.ROW_COUNT, + result_details=2, + ) + self._under_test._statement_result_details_list.append(execute_statement) + run_mock = self._under_test._connection.cursor = mock.Mock() + run_mock().rowcount = 3 + + with self.assertRaises(RetryAborted): + self._under_test.retry_transaction() + + run_mock().execute.assert_called_with(SQL, ARGS) + + def test_retry_transaction_execute_many(self): + """ + Test retrying a transaction with an executemany on Query statement works. + """ + execute_statement = ExecuteStatement( + statement_type=CursorStatementType.EXECUTE_MANY, + cursor=self._mock_cursor, + sql=SQL, + args=ARGS, + result_type=ResultType.NONE, + result_details=None, + ) + self._under_test._statement_result_details_list.append(execute_statement) + run_mock = self._under_test._connection.cursor().executemany = mock.Mock() + + self._under_test.retry_transaction() + + run_mock.assert_called_with(SQL, ARGS) + + def test_retry_transaction_dml_execute_many(self): + """ + Test retrying a transaction with an executemany on DML statement works. + """ + update_count = 3 + execute_statement = ExecuteStatement( + statement_type=CursorStatementType.EXECUTE_MANY, + cursor=self._mock_cursor, + sql=SQL, + args=ARGS, + result_type=ResultType.ROW_COUNT, + result_details=update_count, + ) + self._under_test._statement_result_details_list.append(execute_statement) + run_mock = self._under_test._connection.cursor = mock.Mock() + run_mock().rowcount = update_count + + self._under_test.retry_transaction() + + run_mock().executemany.assert_called_with(SQL, ARGS) + + def test_retry_transaction_dml_executemany_exception(self): + """ + Test retrying a transaction with an executemany DML statement with different + row update count than original throws RetryAborted exception. 
+ """ + rows_inserted = [3, 4] + self._mock_cursor._batch_dml_rows_count = rows_inserted + execute_statement = ExecuteStatement( + statement_type=CursorStatementType.EXECUTE_MANY, + cursor=self._mock_cursor, + sql=SQL, + args=ARGS, + result_type=ResultType.BATCH_DML_ROWS_COUNT, + result_details=rows_inserted, + ) + self._under_test._statement_result_details_list.append(execute_statement) + run_mock = self._under_test._connection.cursor = mock.Mock() + run_mock()._batch_dml_rows_count = [4, 3] + + with self.assertRaises(RetryAborted): + self._under_test.retry_transaction() + + run_mock().executemany.assert_called_with(SQL, ARGS) + + def test_retry_transaction_fetchall(self): + """ + Test retrying a transaction on a fetchall statement works. + """ + result_row = ("field1", "field2") + fetch_statement = FetchStatement( + cursor=self._mock_cursor, + statement_type=CursorStatementType.FETCH_ALL, + result_type=ResultType.CHECKSUM, + result_details=_get_checksum(result_row), + ) + self._under_test._statement_result_details_list.append(fetch_statement) + run_mock = self._under_test._connection.cursor().fetchall = mock.Mock() + run_mock.return_value = [result_row] + + self._under_test.retry_transaction() + + run_mock.assert_called_with() + + def test_retry_transaction_fetchall_exception(self): + """ + Test retrying a transaction on a fetchall statement throws exception + when results is different from original in retry. + """ + result_row = ("field1", "field2") + fetch_statement = FetchStatement( + cursor=self._mock_cursor, + statement_type=CursorStatementType.FETCH_ALL, + result_type=ResultType.CHECKSUM, + result_details=_get_checksum(result_row), + ) + self._under_test._statement_result_details_list.append(fetch_statement) + run_mock = self._under_test._connection.cursor().fetchall = mock.Mock() + retried_result_row = "field3" + run_mock.return_value = [retried_result_row] + + with self.assertRaises(RetryAborted): + self._under_test.retry_transaction() + + run_mock.assert_called_with() + + def test_retry_transaction_fetchmany(self): + """ + Test retrying a transaction on a fetchmany statement works. + """ + result_row = ("field1", "field2") + fetch_statement = FetchStatement( + cursor=self._mock_cursor, + statement_type=CursorStatementType.FETCH_MANY, + result_type=ResultType.CHECKSUM, + result_details=_get_checksum(result_row), + size=1, + ) + self._under_test._statement_result_details_list.append(fetch_statement) + run_mock = self._under_test._connection.cursor().fetchmany = mock.Mock() + run_mock.return_value = [result_row] + + self._under_test.retry_transaction() + + run_mock.assert_called_with(1) + + def test_retry_transaction_fetchmany_exception(self): + """ + Test retrying a transaction on a fetchmany statement throws exception + when results is different from original in retry. 
+ """ + result_row = ("field1", "field2") + fetch_statement = FetchStatement( + cursor=self._mock_cursor, + statement_type=CursorStatementType.FETCH_MANY, + result_type=ResultType.CHECKSUM, + result_details=_get_checksum(result_row), + size=1, + ) + self._under_test._statement_result_details_list.append(fetch_statement) + run_mock = self._under_test._connection.cursor().fetchmany = mock.Mock() + retried_result_row = "field3" + run_mock.return_value = [retried_result_row] + + with self.assertRaises(RetryAborted): + self._under_test.retry_transaction() + + run_mock.assert_called_with(1) + + def test_retry_transaction_same_exception(self): + """ + Test retrying a transaction with statement throwing same exception in + retry works. + """ + exception = Exception("Test") + execute_statement = ExecuteStatement( + statement_type=CursorStatementType.EXECUTE, + cursor=self._mock_cursor, + sql=SQL, + args=ARGS, + result_type=ResultType.EXCEPTION, + result_details=exception, + ) + self._under_test._statement_result_details_list.append(execute_statement) + run_mock = self._under_test._connection.cursor().execute = mock.Mock() + run_mock.side_effect = exception + + self._under_test.retry_transaction() + + run_mock.assert_called_with(SQL, ARGS) + + def test_retry_transaction_different_exception(self): + """ + Test retrying a transaction with statement throwing different exception + in retry results in RetryAborted exception. + """ + execute_statement = ExecuteStatement( + statement_type=CursorStatementType.EXECUTE, + cursor=self._mock_cursor, + sql=SQL, + args=ARGS, + result_type=ResultType.EXCEPTION, + result_details=Exception("Test"), + ) + self._under_test._statement_result_details_list.append(execute_statement) + run_mock = self._under_test._connection.cursor().execute = mock.Mock() + run_mock.side_effect = Exception("Test2") + + with self.assertRaises(RetryAborted): + self._under_test.retry_transaction() + + run_mock.assert_called_with(SQL, ARGS) + + def test_retry_transaction_aborted_retry(self): + """ + Check that in case of a retried transaction aborted, + it will be retried once again. 
+ """ + execute_statement = ExecuteStatement( + statement_type=CursorStatementType.EXECUTE, + cursor=self._mock_cursor, + sql=SQL, + args=ARGS, + result_type=ResultType.NONE, + result_details=None, + ) + self._under_test._statement_result_details_list.append(execute_statement) + run_mock = self._under_test._connection.cursor().execute = mock.Mock() + metadata_mock = mock.Mock() + metadata_mock.trailing_metadata.return_value = {} + run_mock.side_effect = [ + Aborted("Aborted", errors=[metadata_mock]), + None, + ] + + self._under_test.retry_transaction(default_retry_delay=0) + + run_mock.assert_has_calls( + ( + mock.call(SQL, ARGS), + mock.call(SQL, ARGS), + ) + ) + + def test_add_execute_statement_for_retry(self): + """ + Test add_execute_statement_for_retry method works + """ + self._mock_cursor._parsed_statement = ParsedStatement( + statement_type=StatementType.INSERT, statement=None + ) + + sql = "INSERT INTO Table" + rows_inserted = 3 + self._mock_cursor.rowcount = rows_inserted + self._mock_cursor._batch_dml_rows_count = None + self._under_test.add_execute_statement_for_retry( + self._mock_cursor, sql, [], None, False + ) + + expected_statement_result_details = ExecuteStatement( + statement_type=CursorStatementType.EXECUTE, + cursor=self._mock_cursor, + sql=sql, + args=[], + result_type=ResultType.ROW_COUNT, + result_details=rows_inserted, + ) + self.assertEqual( + self._under_test._last_statement_details_per_cursor, + {self._mock_cursor: expected_statement_result_details}, + ) + self.assertEqual( + self._under_test._statement_result_details_list, + [expected_statement_result_details], + ) + + def test_add_execute_statement_for_retry_with_exception(self): + """ + Test add_execute_statement_for_retry method with exception + """ + self._mock_cursor._parsed_statement = ParsedStatement( + statement_type=StatementType.INSERT, statement=None + ) + self._mock_cursor.rowcount = -1 + + sql = "INSERT INTO Table" + exception = Exception("Test") + self._under_test.add_execute_statement_for_retry( + self._mock_cursor, sql, [], exception, False + ) + + expected_statement_result_details = ExecuteStatement( + statement_type=CursorStatementType.EXECUTE, + cursor=self._mock_cursor, + sql=sql, + args=[], + result_type=ResultType.EXCEPTION, + result_details=exception, + ) + self.assertEqual( + self._under_test._last_statement_details_per_cursor, + {self._mock_cursor: expected_statement_result_details}, + ) + self.assertEqual( + self._under_test._statement_result_details_list, + [expected_statement_result_details], + ) + + def test_add_execute_statement_for_retry_query_statement(self): + """ + Test add_execute_statement_for_retry method works for non DML statement + """ + self._mock_cursor._parsed_statement = ParsedStatement( + statement_type=StatementType.QUERY, statement=None + ) + self._mock_cursor._row_count = None + self._mock_cursor._batch_dml_rows_count = None + + sql = "SELECT 1" + self._under_test.add_execute_statement_for_retry( + self._mock_cursor, sql, [], None, False + ) + + expected_statement_result_details = ExecuteStatement( + statement_type=CursorStatementType.EXECUTE, + cursor=self._mock_cursor, + sql=sql, + args=[], + result_type=ResultType.NONE, + result_details=None, + ) + self.assertEqual( + self._under_test._last_statement_details_per_cursor, + {self._mock_cursor: expected_statement_result_details}, + ) + self.assertEqual( + self._under_test._statement_result_details_list, + [expected_statement_result_details], + ) + + def test_add_execute_many_statement_for_retry(self): + """ + Test 
that the add_execute_statement_for_retry method works for executemany + """ + self._mock_cursor._parsed_statement = ParsedStatement( + statement_type=StatementType.INSERT, statement=None + ) + + sql = "INSERT INTO Table" + rows_inserted = [3, 4] + self._mock_cursor._batch_dml_rows_count = rows_inserted + self._under_test.add_execute_statement_for_retry( + self._mock_cursor, sql, [], None, True + ) + + expected_statement_result_details = ExecuteStatement( + statement_type=CursorStatementType.EXECUTE_MANY, + cursor=self._mock_cursor, + sql=sql, + args=[], + result_type=ResultType.BATCH_DML_ROWS_COUNT, + result_details=rows_inserted, + ) + self.assertEqual( + self._under_test._last_statement_details_per_cursor, + {self._mock_cursor: expected_statement_result_details}, + ) + self.assertEqual( + self._under_test._statement_result_details_list, + [expected_statement_result_details], + ) + + def test_add_fetch_statement_for_retry(self): + """ + Test the add_fetch_statement_for_retry method when last_statement_result_details is a + fetch statement + """ + result_row = ("field1", "field2") + result_checksum = _get_checksum(result_row) + original_checksum_digest = result_checksum.checksum.digest() + last_statement_result_details = FetchStatement( + statement_type=CursorStatementType.FETCH_MANY, + cursor=self._mock_cursor, + result_type=ResultType.CHECKSUM, + result_details=result_checksum, + size=1, + ) + self._under_test._last_statement_details_per_cursor = { + self._mock_cursor: last_statement_result_details + } + new_rows = [("field3", "field4"), ("field5", "field6")] + + self._under_test.add_fetch_statement_for_retry( + self._mock_cursor, new_rows, None, False + ) + + updated_last_statement_result_details = ( + self._under_test._last_statement_details_per_cursor.get(self._mock_cursor) + ) + self.assertEqual( + updated_last_statement_result_details.size, + 3, + ) + self.assertNotEqual( + updated_last_statement_result_details.result_details.checksum.digest(), + original_checksum_digest, + ) + + def test_add_fetch_statement_for_retry_with_exception(self): + """ + Test the add_fetch_statement_for_retry method with an exception + """ + result_row = ("field1", "field2") + fetch_statement = FetchStatement( + statement_type=CursorStatementType.FETCH_MANY, + cursor=self._mock_cursor, + result_type=ResultType.CHECKSUM, + result_details=_get_checksum(result_row), + size=1, + ) + self._under_test._last_statement_details_per_cursor = { + self._mock_cursor: fetch_statement + } + exception = Exception("Test") + + self._under_test.add_fetch_statement_for_retry( + self._mock_cursor, [], exception, False + ) + + self.assertEqual( + self._under_test._last_statement_details_per_cursor.get(self._mock_cursor), + FetchStatement( + statement_type=CursorStatementType.FETCH_MANY, + cursor=self._mock_cursor, + result_type=ResultType.EXCEPTION, + result_details=exception, + size=1, + ), + ) + + def test_add_fetch_statement_for_retry_last_statement_not_exists(self): + """ + Test the add_fetch_statement_for_retry method when last_statement_result_details + doesn't exist + """ + row = ("field3", "field4") + + self._under_test.add_fetch_statement_for_retry( + self._mock_cursor, [row], None, False + ) + + expected_statement = FetchStatement( + statement_type=CursorStatementType.FETCH_MANY, + cursor=self._mock_cursor, + result_type=ResultType.CHECKSUM, + result_details=_get_checksum(row), + size=1, + ) + self.assertEqual( + self._under_test._last_statement_details_per_cursor, + {self._mock_cursor: expected_statement}, + ) + self.assertEqual(
self._under_test._statement_result_details_list, + [expected_statement], + ) + + def test_add_fetch_statement_for_retry_fetch_all_statement(self): + """ + Test add_fetch_statement_for_retry method for fetchall statement + """ + row = ("field3", "field4") + + self._under_test.add_fetch_statement_for_retry( + self._mock_cursor, [row], None, True + ) + + expected_statement = FetchStatement( + statement_type=CursorStatementType.FETCH_ALL, + cursor=self._mock_cursor, + result_type=ResultType.CHECKSUM, + result_details=_get_checksum(row), + ) + self.assertEqual( + self._under_test._last_statement_details_per_cursor, + {self._mock_cursor: expected_statement}, + ) + self.assertEqual( + self._under_test._statement_result_details_list, + [expected_statement], + ) + + def test_add_fetch_statement_for_retry_when_last_statement_is_not_fetch(self): + """ + Test add_fetch_statement_for_retry method when last statement is not + a fetch type of statement + """ + execute_statement = ExecuteStatement( + statement_type=CursorStatementType.EXECUTE, + cursor=self._mock_cursor, + sql=SQL, + args=ARGS, + result_type=ResultType.ROW_COUNT, + result_details=2, + ) + self._under_test._last_statement_details_per_cursor = { + self._mock_cursor: execute_statement + } + self._under_test._statement_result_details_list.append(execute_statement) + row = ("field3", "field4") + + self._under_test.add_fetch_statement_for_retry( + self._mock_cursor, [row], None, False + ) + + expected_fetch_statement = FetchStatement( + statement_type=CursorStatementType.FETCH_MANY, + cursor=self._mock_cursor, + result_type=ResultType.CHECKSUM, + result_details=_get_checksum(row), + size=1, + ) + self.assertEqual( + self._under_test._last_statement_details_per_cursor, + {self._mock_cursor: expected_fetch_statement}, + ) + self.assertEqual( + self._under_test._statement_result_details_list, + [execute_statement, expected_fetch_statement], + ) diff --git a/tests/unit/spanner_dbapi/test_types.py b/tests/unit/spanner_dbapi/test_types.py new file mode 100644 index 0000000000..375dc31853 --- /dev/null +++ b/tests/unit/spanner_dbapi/test_types.py @@ -0,0 +1,70 @@ +# Copyright 2020 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
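+ # Note: these tests cover the ticks-based type helpers (_date_from_ticks, + # _time_from_ticks, _timestamp_from_ticks) and the DB-API type comparison + # objects (BINARY, NUMBER, DATETIME) from google.cloud.spanner_dbapi.types.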
+ +import unittest + +from time import timezone + + +class TestTypes(unittest.TestCase): + TICKS = 1572822862.9782631 + timezone # Sun 03 Nov 2019 23:14:22 UTC + + def test__date_from_ticks(self): + import datetime + + from google.cloud.spanner_dbapi import types + + actual = types._date_from_ticks(self.TICKS) + expected = datetime.date(2019, 11, 3) + + self.assertEqual(actual, expected) + + def test__time_from_ticks(self): + import datetime + + from google.cloud.spanner_dbapi import types + + actual = types._time_from_ticks(self.TICKS) + expected = datetime.time(23, 14, 22) + + self.assertEqual(actual, expected) + + def test__timestamp_from_ticks(self): + import datetime + + from google.cloud.spanner_dbapi import types + + actual = types._timestamp_from_ticks(self.TICKS) + expected = datetime.datetime(2019, 11, 3, 23, 14, 22) + + self.assertEqual(actual, expected) + + def test_type_equal(self): + from google.cloud.spanner_dbapi import types + + self.assertEqual(types.BINARY, "TYPE_CODE_UNSPECIFIED") + self.assertEqual(types.BINARY, "BYTES") + self.assertEqual(types.BINARY, "ARRAY") + self.assertEqual(types.BINARY, "STRUCT") + self.assertNotEqual(types.BINARY, "STRING") + + self.assertEqual(types.NUMBER, "BOOL") + self.assertEqual(types.NUMBER, "INT64") + self.assertEqual(types.NUMBER, "FLOAT64") + self.assertEqual(types.NUMBER, "NUMERIC") + self.assertNotEqual(types.NUMBER, "STRING") + + self.assertEqual(types.DATETIME, "TIMESTAMP") + self.assertEqual(types.DATETIME, "DATE") + self.assertNotEqual(types.DATETIME, "STRING") diff --git a/tests/unit/spanner_dbapi/test_utils.py b/tests/unit/spanner_dbapi/test_utils.py new file mode 100644 index 0000000000..fadbca1a09 --- /dev/null +++ b/tests/unit/spanner_dbapi/test_utils.py @@ -0,0 +1,102 @@ +# Copyright 2020 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import sys +import unittest + + +class TestUtils(unittest.TestCase): + skip_condition = sys.version_info[0] < 3 + skip_message = "Subtests are not supported in Python 2" + + @unittest.skipIf(skip_condition, skip_message) + def test_PeekIterator(self): + from google.cloud.spanner_dbapi.utils import PeekIterator + + cases = [ + ("list", [1, 2, 3, 4, 6, 7], [1, 2, 3, 4, 6, 7]), + ("iter_from_list", iter([1, 2, 3, 4, 6, 7]), [1, 2, 3, 4, 6, 7]), + ("tuple", ("a", 12, 0xFF), ["a", 12, 0xFF]), + ("iter_from_tuple", iter(("a", 12, 0xFF)), ["a", 12, 0xFF]), + ("no_args", (), []), + ] + + for name, data_in, expected in cases: + with self.subTest(name=name): + pitr = PeekIterator(data_in) + actual = list(pitr) + self.assertEqual(actual, expected) + + @unittest.skipIf(skip_condition, "Python 2 has an outdated iterator definition") + def test_peekIterator_list_rows_converted_to_tuples(self): + from google.cloud.spanner_dbapi.utils import PeekIterator + + # Cloud Spanner returns results in lists e.g. [result]. + # PeekIterator is used by BaseCursor in its fetch* methods. + # This test ensures that anything passed into PeekIterator + # will be returned as a tuple. 
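+ # For example, a source of [["a"], ["b"]] is yielded back as ("a",), ("b",).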
+ pit = PeekIterator([["a"], ["b"], ["c"], ["d"], ["e"]]) + got = list(pit) + want = [("a",), ("b",), ("c",), ("d",), ("e",)] + self.assertEqual(got, want, "Rows of type list must be returned as tuples") + + seventeen = PeekIterator([[17]]) + self.assertEqual(list(seventeen), [(17,)]) + + pit = PeekIterator([["%", "%d"]]) + self.assertEqual(next(pit), ("%", "%d")) + + pit = PeekIterator([("Clark", "Kent")]) + self.assertEqual(next(pit), ("Clark", "Kent")) + + @unittest.skipIf(skip_condition, "Python 2 has an outdated iterator definition") + def test_peekIterator_nonlist_rows_unconverted(self): + from google.cloud.spanner_dbapi.utils import PeekIterator + + pi = PeekIterator(["a", "b", "c", "d", "e"]) + got = list(pi) + want = ["a", "b", "c", "d", "e"] + self.assertEqual(got, want, "Values should be returned unchanged") + + @unittest.skipIf(skip_condition, skip_message) + def test_backtick_unicode(self): + from google.cloud.spanner_dbapi.utils import backtick_unicode + + cases = [ + ("SELECT (1) as foo WHERE 1=1", "SELECT (1) as foo WHERE 1=1"), + ("SELECT (1) as föö", "SELECT (1) as `föö`"), + ("SELECT (1) as `föö`", "SELECT (1) as `föö`"), + ("SELECT (1) as `föö` `umläut", "SELECT (1) as `föö` `umläut"), + ("SELECT (1) as `föö", "SELECT (1) as `föö"), + ] + for sql, want in cases: + with self.subTest(sql=sql): + got = backtick_unicode(sql) + self.assertEqual(got, want) + + @unittest.skipIf(skip_condition, skip_message) + def test_StreamedManyResultSets(self): + from google.cloud.spanner_dbapi.utils import StreamedManyResultSets + + cases = [ + ("iter_from_list", iter([1, 2, 3, 4, 6, 7]), [1, 2, 3, 4, 6, 7]), + ("iter_from_tuple", iter(("a", 12, 0xFF)), ["a", 12, 0xFF]), + ] + + for name, data_in, expected in cases: + with self.subTest(name=name): + stream_result = StreamedManyResultSets() + stream_result._iterators.append(data_in) + actual = list(stream_result) + self.assertEqual(actual, expected) diff --git a/tests/unit/test__helpers.py b/tests/unit/test__helpers.py index b2f2c7d5e7..6f77d002cd 100644 --- a/tests/unit/test__helpers.py +++ b/tests/unit/test__helpers.py @@ -14,6 +14,9 @@ import unittest +import mock + +from google.cloud.spanner_v1 import TransactionOptions class Test_merge_query_options(unittest.TestCase): @@ -28,16 +31,22 @@ def test_base_none_and_merge_none(self): self.assertIsNone(result) def test_base_dict_and_merge_none(self): - from google.cloud.spanner_v1.proto.spanner_pb2 import ExecuteSqlRequest + from google.cloud.spanner_v1 import ExecuteSqlRequest - base = {"optimizer_version": "2"} + base = { + "optimizer_version": "2", + "optimizer_statistics_package": "auto_20191128_14_47_22UTC", + } merge = None - expected = ExecuteSqlRequest.QueryOptions(optimizer_version="2") + expected = ExecuteSqlRequest.QueryOptions( + optimizer_version="2", + optimizer_statistics_package="auto_20191128_14_47_22UTC", + ) result = self._callFUT(base, merge) self.assertEqual(result, expected) def test_base_empty_and_merge_empty(self): - from google.cloud.spanner_v1.proto.spanner_pb2 import ExecuteSqlRequest + from google.cloud.spanner_v1 import ExecuteSqlRequest base = ExecuteSqlRequest.QueryOptions() merge = ExecuteSqlRequest.QueryOptions() @@ -45,15 +54,18 @@ def test_base_empty_and_merge_empty(self): self.assertIsNone(result) def test_base_none_merge_object(self): - from google.cloud.spanner_v1.proto.spanner_pb2 import ExecuteSqlRequest + from google.cloud.spanner_v1 import ExecuteSqlRequest base = None - merge = ExecuteSqlRequest.QueryOptions(optimizer_version="3") + merge = 
ExecuteSqlRequest.QueryOptions( + optimizer_version="3", + optimizer_statistics_package="auto_20191128_14_47_22UTC", + ) result = self._callFUT(base, merge) self.assertEqual(result, merge) def test_base_none_merge_dict(self): - from google.cloud.spanner_v1.proto.spanner_pb2 import ExecuteSqlRequest + from google.cloud.spanner_v1 import ExecuteSqlRequest base = None merge = {"optimizer_version": "3"} @@ -62,11 +74,17 @@ def test_base_none_merge_dict(self): self.assertEqual(result, expected) def test_base_object_merge_dict(self): - from google.cloud.spanner_v1.proto.spanner_pb2 import ExecuteSqlRequest + from google.cloud.spanner_v1 import ExecuteSqlRequest - base = ExecuteSqlRequest.QueryOptions(optimizer_version="1") + base = ExecuteSqlRequest.QueryOptions( + optimizer_version="1", + optimizer_statistics_package="auto_20191128_14_47_22UTC", + ) merge = {"optimizer_version": "3"} - expected = ExecuteSqlRequest.QueryOptions(optimizer_version="3") + expected = ExecuteSqlRequest.QueryOptions( + optimizer_version="3", + optimizer_statistics_package="auto_20191128_14_47_22UTC", + ) result = self._callFUT(base, merge) self.assertEqual(result, expected) @@ -98,7 +116,7 @@ def test_w_invalid_bytes(self): def test_w_explicit_unicode(self): from google.protobuf.struct_pb2 import Value - TEXT = u"TEXT" + TEXT = "TEXT" value_pb = self._callFUT(TEXT) self.assertIsInstance(value_pb, Value) self.assertEqual(value_pb.string_value, TEXT) @@ -107,21 +125,21 @@ def test_w_list(self): from google.protobuf.struct_pb2 import Value from google.protobuf.struct_pb2 import ListValue - value_pb = self._callFUT([u"a", u"b", u"c"]) + value_pb = self._callFUT(["a", "b", "c"]) self.assertIsInstance(value_pb, Value) self.assertIsInstance(value_pb.list_value, ListValue) values = value_pb.list_value.values - self.assertEqual([value.string_value for value in values], [u"a", u"b", u"c"]) + self.assertEqual([value.string_value for value in values], ["a", "b", "c"]) def test_w_tuple(self): from google.protobuf.struct_pb2 import Value from google.protobuf.struct_pb2 import ListValue - value_pb = self._callFUT((u"a", u"b", u"c")) + value_pb = self._callFUT(("a", "b", "c")) self.assertIsInstance(value_pb, Value) self.assertIsInstance(value_pb.list_value, ListValue) values = value_pb.list_value.values - self.assertEqual([value.string_value for value in values], [u"a", u"b", u"c"]) + self.assertEqual([value.string_value for value in values], ["a", "b", "c"]) def test_w_bool(self): from google.protobuf.struct_pb2 import Value @@ -131,11 +149,9 @@ def test_w_bool(self): self.assertEqual(value_pb.bool_value, True) def test_w_int(self): - import six from google.protobuf.struct_pb2 import Value - for int_type in six.integer_types: # include 'long' on Python 2 - value_pb = self._callFUT(int_type(42)) + value_pb = self._callFUT(42) self.assertIsInstance(value_pb, Value) self.assertEqual(value_pb.string_value, "42") @@ -176,17 +192,38 @@ def test_w_date(self): self.assertIsInstance(value_pb, Value) self.assertEqual(value_pb.string_value, today.isoformat()) + def test_w_date_pre1000ad(self): + import datetime + from google.protobuf.struct_pb2 import Value + + when = datetime.date(800, 2, 25) + value_pb = self._callFUT(when) + self.assertIsInstance(value_pb, Value) + self.assertEqual(value_pb.string_value, "0800-02-25") + def test_w_timestamp_w_nanos(self): - import pytz + import datetime + from google.protobuf.struct_pb2 import Value + from google.api_core import datetime_helpers + + when = datetime_helpers.DatetimeWithNanoseconds( + 2016, 
12, 20, 21, 13, 47, nanosecond=123456789, tzinfo=datetime.timezone.utc + ) + value_pb = self._callFUT(when) + self.assertIsInstance(value_pb, Value) + self.assertEqual(value_pb.string_value, "2016-12-20T21:13:47.123456789Z") + + def test_w_timestamp_w_nanos_pre1000ad(self): + import datetime from google.protobuf.struct_pb2 import Value from google.api_core import datetime_helpers when = datetime_helpers.DatetimeWithNanoseconds( - 2016, 12, 20, 21, 13, 47, nanosecond=123456789, tzinfo=pytz.UTC + 850, 12, 20, 21, 13, 47, nanosecond=123456789, tzinfo=datetime.timezone.utc ) value_pb = self._callFUT(when) self.assertIsInstance(value_pb, Value) - self.assertEqual(value_pb.string_value, when.rfc3339()) + self.assertEqual(value_pb.string_value, "0850-12-20T21:13:47.123456789Z") def test_w_listvalue(self): from google.protobuf.struct_pb2 import Value @@ -199,19 +236,144 @@ def test_w_listvalue(self): def test_w_datetime(self): import datetime - import pytz from google.protobuf.struct_pb2 import Value - from google.api_core import datetime_helpers - now = datetime.datetime.utcnow().replace(tzinfo=pytz.UTC) - value_pb = self._callFUT(now) + when = datetime.datetime(2021, 2, 8, 0, 0, 0, tzinfo=datetime.timezone.utc) + value_pb = self._callFUT(when) self.assertIsInstance(value_pb, Value) - self.assertEqual(value_pb.string_value, datetime_helpers.to_rfc3339(now)) + self.assertEqual(value_pb.string_value, "2021-02-08T00:00:00.000000Z") + + def test_w_datetime_pre1000ad(self): + import datetime + from google.protobuf.struct_pb2 import Value + + when = datetime.datetime(916, 2, 8, 0, 0, 0, tzinfo=datetime.timezone.utc) + value_pb = self._callFUT(when) + self.assertIsInstance(value_pb, Value) + self.assertEqual(value_pb.string_value, "0916-02-08T00:00:00.000000Z") + + def test_w_timestamp_w_tz(self): + import datetime + from google.protobuf.struct_pb2 import Value + + zone = datetime.timezone(datetime.timedelta(hours=+1), name="CET") + when = datetime.datetime(2021, 2, 8, 0, 0, 0, tzinfo=zone) + value_pb = self._callFUT(when) + self.assertIsInstance(value_pb, Value) + self.assertEqual(value_pb.string_value, "2021-02-07T23:00:00.000000Z") + + def test_w_timestamp_w_tz_pre1000ad(self): + import datetime + from google.protobuf.struct_pb2 import Value + + zone = datetime.timezone(datetime.timedelta(hours=+1), name="CET") + when = datetime.datetime(721, 2, 8, 0, 0, 0, tzinfo=zone) + value_pb = self._callFUT(when) + self.assertIsInstance(value_pb, Value) + self.assertEqual(value_pb.string_value, "0721-02-07T23:00:00.000000Z") def test_w_unknown_type(self): with self.assertRaises(ValueError): self._callFUT(object()) + def test_w_numeric_precision_and_scale_valid(self): + import decimal + from google.protobuf.struct_pb2 import Value + + cases = [ + decimal.Decimal("42"), + decimal.Decimal("9.9999999999999999999999999999999999999E+28"), + decimal.Decimal("-9.9999999999999999999999999999999999999E+28"), + decimal.Decimal("99999999999999999999999999999.999999999"), + decimal.Decimal("1E+28"), + decimal.Decimal("1E-9"), + ] + for value in cases: + with self.subTest(value=value): + value_pb = self._callFUT(value) + self.assertIsInstance(value_pb, Value) + self.assertEqual(value_pb.string_value, str(value)) + + def test_w_numeric_precision_and_scale_invalid(self): + import decimal + from google.cloud.spanner_v1._helpers import ( + NUMERIC_MAX_SCALE_ERR_MSG, + NUMERIC_MAX_PRECISION_ERR_MSG, + ) + + max_precision_error_msg = NUMERIC_MAX_PRECISION_ERR_MSG.format("30") + max_scale_error_msg = 
NUMERIC_MAX_SCALE_ERR_MSG.format("10") + + cases = [ + ( + decimal.Decimal("9.9999999999999999999999999999999999999E+29"), + max_precision_error_msg, + ), + ( + decimal.Decimal("-9.9999999999999999999999999999999999999E+29"), + max_precision_error_msg, + ), + ( + decimal.Decimal("999999999999999999999999999999.99999999"), + max_precision_error_msg, + ), + ( + decimal.Decimal("-999999999999999999999999999999.99999999"), + max_precision_error_msg, + ), + ( + decimal.Decimal("999999999999999999999999999999"), + max_precision_error_msg, + ), + (decimal.Decimal("1E+29"), max_precision_error_msg), + (decimal.Decimal("1E-10"), max_scale_error_msg), + ] + + for value, err_msg in cases: + with self.subTest(value=value, err_msg=err_msg): + self.assertRaisesRegex( + ValueError, + err_msg, + lambda: self._callFUT(value), + ) + + def test_w_json(self): + import json + from google.protobuf.struct_pb2 import Value + + value = json.dumps( + {"id": 27863, "Name": "Anamika"}, sort_keys=True, separators=(",", ":") + ) + value_pb = self._callFUT(value) + self.assertIsInstance(value_pb, Value) + self.assertEqual(value_pb.string_value, value) + + def test_w_json_None(self): + from google.cloud.spanner_v1 import JsonObject + + value = JsonObject(None) + value_pb = self._callFUT(value) + self.assertTrue(value_pb.HasField("null_value")) + + def test_w_proto_message(self): + from google.protobuf.struct_pb2 import Value + import base64 + from .testdata import singer_pb2 + + singer_info = singer_pb2.SingerInfo() + expected = Value(string_value=base64.b64encode(singer_info.SerializeToString())) + value_pb = self._callFUT(singer_info) + self.assertIsInstance(value_pb, Value) + self.assertEqual(value_pb, expected) + + def test_w_proto_enum(self): + from google.protobuf.struct_pb2 import Value + from .testdata import singer_pb2 + + value_pb = self._callFUT(singer_pb2.Genre.ROCK) + self.assertIsInstance(value_pb, Value) + self.assertEqual(value_pb.string_value, "3") + class Test_make_list_value_pb(unittest.TestCase): def _callFUT(self, *args, **kw): @@ -229,7 +391,7 @@ def test_empty(self): def test_w_single_value(self): from google.protobuf.struct_pb2 import ListValue - VALUE = u"value" + VALUE = "value" result = self._callFUT(values=[VALUE]) self.assertIsInstance(result, ListValue) self.assertEqual(len(result.values), 1) @@ -238,7 +400,7 @@ def test_w_single_value(self): def test_w_multiple_values(self): from google.protobuf.struct_pb2 import ListValue - VALUE_1 = u"value" + VALUE_1 = "value" VALUE_2 = 42 result = self._callFUT(values=[VALUE_1, VALUE_2]) self.assertIsInstance(result, ListValue) @@ -271,7 +433,7 @@ def test_w_single_values(self): def test_w_multiple_values(self): from google.protobuf.struct_pb2 import ListValue - values = [[0, u"A"], [1, u"B"]] + values = [[0, "A"], [1, "B"]] result = self._callFUT(values=values) self.assertEqual(len(result), len(values)) for found, expected in zip(result, values): @@ -289,158 +451,294 @@ def _callFUT(self, *args, **kw): def test_w_null(self): from google.protobuf.struct_pb2 import Value, NULL_VALUE - from google.cloud.spanner_v1.proto.type_pb2 import Type, STRING + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode - field_type = Type(code=STRING) + field_type = Type(code=TypeCode.STRING) + field_name = "null_column" value_pb = Value(null_value=NULL_VALUE) - self.assertEqual(self._callFUT(value_pb, field_type), None) + self.assertEqual(self._callFUT(value_pb, field_type, field_name), None) def test_w_string(self): from 
google.protobuf.struct_pb2 import Value - from google.cloud.spanner_v1.proto.type_pb2 import Type, STRING + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode - VALUE = u"Value" - field_type = Type(code=STRING) + VALUE = "Value" + field_type = Type(code=TypeCode.STRING) + field_name = "string_column" value_pb = Value(string_value=VALUE) - self.assertEqual(self._callFUT(value_pb, field_type), VALUE) + self.assertEqual(self._callFUT(value_pb, field_type, field_name), VALUE) def test_w_bytes(self): from google.protobuf.struct_pb2 import Value - from google.cloud.spanner_v1.proto.type_pb2 import Type, BYTES + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode VALUE = b"Value" - field_type = Type(code=BYTES) + field_type = Type(code=TypeCode.BYTES) + field_name = "bytes_column" value_pb = Value(string_value=VALUE) - self.assertEqual(self._callFUT(value_pb, field_type), VALUE) + self.assertEqual(self._callFUT(value_pb, field_type, field_name), VALUE) def test_w_bool(self): from google.protobuf.struct_pb2 import Value - from google.cloud.spanner_v1.proto.type_pb2 import Type, BOOL + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode VALUE = True - field_type = Type(code=BOOL) + field_type = Type(code=TypeCode.BOOL) + field_name = "bool_column" value_pb = Value(bool_value=VALUE) - self.assertEqual(self._callFUT(value_pb, field_type), VALUE) + self.assertEqual(self._callFUT(value_pb, field_type, field_name), VALUE) def test_w_int(self): from google.protobuf.struct_pb2 import Value - from google.cloud.spanner_v1.proto.type_pb2 import Type, INT64 + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode VALUE = 12345 - field_type = Type(code=INT64) + field_type = Type(code=TypeCode.INT64) + field_name = "int_column" value_pb = Value(string_value=str(VALUE)) - self.assertEqual(self._callFUT(value_pb, field_type), VALUE) + self.assertEqual(self._callFUT(value_pb, field_type, field_name), VALUE) def test_w_float(self): from google.protobuf.struct_pb2 import Value - from google.cloud.spanner_v1.proto.type_pb2 import Type, FLOAT64 + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode + + VALUE = 3.14159 + field_type = Type(code=TypeCode.FLOAT64) + field_name = "float_column" + value_pb = Value(number_value=VALUE) + + self.assertEqual(self._callFUT(value_pb, field_type, field_name), VALUE) + + def test_w_float_str(self): + from google.protobuf.struct_pb2 import Value + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode + + VALUE = "3.14159" + field_type = Type(code=TypeCode.FLOAT64) + field_name = "float_str_column" + value_pb = Value(string_value=VALUE) + expected_value = 3.14159 + + self.assertEqual( + self._callFUT(value_pb, field_type, field_name), expected_value + ) + + def test_w_float32(self): + from google.cloud.spanner_v1 import Type, TypeCode + from google.protobuf.struct_pb2 import Value VALUE = 3.14159 - field_type = Type(code=FLOAT64) + field_type = Type(code=TypeCode.FLOAT32) + field_name = "float32_column" value_pb = Value(number_value=VALUE) - self.assertEqual(self._callFUT(value_pb, field_type), VALUE) + self.assertEqual(self._callFUT(value_pb, field_type, field_name), VALUE) + + def test_w_float32_str(self): + from google.cloud.spanner_v1 import Type, TypeCode + from google.protobuf.struct_pb2 import Value + + VALUE = "3.14159" + field_type = Type(code=TypeCode.FLOAT32) + 
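# Note: Spanner may encode FLOAT32 values as strings (e.g., for non-finite values); the parser is expected to coerce numeric strings back to float. +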
field_name = "float32_str_column" + value_pb = Value(string_value=VALUE) + expected_value = 3.14159 + + self.assertEqual( + self._callFUT(value_pb, field_type, field_name), expected_value + ) def test_w_date(self): import datetime from google.protobuf.struct_pb2 import Value - from google.cloud.spanner_v1.proto.type_pb2 import Type, DATE + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode VALUE = datetime.date.today() - field_type = Type(code=DATE) + field_type = Type(code=TypeCode.DATE) + field_name = "date_column" value_pb = Value(string_value=VALUE.isoformat()) - self.assertEqual(self._callFUT(value_pb, field_type), VALUE) + self.assertEqual(self._callFUT(value_pb, field_type, field_name), VALUE) def test_w_timestamp_wo_nanos(self): - import pytz + import datetime from google.protobuf.struct_pb2 import Value from google.api_core import datetime_helpers - from google.cloud.spanner_v1.proto.type_pb2 import TIMESTAMP - from google.cloud.spanner_v1.proto.type_pb2 import Type + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode value = datetime_helpers.DatetimeWithNanoseconds( - 2016, 12, 20, 21, 13, 47, microsecond=123456, tzinfo=pytz.UTC + 2016, 12, 20, 21, 13, 47, microsecond=123456, tzinfo=datetime.timezone.utc ) - field_type = Type(code=TIMESTAMP) + field_type = Type(code=TypeCode.TIMESTAMP) + field_name = "nanos_column" value_pb = Value(string_value=datetime_helpers.to_rfc3339(value)) - parsed = self._callFUT(value_pb, field_type) + parsed = self._callFUT(value_pb, field_type, field_name) self.assertIsInstance(parsed, datetime_helpers.DatetimeWithNanoseconds) self.assertEqual(parsed, value) def test_w_timestamp_w_nanos(self): - import pytz + import datetime from google.protobuf.struct_pb2 import Value from google.api_core import datetime_helpers - from google.cloud.spanner_v1.proto.type_pb2 import TIMESTAMP - from google.cloud.spanner_v1.proto.type_pb2 import Type + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode value = datetime_helpers.DatetimeWithNanoseconds( - 2016, 12, 20, 21, 13, 47, nanosecond=123456789, tzinfo=pytz.UTC + 2016, 12, 20, 21, 13, 47, nanosecond=123456789, tzinfo=datetime.timezone.utc ) - field_type = Type(code=TIMESTAMP) + field_type = Type(code=TypeCode.TIMESTAMP) + field_name = "timestamp_column" value_pb = Value(string_value=datetime_helpers.to_rfc3339(value)) - parsed = self._callFUT(value_pb, field_type) + parsed = self._callFUT(value_pb, field_type, field_name) self.assertIsInstance(parsed, datetime_helpers.DatetimeWithNanoseconds) self.assertEqual(parsed, value) def test_w_array_empty(self): - from google.protobuf.struct_pb2 import Value - from google.cloud.spanner_v1.proto.type_pb2 import Type, ARRAY, INT64 + from google.protobuf.struct_pb2 import Value, ListValue + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode - field_type = Type(code=ARRAY, array_element_type=Type(code=INT64)) - value_pb = Value() + field_type = Type( + code=TypeCode.ARRAY, array_element_type=Type(code=TypeCode.INT64) + ) + field_name = "array_empty_column" + value_pb = Value(list_value=ListValue(values=[])) - self.assertEqual(self._callFUT(value_pb, field_type), []) + self.assertEqual(self._callFUT(value_pb, field_type, field_name), []) def test_w_array_non_empty(self): from google.protobuf.struct_pb2 import Value, ListValue - from google.cloud.spanner_v1.proto.type_pb2 import Type, ARRAY, INT64 + from google.cloud.spanner_v1 import 
Type + from google.cloud.spanner_v1 import TypeCode - field_type = Type(code=ARRAY, array_element_type=Type(code=INT64)) + field_type = Type( + code=TypeCode.ARRAY, array_element_type=Type(code=TypeCode.INT64) + ) + field_name = "array_non_empty_column" VALUES = [32, 19, 5] values_pb = ListValue( values=[Value(string_value=str(value)) for value in VALUES] ) value_pb = Value(list_value=values_pb) - self.assertEqual(self._callFUT(value_pb, field_type), VALUES) + self.assertEqual(self._callFUT(value_pb, field_type, field_name), VALUES) def test_w_struct(self): from google.protobuf.struct_pb2 import Value - from google.cloud.spanner_v1.proto.type_pb2 import Type, StructType - from google.cloud.spanner_v1.proto.type_pb2 import STRUCT, STRING, INT64 + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import StructType + from google.cloud.spanner_v1 import TypeCode from google.cloud.spanner_v1._helpers import _make_list_value_pb - VALUES = [u"phred", 32] + VALUES = ["phred", 32] struct_type_pb = StructType( fields=[ - StructType.Field(name="name", type=Type(code=STRING)), - StructType.Field(name="age", type=Type(code=INT64)), + StructType.Field(name="name", type_=Type(code=TypeCode.STRING)), + StructType.Field(name="age", type_=Type(code=TypeCode.INT64)), ] ) - field_type = Type(code=STRUCT, struct_type=struct_type_pb) + field_type = Type(code=TypeCode.STRUCT, struct_type=struct_type_pb) + field_name = "struct_column" value_pb = Value(list_value=_make_list_value_pb(VALUES)) - self.assertEqual(self._callFUT(value_pb, field_type), VALUES) + self.assertEqual(self._callFUT(value_pb, field_type, field_name), VALUES) + + def test_w_numeric(self): + import decimal + from google.protobuf.struct_pb2 import Value + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode + + VALUE = decimal.Decimal("99999999999999999999999999999.999999999") + field_type = Type(code=TypeCode.NUMERIC) + field_name = "numeric_column" + value_pb = Value(string_value=str(VALUE)) + + self.assertEqual(self._callFUT(value_pb, field_type, field_name), VALUE) + + def test_w_json(self): + import json + from google.protobuf.struct_pb2 import Value + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode + + VALUE = {"id": 27863, "Name": "Anamika"} + str_repr = json.dumps(VALUE, sort_keys=True, separators=(",", ":")) + + field_type = Type(code=TypeCode.JSON) + field_name = "json_column" + value_pb = Value(string_value=str_repr) + + self.assertEqual(self._callFUT(value_pb, field_type, field_name), VALUE) + + VALUE = None + str_repr = json.dumps(VALUE, sort_keys=True, separators=(",", ":")) + + field_type = Type(code=TypeCode.JSON) + value_pb = Value(string_value=str_repr) + + self.assertEqual(self._callFUT(value_pb, field_type, field_name), {}) def test_w_unknown_type(self): from google.protobuf.struct_pb2 import Value - from google.cloud.spanner_v1.proto.type_pb2 import Type - from google.cloud.spanner_v1.proto.type_pb2 import TYPE_CODE_UNSPECIFIED + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode - field_type = Type(code=TYPE_CODE_UNSPECIFIED) + field_type = Type(code=TypeCode.TYPE_CODE_UNSPECIFIED) + field_name = "unknown_column" value_pb = Value(string_value="Borked") with self.assertRaises(ValueError): - self._callFUT(value_pb, field_type) + self._callFUT(value_pb, field_type, field_name) + + def test_w_proto_message(self): + from google.protobuf.struct_pb2 import Value + from google.cloud.spanner_v1 import 
Type + from google.cloud.spanner_v1 import TypeCode + import base64 + from .testdata import singer_pb2 + + VALUE = singer_pb2.SingerInfo() + field_type = Type(code=TypeCode.PROTO) + field_name = "proto_message_column" + value_pb = Value(string_value=base64.b64encode(VALUE.SerializeToString())) + column_info = {"proto_message_column": singer_pb2.SingerInfo()} + + self.assertEqual( + self._callFUT(value_pb, field_type, field_name, column_info), VALUE + ) + + def test_w_proto_enum(self): + from google.protobuf.struct_pb2 import Value + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode + from .testdata import singer_pb2 + + VALUE = "ROCK" + field_type = Type(code=TypeCode.ENUM) + field_name = "proto_enum_column" + value_pb = Value(string_value=str(singer_pb2.Genre.ROCK)) + column_info = {"proto_enum_column": singer_pb2.Genre} + + self.assertEqual( + self._callFUT(value_pb, field_type, field_name, column_info), VALUE + ) class Test_parse_list_value_pbs(unittest.TestCase): @@ -450,28 +748,30 @@ def _callFUT(self, *args, **kw): return _parse_list_value_pbs(*args, **kw) def test_empty(self): - from google.cloud.spanner_v1.proto.type_pb2 import Type, StructType - from google.cloud.spanner_v1.proto.type_pb2 import STRING, INT64 + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import StructType + from google.cloud.spanner_v1 import TypeCode struct_type_pb = StructType( fields=[ - StructType.Field(name="name", type=Type(code=STRING)), - StructType.Field(name="age", type=Type(code=INT64)), + StructType.Field(name="name", type_=Type(code=TypeCode.STRING)), + StructType.Field(name="age", type_=Type(code=TypeCode.INT64)), ] ) self.assertEqual(self._callFUT(rows=[], row_type=struct_type_pb), []) def test_non_empty(self): - from google.cloud.spanner_v1.proto.type_pb2 import Type, StructType - from google.cloud.spanner_v1.proto.type_pb2 import STRING, INT64 + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import StructType + from google.cloud.spanner_v1 import TypeCode from google.cloud.spanner_v1._helpers import _make_list_value_pbs - VALUES = [[u"phred", 32], [u"bharney", 31]] + VALUES = [["phred", 32], ["bharney", 31]] struct_type_pb = StructType( fields=[ - StructType.Field(name="name", type=Type(code=STRING)), - StructType.Field(name="age", type=Type(code=INT64)), + StructType.Field(name="name", type_=Type(code=TypeCode.STRING)), + StructType.Field(name="age", type_=Type(code=TypeCode.INT64)), ] ) values_pbs = _make_list_value_pbs(VALUES) @@ -506,3 +806,788 @@ def test(self): prefix = "prefix" metadata = self._call_fut(prefix) self.assertEqual(metadata, [("google-cloud-resource-prefix", prefix)]) + + +class Test_retry(unittest.TestCase): + class test_class: + def test_fxn(self): + return True + + def test_retry_on_error(self): + from google.api_core.exceptions import InternalServerError, NotFound + from google.cloud.spanner_v1._helpers import _retry + import functools + + test_api = mock.create_autospec(self.test_class) + test_api.test_fxn.side_effect = [ + InternalServerError("testing"), + NotFound("testing"), + True, + ] + + _retry(functools.partial(test_api.test_fxn), delay=0) + + self.assertEqual(test_api.test_fxn.call_count, 3) + + def test_retry_allowed_exceptions(self): + from google.api_core.exceptions import InternalServerError, NotFound + from google.cloud.spanner_v1._helpers import _retry + import functools + + test_api = mock.create_autospec(self.test_class) + test_api.test_fxn.side_effect = [ + 
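# NotFound is listed in allowed_exceptions, so it is retried; the InternalServerError that follows is not allowed and should propagate. +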
NotFound("testing"), + InternalServerError("testing"), + True, + ] + + with self.assertRaises(InternalServerError): + _retry( + functools.partial(test_api.test_fxn), + allowed_exceptions={NotFound: None}, + delay=0, + ) + + self.assertEqual(test_api.test_fxn.call_count, 2) + + def test_retry_count(self): + from google.api_core.exceptions import InternalServerError + from google.cloud.spanner_v1._helpers import _retry + import functools + + test_api = mock.create_autospec(self.test_class) + test_api.test_fxn.side_effect = [ + InternalServerError("testing"), + InternalServerError("testing"), + ] + + with self.assertRaises(InternalServerError): + _retry(functools.partial(test_api.test_fxn), retry_count=1, delay=0) + + self.assertEqual(test_api.test_fxn.call_count, 2) + + def test_check_rst_stream_error(self): + from google.api_core.exceptions import InternalServerError + from google.cloud.spanner_v1._helpers import _retry, _check_rst_stream_error + import functools + + test_api = mock.create_autospec(self.test_class) + test_api.test_fxn.side_effect = [ + InternalServerError("Received unexpected EOS on DATA frame from server"), + InternalServerError("RST_STREAM"), + True, + ] + + _retry( + functools.partial(test_api.test_fxn), + allowed_exceptions={InternalServerError: _check_rst_stream_error}, + delay=0, + ) + + self.assertEqual(test_api.test_fxn.call_count, 3) + + def test_retry_on_aborted_exception_with_success_after_first_aborted_retry(self): + from google.api_core.exceptions import Aborted + import time + from google.cloud.spanner_v1._helpers import _retry_on_aborted_exception + import functools + + test_api = mock.create_autospec(self.test_class) + test_api.test_fxn.side_effect = [ + Aborted("aborted exception", errors=("Aborted error")), + "true", + ] + deadline = time.time() + 30 + result_after_retry = _retry_on_aborted_exception( + functools.partial(test_api.test_fxn), deadline, default_retry_delay=0 + ) + + self.assertEqual(test_api.test_fxn.call_count, 2) + self.assertTrue(result_after_retry) + + def test_retry_on_aborted_exception_with_success_after_three_retries(self): + from google.api_core.exceptions import Aborted + import time + from google.cloud.spanner_v1._helpers import _retry_on_aborted_exception + import functools + + test_api = mock.create_autospec(self.test_class) + # Case where aborted exception is thrown after other generic exceptions + aborted = Aborted("aborted exception", errors=["Aborted error"]) + test_api.test_fxn.side_effect = [ + aborted, + aborted, + aborted, + "true", + ] + deadline = time.time() + 30 + _retry_on_aborted_exception( + functools.partial(test_api.test_fxn), + deadline=deadline, + default_retry_delay=0, + ) + + self.assertEqual(test_api.test_fxn.call_count, 4) + + def test_retry_on_aborted_exception_raises_aborted_if_deadline_expires(self): + from google.api_core.exceptions import Aborted + import time + from google.cloud.spanner_v1._helpers import _retry_on_aborted_exception + import functools + + test_api = mock.create_autospec(self.test_class) + test_api.test_fxn.side_effect = [ + Aborted("aborted exception", errors=("Aborted error")), + "true", + ] + deadline = time.time() + 0.001 + with self.assertRaises(Aborted): + _retry_on_aborted_exception( + functools.partial(test_api.test_fxn), + deadline=deadline, + default_retry_delay=0.01, + ) + + self.assertEqual(test_api.test_fxn.call_count, 1) + + +class Test_metadata_with_leader_aware_routing(unittest.TestCase): + def _call_fut(self, *args, **kw): + from google.cloud.spanner_v1._helpers import 
_metadata_with_leader_aware_routing + + return _metadata_with_leader_aware_routing(*args, **kw) + + def test(self): + value = True + metadata = self._call_fut(value) + self.assertEqual( + metadata, ("x-goog-spanner-route-to-leader", str(value).lower()) + ) + + +class Test_merge_transaction_options(unittest.TestCase): + def _callFUT(self, *args, **kw): + from google.cloud.spanner_v1._helpers import _merge_Transaction_Options + + return _merge_Transaction_Options(*args, **kw) + + def test_default_none_and_merge_none(self): + default = merge = None + result = self._callFUT(default, merge) + self.assertIsNone(result) + + def test_default_options_and_merge_none(self): + default = TransactionOptions( + isolation_level=TransactionOptions.IsolationLevel.REPEATABLE_READ, + read_write=TransactionOptions.ReadWrite( + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC, + ), + ) + merge = None + result = self._callFUT(default, merge) + expected = default + self.assertEqual(result, expected) + + def test_default_none_and_merge_options(self): + default = None + merge = TransactionOptions( + isolation_level=TransactionOptions.IsolationLevel.SERIALIZABLE, + read_write=TransactionOptions.ReadWrite( + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.OPTIMISTIC, + ), + ) + expected = merge + result = self._callFUT(default, merge) + self.assertEqual(result, expected) + + def test_default_and_merge_isolation_options(self): + default = TransactionOptions( + isolation_level=TransactionOptions.IsolationLevel.SERIALIZABLE, + read_write=TransactionOptions.ReadWrite(), + ) + merge = TransactionOptions( + isolation_level=TransactionOptions.IsolationLevel.REPEATABLE_READ, + exclude_txn_from_change_streams=True, + ) + expected = TransactionOptions( + isolation_level=TransactionOptions.IsolationLevel.REPEATABLE_READ, + read_write=TransactionOptions.ReadWrite(), + exclude_txn_from_change_streams=True, + ) + result = self._callFUT(default, merge) + self.assertEqual(result, expected) + + def test_default_isolation_and_merge_options(self): + default = TransactionOptions( + isolation_level=TransactionOptions.IsolationLevel.SERIALIZABLE + ) + merge = TransactionOptions( + read_write=TransactionOptions.ReadWrite(), + exclude_txn_from_change_streams=True, + ) + expected = TransactionOptions( + isolation_level=TransactionOptions.IsolationLevel.SERIALIZABLE, + read_write=TransactionOptions.ReadWrite(), + exclude_txn_from_change_streams=True, + ) + result = self._callFUT(default, merge) + self.assertEqual(result, expected) + + def test_default_isolation_and_merge_options_isolation_unspecified(self): + default = TransactionOptions( + isolation_level=TransactionOptions.IsolationLevel.SERIALIZABLE + ) + merge = TransactionOptions( + read_write=TransactionOptions.ReadWrite(), + exclude_txn_from_change_streams=True, + isolation_level=TransactionOptions.IsolationLevel.ISOLATION_LEVEL_UNSPECIFIED, + ) + expected = TransactionOptions( + isolation_level=TransactionOptions.IsolationLevel.SERIALIZABLE, + read_write=TransactionOptions.ReadWrite(), + exclude_txn_from_change_streams=True, + ) + result = self._callFUT(default, merge) + self.assertEqual(result, expected) + + def test_default_and_merge_read_lock_mode_options(self): + default = TransactionOptions( + read_write=TransactionOptions.ReadWrite( + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC, + ), + ) + merge = TransactionOptions( + read_write=TransactionOptions.ReadWrite( + 
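# The merge options' explicit OPTIMISTIC lock mode should override the default's PESSIMISTIC. +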
read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.OPTIMISTIC, + ), + exclude_txn_from_change_streams=True, + ) + expected = TransactionOptions( + read_write=TransactionOptions.ReadWrite( + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.OPTIMISTIC, + ), + exclude_txn_from_change_streams=True, + ) + result = self._callFUT(default, merge) + self.assertEqual(result, expected) + + def test_default_read_lock_mode_and_merge_options(self): + default = TransactionOptions( + read_write=TransactionOptions.ReadWrite( + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.OPTIMISTIC, + ), + ) + merge = TransactionOptions( + read_write=TransactionOptions.ReadWrite(), + exclude_txn_from_change_streams=True, + ) + expected = TransactionOptions( + read_write=TransactionOptions.ReadWrite( + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.OPTIMISTIC, + ), + exclude_txn_from_change_streams=True, + ) + result = self._callFUT(default, merge) + self.assertEqual(result, expected) + + def test_default_read_lock_mode_and_merge_options_isolation_unspecified(self): + default = TransactionOptions( + read_write=TransactionOptions.ReadWrite( + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.OPTIMISTIC, + ), + ) + merge = TransactionOptions( + read_write=TransactionOptions.ReadWrite( + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.READ_LOCK_MODE_UNSPECIFIED, + ), + exclude_txn_from_change_streams=True, + ) + expected = TransactionOptions( + read_write=TransactionOptions.ReadWrite( + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.OPTIMISTIC, + ), + exclude_txn_from_change_streams=True, + ) + result = self._callFUT(default, merge) + self.assertEqual(result, expected) + + +class Test_interval(unittest.TestCase): + from google.protobuf.struct_pb2 import Value + from google.cloud.spanner_v1 import Interval + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode + + def _callFUT(self, *args, **kw): + from google.cloud.spanner_v1._helpers import _make_value_pb + + return _make_value_pb(*args, **kw) + + def test_interval_cases(self): + test_cases = [ + { + "name": "Basic interval", + "interval": self.Interval(months=14, days=3, nanos=43926789000123), + "expected": "P1Y2M3DT12H12M6.789000123S", + "expected_type": self.Type(code=self.TypeCode.INTERVAL), + }, + { + "name": "Months only", + "interval": self.Interval(months=10, days=0, nanos=0), + "expected": "P10M", + "expected_type": self.Type(code=self.TypeCode.INTERVAL), + }, + { + "name": "Days only", + "interval": self.Interval(months=0, days=10, nanos=0), + "expected": "P10D", + "expected_type": self.Type(code=self.TypeCode.INTERVAL), + }, + { + "name": "Seconds only", + "interval": self.Interval(months=0, days=0, nanos=10000000000), + "expected": "PT10S", + "expected_type": self.Type(code=self.TypeCode.INTERVAL), + }, + { + "name": "Milliseconds only", + "interval": self.Interval(months=0, days=0, nanos=10000000), + "expected": "PT0.010S", + "expected_type": self.Type(code=self.TypeCode.INTERVAL), + }, + { + "name": "Microseconds only", + "interval": self.Interval(months=0, days=0, nanos=10000), + "expected": "PT0.000010S", + "expected_type": self.Type(code=self.TypeCode.INTERVAL), + }, + { + "name": "Nanoseconds only", + "interval": self.Interval(months=0, days=0, nanos=10), + "expected": "PT0.000000010S", + "expected_type": self.Type(code=self.TypeCode.INTERVAL), + }, + { + "name": "Mixed components", + "interval": self.Interval(months=10, days=20, nanos=1030), + "expected": 
"P10M20DT0.000001030S", + "expected_type": self.Type(code=self.TypeCode.INTERVAL), + }, + { + "name": "Mixed components with negative nanos", + "interval": self.Interval(months=10, days=20, nanos=-1030), + "expected": "P10M20DT-0.000001030S", + "expected_type": self.Type(code=self.TypeCode.INTERVAL), + }, + { + "name": "Negative interval", + "interval": self.Interval(months=-14, days=-3, nanos=-43926789000123), + "expected": "P-1Y-2M-3DT-12H-12M-6.789000123S", + "expected_type": self.Type(code=self.TypeCode.INTERVAL), + }, + { + "name": "Mixed signs", + "interval": self.Interval(months=10, days=3, nanos=-41401234000000), + "expected": "P10M3DT-11H-30M-1.234S", + "expected_type": self.Type(code=self.TypeCode.INTERVAL), + }, + { + "name": "Large values", + "interval": self.Interval( + months=25, days=15, nanos=316223999999999999999 + ), + "expected": "P2Y1M15DT87839999H59M59.999999999S", + "expected_type": self.Type(code=self.TypeCode.INTERVAL), + }, + { + "name": "Zero interval", + "interval": self.Interval(months=0, days=0, nanos=0), + "expected": "P0Y", + "expected_type": self.Type(code=self.TypeCode.INTERVAL), + }, + ] + + for case in test_cases: + with self.subTest(name=case["name"]): + value_pb = self._callFUT(case["interval"]) + self.assertIsInstance(value_pb, self.Value) + self.assertEqual(value_pb.string_value, case["expected"]) + # TODO: Add type checking once we have access to the type information + + +class Test_parse_interval(unittest.TestCase): + from google.protobuf.struct_pb2 import Value + + def _callFUT(self, *args, **kw): + from google.cloud.spanner_v1._helpers import _parse_interval + + return _parse_interval(*args, **kw) + + def test_parse_interval_cases(self): + test_cases = [ + { + "name": "full interval with all components", + "input": "P1Y2M3DT12H12M6.789000123S", + "expected_months": 14, + "expected_days": 3, + "expected_nanos": 43926789000123, + "want_err": False, + }, + { + "name": "interval with negative minutes", + "input": "P1Y2M3DT13H-48M6S", + "expected_months": 14, + "expected_days": 3, + "expected_nanos": 43926000000000, + "want_err": False, + }, + { + "name": "date only interval", + "input": "P1Y2M3D", + "expected_months": 14, + "expected_days": 3, + "expected_nanos": 0, + "want_err": False, + }, + { + "name": "years and months only", + "input": "P1Y2M", + "expected_months": 14, + "expected_days": 0, + "expected_nanos": 0, + "want_err": False, + }, + { + "name": "years only", + "input": "P1Y", + "expected_months": 12, + "expected_days": 0, + "expected_nanos": 0, + "want_err": False, + }, + { + "name": "months only", + "input": "P2M", + "expected_months": 2, + "expected_days": 0, + "expected_nanos": 0, + "want_err": False, + }, + { + "name": "days only", + "input": "P3D", + "expected_months": 0, + "expected_days": 3, + "expected_nanos": 0, + "want_err": False, + }, + { + "name": "time components with fractional seconds", + "input": "PT4H25M6.7890001S", + "expected_months": 0, + "expected_days": 0, + "expected_nanos": 15906789000100, + "want_err": False, + }, + { + "name": "time components without fractional seconds", + "input": "PT4H25M6S", + "expected_months": 0, + "expected_days": 0, + "expected_nanos": 15906000000000, + "want_err": False, + }, + { + "name": "hours and seconds only", + "input": "PT4H30S", + "expected_months": 0, + "expected_days": 0, + "expected_nanos": 14430000000000, + "want_err": False, + }, + { + "name": "hours and minutes only", + "input": "PT4H1M", + "expected_months": 0, + "expected_days": 0, + "expected_nanos": 14460000000000, + 
"want_err": False, + }, + { + "name": "minutes only", + "input": "PT5M", + "expected_months": 0, + "expected_days": 0, + "expected_nanos": 300000000000, + "want_err": False, + }, + { + "name": "fractional seconds only", + "input": "PT6.789S", + "expected_months": 0, + "expected_days": 0, + "expected_nanos": 6789000000, + "want_err": False, + }, + { + "name": "small fractional seconds", + "input": "PT0.123S", + "expected_months": 0, + "expected_days": 0, + "expected_nanos": 123000000, + "want_err": False, + }, + { + "name": "very small fractional seconds", + "input": "PT.000000123S", + "expected_months": 0, + "expected_days": 0, + "expected_nanos": 123, + "want_err": False, + }, + { + "name": "zero years", + "input": "P0Y", + "expected_months": 0, + "expected_days": 0, + "expected_nanos": 0, + "want_err": False, + }, + { + "name": "all negative components", + "input": "P-1Y-2M-3DT-12H-12M-6.789000123S", + "expected_months": -14, + "expected_days": -3, + "expected_nanos": -43926789000123, + "want_err": False, + }, + { + "name": "mixed signs in components", + "input": "P1Y-2M3DT13H-51M6.789S", + "expected_months": 10, + "expected_days": 3, + "expected_nanos": 43746789000000, + "want_err": False, + }, + { + "name": "negative years with mixed signs", + "input": "P-1Y2M-3DT-13H49M-6.789S", + "expected_months": -10, + "expected_days": -3, + "expected_nanos": -43866789000000, + "want_err": False, + }, + { + "name": "negative time components", + "input": "P1Y2M3DT-4H25M-6.7890001S", + "expected_months": 14, + "expected_days": 3, + "expected_nanos": -12906789000100, + "want_err": False, + }, + { + "name": "large time values", + "input": "PT100H100M100.5S", + "expected_months": 0, + "expected_days": 0, + "expected_nanos": 366100500000000, + "want_err": False, + }, + { + "name": "only time components with seconds", + "input": "PT12H30M1S", + "expected_months": 0, + "expected_days": 0, + "expected_nanos": 45001000000000, + "want_err": False, + }, + { + "name": "date and time no seconds", + "input": "P1Y2M3DT12H30M", + "expected_months": 14, + "expected_days": 3, + "expected_nanos": 45000000000000, + "want_err": False, + }, + { + "name": "fractional seconds with max digits", + "input": "PT0.123456789S", + "expected_months": 0, + "expected_days": 0, + "expected_nanos": 123456789, + "want_err": False, + }, + { + "name": "hours and fractional seconds", + "input": "PT1H0.5S", + "expected_months": 0, + "expected_days": 0, + "expected_nanos": 3600500000000, + "want_err": False, + }, + { + "name": "years and months to months with fractional seconds", + "input": "P1Y2M3DT12H30M1.23456789S", + "expected_months": 14, + "expected_days": 3, + "expected_nanos": 45001234567890, + "want_err": False, + }, + { + "name": "comma as decimal point", + "input": "P1Y2M3DT12H30M1,23456789S", + "expected_months": 14, + "expected_days": 3, + "expected_nanos": 45001234567890, + "want_err": False, + }, + { + "name": "fractional seconds without 0 before decimal", + "input": "PT.5S", + "expected_months": 0, + "expected_days": 0, + "expected_nanos": 500000000, + "want_err": False, + }, + { + "name": "mixed signs", + "input": "P-1Y2M3DT12H-30M1.234S", + "expected_months": -10, + "expected_days": 3, + "expected_nanos": 41401234000000, + "want_err": False, + }, + { + "name": "more mixed signs", + "input": "P1Y-2M3DT-12H30M-1.234S", + "expected_months": 10, + "expected_days": 3, + "expected_nanos": -41401234000000, + "want_err": False, + }, + { + "name": "trailing zeros after decimal", + "input": "PT1.234000S", + "expected_months": 0, + 
"expected_days": 0, + "expected_nanos": 1234000000, + "want_err": False, + }, + { + "name": "all zeros after decimal", + "input": "PT1.000S", + "expected_months": 0, + "expected_days": 0, + "expected_nanos": 1000000000, + "want_err": False, + }, + # Invalid cases + {"name": "invalid format", "input": "invalid", "want_err": True}, + {"name": "missing duration specifier", "input": "P", "want_err": True}, + {"name": "missing time components", "input": "PT", "want_err": True}, + {"name": "missing unit specifier", "input": "P1YM", "want_err": True}, + {"name": "missing T separator", "input": "P1Y2M3D4H5M6S", "want_err": True}, + { + "name": "missing decimal value", + "input": "P1Y2M3DT4H5M6.S", + "want_err": True, + }, + { + "name": "extra unit specifier", + "input": "P1Y2M3DT4H5M6.789SS", + "want_err": True, + }, + { + "name": "missing value after decimal", + "input": "P1Y2M3DT4H5M6.", + "want_err": True, + }, + { + "name": "non-digit after decimal", + "input": "P1Y2M3DT4H5M6.ABC", + "want_err": True, + }, + {"name": "missing unit", "input": "P1Y2M3", "want_err": True}, + {"name": "missing time value", "input": "P1Y2M3DT", "want_err": True}, + { + "name": "invalid negative sign position", + "input": "P-T1H", + "want_err": True, + }, + {"name": "trailing negative sign", "input": "PT1H-", "want_err": True}, + { + "name": "too many decimal places", + "input": "P1Y2M3DT4H5M6.789123456789S", + "want_err": True, + }, + { + "name": "multiple decimal points", + "input": "P1Y2M3DT4H5M6.123.456S", + "want_err": True, + }, + { + "name": "both dot and comma decimals", + "input": "P1Y2M3DT4H5M6.,789S", + "want_err": True, + }, + ] + + for case in test_cases: + with self.subTest(name=case["name"]): + value_pb = self.Value(string_value=case["input"]) + if case.get("want_err", False): + with self.assertRaises(ValueError): + self._callFUT(value_pb) + else: + result = self._callFUT(value_pb) + self.assertEqual(result.months, case["expected_months"]) + self.assertEqual(result.days, case["expected_days"]) + self.assertEqual(result.nanos, case["expected_nanos"]) + + def test_large_values(self): + large_test_cases = [ + { + "name": "large positive hours", + "input": "PT87840000H", + "expected_months": 0, + "expected_days": 0, + "expected_nanos": 316224000000000000000, + "want_err": False, + }, + { + "name": "large negative hours", + "input": "PT-87840000H", + "expected_months": 0, + "expected_days": 0, + "expected_nanos": -316224000000000000000, + "want_err": False, + }, + { + "name": "large mixed values with max precision", + "input": "P2Y1M15DT87839999H59M59.999999999S", + "expected_months": 25, + "expected_days": 15, + "expected_nanos": 316223999999999999999, + "want_err": False, + }, + { + "name": "large mixed negative values with max precision", + "input": "P2Y1M15DT-87839999H-59M-59.999999999S", + "expected_months": 25, + "expected_days": 15, + "expected_nanos": -316223999999999999999, + "want_err": False, + }, + ] + + for case in large_test_cases: + with self.subTest(name=case["name"]): + value_pb = self.Value(string_value=case["input"]) + if case.get("want_err", False): + with self.assertRaises(ValueError): + self._callFUT(value_pb) + else: + result = self._callFUT(value_pb) + self.assertEqual(result.months, case["expected_months"]) + self.assertEqual(result.days, case["expected_days"]) + self.assertEqual(result.nanos, case["expected_nanos"]) diff --git a/tests/unit/test__opentelemetry_tracing.py b/tests/unit/test__opentelemetry_tracing.py new file mode 100644 index 0000000000..b3d49355c0 --- /dev/null +++ 
b/tests/unit/test__opentelemetry_tracing.py @@ -0,0 +1,233 @@ +import importlib +import mock +import unittest +import sys + +try: + from opentelemetry import trace as trace_api + from opentelemetry.trace.status import StatusCode +except ImportError: + pass + +from google.api_core.exceptions import GoogleAPICallError +from google.cloud.spanner_v1 import _opentelemetry_tracing + +from tests._helpers import ( + OpenTelemetryBase, + LIB_VERSION, + HAS_OPENTELEMETRY_INSTALLED, + enrich_with_otel_scope, +) + + +def _make_rpc_error(error_cls, trailing_metadata=None): + import grpc + + grpc_error = mock.create_autospec(grpc.Call, instance=True) + grpc_error.trailing_metadata.return_value = trailing_metadata + return error_cls("error", errors=(grpc_error,)) + + +def _make_session(): + from google.cloud.spanner_v1.session import Session + + return mock.Mock(autospec=Session, instance=True) + + +# Skip all of these tests if we don't have OpenTelemetry +if HAS_OPENTELEMETRY_INSTALLED: + + class TestNoTracing(unittest.TestCase): + def setUp(self): + self._temp_opentelemetry = sys.modules["opentelemetry"] + + sys.modules["opentelemetry"] = None + importlib.reload(_opentelemetry_tracing) + + def tearDown(self): + sys.modules["opentelemetry"] = self._temp_opentelemetry + importlib.reload(_opentelemetry_tracing) + + def test_no_trace_call(self): + with _opentelemetry_tracing.trace_call("Test", _make_session()) as no_span: + self.assertIsNone(no_span) + + class TestTracing(OpenTelemetryBase): + def test_trace_call(self): + extra_attributes = { + "attribute1": "value1", + # Since our database is mocked, we have to override the db.instance parameter so it is a string + "db.instance": "database_name", + } + + expected_attributes = enrich_with_otel_scope( + { + "db.type": "spanner", + "db.url": "spanner.googleapis.com", + "net.host.name": "spanner.googleapis.com", + "gcp.client.service": "spanner", + "gcp.client.version": LIB_VERSION, + "gcp.client.repo": "googleapis/python-spanner", + } + ) + expected_attributes.update(extra_attributes) + + with _opentelemetry_tracing.trace_call( + "CloudSpanner.Test", _make_session(), extra_attributes + ) as span: + span.set_attribute("after_setup_attribute", 1) + + expected_attributes["after_setup_attribute"] = 1 + + span_list = self.ot_exporter.get_finished_spans() + self.assertEqual(len(span_list), 1) + span = span_list[0] + self.assertEqual(span.kind, trace_api.SpanKind.CLIENT) + self.assertEqual(span.attributes, expected_attributes) + self.assertEqual(span.name, "CloudSpanner.Test") + self.assertEqual(span.status.status_code, StatusCode.OK) + + def test_trace_error(self): + extra_attributes = {"db.instance": "database_name"} + + expected_attributes = enrich_with_otel_scope( + { + "db.type": "spanner", + "db.url": "spanner.googleapis.com", + "net.host.name": "spanner.googleapis.com", + "gcp.client.service": "spanner", + "gcp.client.version": LIB_VERSION, + "gcp.client.repo": "googleapis/python-spanner", + } + ) + expected_attributes.update(extra_attributes) + + with self.assertRaises(GoogleAPICallError): + with _opentelemetry_tracing.trace_call( + "CloudSpanner.Test", _make_session(), extra_attributes + ) as span: + from google.api_core.exceptions import InvalidArgument + + raise _make_rpc_error(InvalidArgument) + + span_list = self.ot_exporter.get_finished_spans() + self.assertEqual(len(span_list), 1) + span = span_list[0] + self.assertEqual(span.kind, trace_api.SpanKind.CLIENT) + self.assertEqual(dict(span.attributes), expected_attributes) + self.assertEqual(span.name, 
"CloudSpanner.Test") + self.assertEqual(span.status.status_code, StatusCode.ERROR) + + def test_trace_grpc_error(self): + extra_attributes = {"db.instance": "database_name"} + + expected_attributes = enrich_with_otel_scope( + { + "db.type": "spanner", + "db.url": "spanner.googleapis.com:443", + "net.host.name": "spanner.googleapis.com:443", + } + ) + expected_attributes.update(extra_attributes) + + with self.assertRaises(GoogleAPICallError): + with _opentelemetry_tracing.trace_call( + "CloudSpanner.Test", _make_session(), extra_attributes + ) as span: + from google.api_core.exceptions import DataLoss + + raise DataLoss("error") + + span_list = self.ot_exporter.get_finished_spans() + self.assertEqual(len(span_list), 1) + span = span_list[0] + self.assertEqual(span.status.status_code, StatusCode.ERROR) + + def test_trace_codeless_error(self): + extra_attributes = {"db.instance": "database_name"} + + expected_attributes = enrich_with_otel_scope( + { + "db.type": "spanner", + "db.url": "spanner.googleapis.com:443", + "net.host.name": "spanner.googleapis.com:443", + } + ) + expected_attributes.update(extra_attributes) + + with self.assertRaises(GoogleAPICallError): + with _opentelemetry_tracing.trace_call( + "CloudSpanner.Test", _make_session(), extra_attributes + ) as span: + raise GoogleAPICallError("error") + + span_list = self.ot_exporter.get_finished_spans() + self.assertEqual(len(span_list), 1) + span = span_list[0] + self.assertEqual(span.status.status_code, StatusCode.ERROR) + + def test_trace_call_terminal_span_status_ALWAYS_ON_sampler(self): + # Verify that we don't unconditionally set the terminal span status to + # SpanStatus.OK per https://github.com/googleapis/python-spanner/issues/1246 + from opentelemetry.sdk.trace.export import SimpleSpanProcessor + from opentelemetry.sdk.trace.export.in_memory_span_exporter import ( + InMemorySpanExporter, + ) + from opentelemetry.trace.status import Status, StatusCode + from opentelemetry.sdk.trace import TracerProvider + from opentelemetry.sdk.trace.sampling import ALWAYS_ON + + tracer_provider = TracerProvider(sampler=ALWAYS_ON) + trace_exporter = InMemorySpanExporter() + tracer_provider.add_span_processor(SimpleSpanProcessor(trace_exporter)) + observability_options = dict(tracer_provider=tracer_provider) + + session = _make_session() + with _opentelemetry_tracing.trace_call( + "VerifyTerminalSpanStatus", + session, + observability_options=observability_options, + ) as span: + span.set_status(Status(StatusCode.ERROR, "Our error exhibit")) + + span_list = trace_exporter.get_finished_spans() + got_statuses = [] + + for span in span_list: + got_statuses.append( + (span.name, span.status.status_code, span.status.description) + ) + + want_statuses = [ + ("VerifyTerminalSpanStatus", StatusCode.ERROR, "Our error exhibit"), + ] + assert got_statuses == want_statuses + + def test_trace_call_terminal_span_status_ALWAYS_OFF_sampler(self): + # Verify that we get the correct status even when using the ALWAYS_OFF + # sampler which produces the NonRecordingSpan per + # https://github.com/googleapis/python-spanner/issues/1286 + from opentelemetry.sdk.trace.export import SimpleSpanProcessor + from opentelemetry.sdk.trace.export.in_memory_span_exporter import ( + InMemorySpanExporter, + ) + from opentelemetry.sdk.trace import TracerProvider + from opentelemetry.sdk.trace.sampling import ALWAYS_OFF + + tracer_provider = TracerProvider(sampler=ALWAYS_OFF) + trace_exporter = InMemorySpanExporter() + 
tracer_provider.add_span_processor(SimpleSpanProcessor(trace_exporter)) + observability_options = dict(tracer_provider=tracer_provider) + + session = _make_session() + used_span = None + with _opentelemetry_tracing.trace_call( + "VerifyWithNonRecordingSpan", + session, + observability_options=observability_options, + ) as span: + used_span = span + + assert type(used_span).__name__ == "NonRecordingSpan" + span_list = list(trace_exporter.get_finished_spans()) + assert span_list == [] diff --git a/tests/unit/test_atomic_counter.py b/tests/unit/test_atomic_counter.py new file mode 100644 index 0000000000..92d10cac79 --- /dev/null +++ b/tests/unit/test_atomic_counter.py @@ -0,0 +1,78 @@ +# Copyright 2024 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import random +import threading +import unittest +from google.cloud.spanner_v1._helpers import AtomicCounter + + +class TestAtomicCounter(unittest.TestCase): + def test_initialization(self): + ac_default = AtomicCounter() + assert ac_default.value == 0 + + ac_1 = AtomicCounter(1) + assert ac_1.value == 1 + + ac_negative_1 = AtomicCounter(-1) + assert ac_negative_1.value == -1 + + def test_increment(self): + ac = AtomicCounter() + result_default = ac.increment() + assert result_default == 1 + assert ac.value == 1 + + result_with_value = ac.increment(2) + assert result_with_value == 3 + assert ac.value == 3 + result_plus_100 = ac.increment(100) + assert result_plus_100 == 103 + + def test_plus_call(self): + ac = AtomicCounter() + ac += 1 + assert ac.value == 1 + + n = ac + 2 + assert n == 3 + assert ac.value == 1 + + n = 200 + ac + assert n == 201 + assert ac.value == 1 + + def test_multiple_threads_incrementing(self): + ac = AtomicCounter() + n = 200 + m = 10 + + def do_work(): + for i in range(m): + ac.increment() + + threads = [] + for i in range(n): + th = threading.Thread(target=do_work) + threads.append(th) + th.start() + + random.shuffle(threads) + for th in threads: + th.join() + assert not th.is_alive() + + # Finally the result should be n*m + assert ac.value == n * m diff --git a/tests/unit/test_backup.py b/tests/unit/test_backup.py index 0762305220..00621c2148 100644 --- a/tests/unit/test_backup.py +++ b/tests/unit/test_backup.py @@ -47,7 +47,7 @@ def _get_target_class(self): @staticmethod def _make_database_admin_api(): - from google.cloud.spanner_v1.client import DatabaseAdminClient + from google.cloud.spanner_admin_database_v1 import DatabaseAdminClient return mock.create_autospec(DatabaseAdminClient, instance=True) @@ -62,60 +62,94 @@ def test_ctor_defaults(self): self.assertIsNone(backup._expire_time) def test_ctor_non_defaults(self): + from google.cloud.spanner_admin_database_v1 import CreateBackupEncryptionConfig + instance = _Instance(self.INSTANCE_NAME) timestamp = self._make_timestamp() + encryption_config = CreateBackupEncryptionConfig( + encryption_type=CreateBackupEncryptionConfig.EncryptionType.CUSTOMER_MANAGED_ENCRYPTION, + kms_key_name="key_name", + ) backup = self._make_one( - 
self.BACKUP_ID, instance, database=self.DATABASE_NAME, expire_time=timestamp + self.BACKUP_ID, + instance, + database=self.DATABASE_NAME, + expire_time=timestamp, + encryption_config=encryption_config, + ) + + self.assertEqual(backup.backup_id, self.BACKUP_ID) + self.assertIs(backup._instance, instance) + self.assertEqual(backup._database, self.DATABASE_NAME) + self.assertIsNotNone(backup._expire_time) + self.assertIs(backup._expire_time, timestamp) + self.assertEqual(backup._encryption_config, encryption_config) + + def test_ctor_w_encryption_config_dict(self): + from google.cloud.spanner_admin_database_v1 import CreateBackupEncryptionConfig + + instance = _Instance(self.INSTANCE_NAME) + timestamp = self._make_timestamp() + + encryption_config = {"encryption_type": 3, "kms_key_name": "key_name"} + backup = self._make_one( + self.BACKUP_ID, + instance, + database=self.DATABASE_NAME, + expire_time=timestamp, + encryption_config=encryption_config, ) + expected_encryption_config = CreateBackupEncryptionConfig(**encryption_config) self.assertEqual(backup.backup_id, self.BACKUP_ID) self.assertIs(backup._instance, instance) self.assertEqual(backup._database, self.DATABASE_NAME) self.assertIsNotNone(backup._expire_time) self.assertIs(backup._expire_time, timestamp) + self.assertEqual(backup._encryption_config, expected_encryption_config) def test_from_pb_project_mismatch(self): - from google.cloud.spanner_admin_database_v1.proto import backup_pb2 + from google.cloud.spanner_admin_database_v1 import Backup ALT_PROJECT = "ALT_PROJECT" client = _Client(project=ALT_PROJECT) instance = _Instance(self.INSTANCE_NAME, client) - backup_pb = backup_pb2.Backup(name=self.BACKUP_NAME) + backup_pb = Backup(name=self.BACKUP_NAME) backup_class = self._get_target_class() with self.assertRaises(ValueError): backup_class.from_pb(backup_pb, instance) def test_from_pb_instance_mismatch(self): - from google.cloud.spanner_admin_database_v1.proto import backup_pb2 + from google.cloud.spanner_admin_database_v1 import Backup ALT_INSTANCE = "/projects/%s/instances/ALT-INSTANCE" % (self.PROJECT_ID,) client = _Client() instance = _Instance(ALT_INSTANCE, client) - backup_pb = backup_pb2.Backup(name=self.BACKUP_NAME) + backup_pb = Backup(name=self.BACKUP_NAME) backup_class = self._get_target_class() with self.assertRaises(ValueError): backup_class.from_pb(backup_pb, instance) def test_from_pb_invalid_name(self): - from google.cloud.spanner_admin_database_v1.proto import backup_pb2 + from google.cloud.spanner_admin_database_v1 import Backup client = _Client() instance = _Instance(self.INSTANCE_NAME, client) - backup_pb = backup_pb2.Backup(name="invalid_format") + backup_pb = Backup(name="invalid_format") backup_class = self._get_target_class() with self.assertRaises(ValueError): backup_class.from_pb(backup_pb, instance) def test_from_pb_success(self): - from google.cloud.spanner_admin_database_v1.proto import backup_pb2 + from google.cloud.spanner_admin_database_v1 import Backup client = _Client() instance = _Instance(self.INSTANCE_NAME, client) - backup_pb = backup_pb2.Backup(name=self.BACKUP_NAME) + backup_pb = Backup(name=self.BACKUP_NAME) backup_class = self._get_target_class() backup = backup_class.from_pb(backup_pb, instance) @@ -157,11 +191,11 @@ def test_size_bytes_property(self): self.assertEqual(backup.size_bytes, expected) def test_state_property(self): - from google.cloud.spanner_admin_database_v1.gapic import enums + from google.cloud.spanner_admin_database_v1 import Backup instance = _Instance(self.INSTANCE_NAME) 
backup = self._make_one(self.BACKUP_ID, instance) - expected = backup._state = enums.Backup.State.READY + expected = backup._state = Backup.State.READY self.assertEqual(backup.state, expected) def test_referencing_databases_property(self): @@ -170,9 +204,32 @@ def test_referencing_databases_property(self): expected = backup._referencing_databases = [self.DATABASE_NAME] self.assertEqual(backup.referencing_databases, expected) + def test_encryption_info_property(self): + from google.cloud.spanner_admin_database_v1 import EncryptionInfo + + instance = _Instance(self.INSTANCE_NAME) + backup = self._make_one(self.BACKUP_ID, instance) + expected = backup._encryption_info = EncryptionInfo( + kms_key_version="kms_key_version" + ) + self.assertEqual(backup.encryption_info, expected) + + def test_encryption_config_property(self): + from google.cloud.spanner_admin_database_v1 import CreateBackupEncryptionConfig + + instance = _Instance(self.INSTANCE_NAME) + backup = self._make_one(self.BACKUP_ID, instance) + expected = backup._encryption_config = CreateBackupEncryptionConfig( + encryption_type=CreateBackupEncryptionConfig.EncryptionType.CUSTOMER_MANAGED_ENCRYPTION, + kms_key_name="kms_key_name", + ) + self.assertEqual(backup._encryption_config, expected) + def test_create_grpc_error(self): from google.api_core.exceptions import GoogleAPICallError from google.api_core.exceptions import Unknown + from google.cloud.spanner_admin_database_v1 import Backup + from google.cloud.spanner_admin_database_v1 import CreateBackupRequest client = _Client() api = client.database_admin_api = self._make_database_admin_api() @@ -184,25 +241,29 @@ def test_create_grpc_error(self): self.BACKUP_ID, instance, database=self.DATABASE_NAME, expire_time=timestamp ) - from google.cloud._helpers import _datetime_to_pb_timestamp - - backup_pb = { - "database": self.DATABASE_NAME, - "expire_time": _datetime_to_pb_timestamp(timestamp), - } + backup_pb = Backup( + database=self.DATABASE_NAME, + expire_time=timestamp, + ) with self.assertRaises(GoogleAPICallError): backup.create() - api.create_backup.assert_called_once_with( + request = CreateBackupRequest( parent=self.INSTANCE_NAME, backup_id=self.BACKUP_ID, backup=backup_pb, + ) + + api.create_backup.assert_called_once_with( + request=request, metadata=[("google-cloud-resource-prefix", backup.name)], ) def test_create_already_exists(self): from google.cloud.exceptions import Conflict + from google.cloud.spanner_admin_database_v1 import Backup + from google.cloud.spanner_admin_database_v1 import CreateBackupRequest client = _Client() api = client.database_admin_api = self._make_database_admin_api() @@ -214,25 +275,29 @@ def test_create_already_exists(self): self.BACKUP_ID, instance, database=self.DATABASE_NAME, expire_time=timestamp ) - from google.cloud._helpers import _datetime_to_pb_timestamp - - backup_pb = { - "database": self.DATABASE_NAME, - "expire_time": _datetime_to_pb_timestamp(timestamp), - } + backup_pb = Backup( + database=self.DATABASE_NAME, + expire_time=timestamp, + ) with self.assertRaises(Conflict): backup.create() - api.create_backup.assert_called_once_with( + request = CreateBackupRequest( parent=self.INSTANCE_NAME, backup_id=self.BACKUP_ID, backup=backup_pb, + ) + + api.create_backup.assert_called_once_with( + request=request, metadata=[("google-cloud-resource-prefix", backup.name)], ) def test_create_instance_not_found(self): from google.cloud.exceptions import NotFound + from 
google.cloud.spanner_admin_database_v1 import CreateBackupRequest client = _Client() api = client.database_admin_api = self._make_database_admin_api() @@ -244,20 +309,22 @@ def test_create_instance_not_found(self): self.BACKUP_ID, instance, database=self.DATABASE_NAME, expire_time=timestamp ) - from google.cloud._helpers import _datetime_to_pb_timestamp - - backup_pb = { - "database": self.DATABASE_NAME, - "expire_time": _datetime_to_pb_timestamp(timestamp), - } + backup_pb = Backup( + database=self.DATABASE_NAME, + expire_time=timestamp, + ) with self.assertRaises(NotFound): backup.create() - api.create_backup.assert_called_once_with( + request = CreateBackupRequest( parent=self.INSTANCE_NAME, backup_id=self.BACKUP_ID, backup=backup_pb, + ) + + api.create_backup.assert_called_once_with( + request=request, metadata=[("google-cloud-resource-prefix", backup.name)], ) @@ -277,34 +344,75 @@ def test_create_database_not_set(self): backup.create() def test_create_success(self): + from google.cloud.spanner_admin_database_v1 import Backup + from google.cloud.spanner_admin_database_v1 import CreateBackupRequest + from google.cloud.spanner_admin_database_v1 import CreateBackupEncryptionConfig + from datetime import datetime + from datetime import timedelta + from datetime import timezone + op_future = object() client = _Client() api = client.database_admin_api = self._make_database_admin_api() api.create_backup.return_value = op_future instance = _Instance(self.INSTANCE_NAME, client=client) - timestamp = self._make_timestamp() + version_timestamp = datetime.utcnow() - timedelta(minutes=5) + version_timestamp = version_timestamp.replace(tzinfo=timezone.utc) + expire_timestamp = self._make_timestamp() + encryption_config = {"encryption_type": 3, "kms_key_name": "key_name"} backup = self._make_one( - self.BACKUP_ID, instance, database=self.DATABASE_NAME, expire_time=timestamp + self.BACKUP_ID, + instance, + database=self.DATABASE_NAME, + expire_time=expire_timestamp, + version_time=version_timestamp, + encryption_config=encryption_config, ) - from google.cloud._helpers import _datetime_to_pb_timestamp - - backup_pb = { - "database": self.DATABASE_NAME, - "expire_time": _datetime_to_pb_timestamp(timestamp), - } + backup_pb = Backup( + database=self.DATABASE_NAME, + expire_time=expire_timestamp, + version_time=version_timestamp, + ) future = backup.create() self.assertIs(future, op_future) - api.create_backup.assert_called_once_with( + expected_encryption_config = CreateBackupEncryptionConfig(**encryption_config) + request = CreateBackupRequest( parent=self.INSTANCE_NAME, backup_id=self.BACKUP_ID, backup=backup_pb, + encryption_config=expected_encryption_config, + ) + + api.create_backup.assert_called_once_with( + request=request, metadata=[("google-cloud-resource-prefix", backup.name)], ) + def test_create_w_invalid_encryption_config(self): + from google.cloud.spanner_admin_database_v1 import CreateBackupEncryptionConfig + + client = _Client() + instance = _Instance(self.INSTANCE_NAME, client=client) + expire_timestamp = self._make_timestamp() + encryption_config = { + "encryption_type": CreateBackupEncryptionConfig.EncryptionType.GOOGLE_DEFAULT_ENCRYPTION, + "kms_key_name": "key_name", + } + backup = self._make_one( + self.BACKUP_ID, + instance, + database=self.DATABASE_NAME, + expire_time=expire_timestamp, + encryption_config=encryption_config, + ) + + with self.assertRaises(ValueError): + backup.create() + def test_exists_grpc_error(self): from google.api_core.exceptions import Unknown @@ -319,7 
+427,8 @@ def test_exists_grpc_error(self): backup.exists() api.get_backup.assert_called_once_with( - self.BACKUP_NAME, metadata=[("google-cloud-resource-prefix", backup.name)] + name=self.BACKUP_NAME, + metadata=[("google-cloud-resource-prefix", backup.name)], ) def test_exists_not_found(self): @@ -335,14 +444,15 @@ def test_exists_not_found(self): self.assertFalse(backup.exists()) api.get_backup.assert_called_once_with( - self.BACKUP_NAME, metadata=[("google-cloud-resource-prefix", backup.name)] + name=self.BACKUP_NAME, + metadata=[("google-cloud-resource-prefix", backup.name)], ) def test_exists_success(self): - from google.cloud.spanner_admin_database_v1.proto import backup_pb2 + from google.cloud.spanner_admin_database_v1 import Backup client = _Client() - backup_pb = backup_pb2.Backup(name=self.BACKUP_NAME) + backup_pb = Backup(name=self.BACKUP_NAME) api = client.database_admin_api = self._make_database_admin_api() api.get_backup.return_value = backup_pb @@ -352,7 +462,8 @@ def test_exists_success(self): self.assertTrue(backup.exists()) api.get_backup.assert_called_once_with( - self.BACKUP_NAME, metadata=[("google-cloud-resource-prefix", backup.name)] + name=self.BACKUP_NAME, + metadata=[("google-cloud-resource-prefix", backup.name)], ) def test_delete_grpc_error(self): @@ -368,7 +479,8 @@ def test_delete_grpc_error(self): backup.delete() api.delete_backup.assert_called_once_with( - self.BACKUP_NAME, metadata=[("google-cloud-resource-prefix", backup.name)] + name=self.BACKUP_NAME, + metadata=[("google-cloud-resource-prefix", backup.name)], ) def test_delete_not_found(self): @@ -384,7 +496,8 @@ def test_delete_not_found(self): backup.delete() api.delete_backup.assert_called_once_with( - self.BACKUP_NAME, metadata=[("google-cloud-resource-prefix", backup.name)] + name=self.BACKUP_NAME, + metadata=[("google-cloud-resource-prefix", backup.name)], ) def test_delete_success(self): @@ -399,7 +512,8 @@ def test_delete_success(self): backup.delete() api.delete_backup.assert_called_once_with( - self.BACKUP_NAME, metadata=[("google-cloud-resource-prefix", backup.name)] + name=self.BACKUP_NAME, + metadata=[("google-cloud-resource-prefix", backup.name)], ) def test_reload_grpc_error(self): @@ -415,7 +529,8 @@ def test_reload_grpc_error(self): backup.reload() api.get_backup.assert_called_once_with( - self.BACKUP_NAME, metadata=[("google-cloud-resource-prefix", backup.name)] + name=self.BACKUP_NAME, + metadata=[("google-cloud-resource-prefix", backup.name)], ) def test_reload_not_found(self): @@ -431,25 +546,28 @@ def test_reload_not_found(self): backup.reload() api.get_backup.assert_called_once_with( - self.BACKUP_NAME, metadata=[("google-cloud-resource-prefix", backup.name)] + name=self.BACKUP_NAME, + metadata=[("google-cloud-resource-prefix", backup.name)], ) def test_reload_success(self): - from google.cloud.spanner_admin_database_v1.proto import backup_pb2 - from google.cloud.spanner_admin_database_v1.gapic import enums - from google.cloud._helpers import _datetime_to_pb_timestamp + from google.cloud.spanner_admin_database_v1 import Backup + from google.cloud.spanner_admin_database_v1 import EncryptionInfo timestamp = self._make_timestamp() + encryption_info = EncryptionInfo(kms_key_version="kms_key_version") client = _Client() - backup_pb = backup_pb2.Backup( + backup_pb = Backup( name=self.BACKUP_NAME, database=self.DATABASE_NAME, - expire_time=_datetime_to_pb_timestamp(timestamp), - create_time=_datetime_to_pb_timestamp(timestamp), + expire_time=timestamp, + version_time=timestamp, + 
create_time=timestamp, size_bytes=10, state=1, referencing_databases=[], + encryption_info=encryption_info, ) api = client.database_admin_api = self._make_database_admin_api() api.get_backup.return_value = backup_pb @@ -461,17 +579,20 @@ def test_reload_success(self): self.assertEqual(backup.database, self.DATABASE_NAME) self.assertEqual(backup.expire_time, timestamp) self.assertEqual(backup.create_time, timestamp) + self.assertEqual(backup.version_time, timestamp) self.assertEqual(backup.size_bytes, 10) - self.assertEqual(backup.state, enums.Backup.State.CREATING) + self.assertEqual(backup.state, Backup.State.CREATING) self.assertEqual(backup.referencing_databases, []) + self.assertEqual(backup.encryption_info, encryption_info) api.get_backup.assert_called_once_with( - self.BACKUP_NAME, metadata=[("google-cloud-resource-prefix", backup.name)] + name=self.BACKUP_NAME, + metadata=[("google-cloud-resource-prefix", backup.name)], ) def test_update_expire_time_grpc_error(self): from google.api_core.exceptions import Unknown - from google.cloud._helpers import _datetime_to_pb_timestamp + from google.cloud.spanner_admin_database_v1 import Backup client = _Client() api = client.database_admin_api = self._make_database_admin_api() @@ -483,20 +604,20 @@ def test_update_expire_time_grpc_error(self): with self.assertRaises(Unknown): backup.update_expire_time(expire_time) - backup_update = { - "name": self.BACKUP_NAME, - "expire_time": _datetime_to_pb_timestamp(expire_time), - } + backup_update = Backup( + name=self.BACKUP_NAME, + expire_time=expire_time, + ) update_mask = {"paths": ["expire_time"]} api.update_backup.assert_called_once_with( - backup_update, - update_mask, + backup=backup_update, + update_mask=update_mask, metadata=[("google-cloud-resource-prefix", backup.name)], ) def test_update_expire_time_not_found(self): from google.api_core.exceptions import NotFound - from google.cloud._helpers import _datetime_to_pb_timestamp + from google.cloud.spanner_admin_database_v1 import Backup client = _Client() api = client.database_admin_api = self._make_database_admin_api() @@ -508,75 +629,52 @@ def test_update_expire_time_not_found(self): with self.assertRaises(NotFound): backup.update_expire_time(expire_time) - backup_update = { - "name": self.BACKUP_NAME, - "expire_time": _datetime_to_pb_timestamp(expire_time), - } + backup_update = Backup( + name=self.BACKUP_NAME, + expire_time=expire_time, + ) update_mask = {"paths": ["expire_time"]} api.update_backup.assert_called_once_with( - backup_update, - update_mask, + backup=backup_update, + update_mask=update_mask, metadata=[("google-cloud-resource-prefix", backup.name)], ) def test_update_expire_time_success(self): - from google.cloud._helpers import _datetime_to_pb_timestamp - from google.cloud.spanner_admin_database_v1.proto import backup_pb2 + from google.cloud.spanner_admin_database_v1 import Backup client = _Client() api = client.database_admin_api = self._make_database_admin_api() - api.update_backup.return_type = backup_pb2.Backup(name=self.BACKUP_NAME) + api.update_backup.return_value = Backup(name=self.BACKUP_NAME) instance = _Instance(self.INSTANCE_NAME, client=client) backup = self._make_one(self.BACKUP_ID, instance) expire_time = self._make_timestamp() backup.update_expire_time(expire_time) - backup_update = { - "name": self.BACKUP_NAME, - "expire_time": _datetime_to_pb_timestamp(expire_time), - } + backup_update = Backup( + name=self.BACKUP_NAME, + expire_time=expire_time, + ) update_mask = {"paths": ["expire_time"]}
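+ # The "paths" field mask limits UpdateBackup to the expire_time field, so no other backup fields are modified by this call.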
api.update_backup.assert_called_once_with( - backup_update, - update_mask, + backup=backup_update, + update_mask=update_mask, metadata=[("google-cloud-resource-prefix", backup.name)], ) def test_is_ready(self): - from google.cloud.spanner_admin_database_v1.gapic import enums + from google.cloud.spanner_admin_database_v1 import Backup client = _Client() instance = _Instance(self.INSTANCE_NAME, client=client) backup = self._make_one(self.BACKUP_ID, instance) - backup._state = enums.Backup.State.READY + backup._state = Backup.State.READY self.assertTrue(backup.is_ready()) - backup._state = enums.Backup.State.CREATING + backup._state = Backup.State.CREATING self.assertFalse(backup.is_ready()) -class TestBackupInfo(_BaseTest): - def test_from_pb(self): - from google.cloud.spanner_admin_database_v1.proto import backup_pb2 - from google.cloud.spanner_v1.backup import BackupInfo - from google.cloud._helpers import _datetime_to_pb_timestamp - - backup_name = "backup_name" - timestamp = self._make_timestamp() - database_name = "database_name" - - pb = backup_pb2.BackupInfo( - backup=backup_name, - create_time=_datetime_to_pb_timestamp(timestamp), - source_database=database_name, - ) - backup_info = BackupInfo.from_pb(pb) - - self.assertEqual(backup_info.backup, backup_name) - self.assertEqual(backup_info.create_time, timestamp) - self.assertEqual(backup_info.source_database, database_name) - - class _Client(object): def __init__(self, project=TestBackup.PROJECT_ID): self.project = project diff --git a/tests/unit/test_batch.py b/tests/unit/test_batch.py index f4fd9d12ab..1582fcf4a9 100644 --- a/tests/unit/test_batch.py +++ b/tests/unit/test_batch.py @@ -14,18 +14,54 @@ import unittest - +from tests import _helpers as ot_helpers +from unittest.mock import MagicMock +from tests._helpers import ( + OpenTelemetryBase, + LIB_VERSION, + StatusCode, + enrich_with_otel_scope, +) +from google.cloud.spanner_v1 import ( + RequestOptions, + CommitResponse, + TransactionOptions, + Mutation, + BatchWriteResponse, + DefaultTransactionOptions, +) +from google.cloud._helpers import UTC, _datetime_to_pb_timestamp +import datetime +from google.api_core.exceptions import Aborted, Unknown +from google.cloud.spanner_v1.batch import MutationGroups, _BatchBase, Batch +from google.cloud.spanner_v1.keyset import KeySet +from google.rpc.status_pb2 import Status + +from google.cloud.spanner_v1._helpers import ( + AtomicCounter, + _metadata_with_request_id, +) +from google.cloud.spanner_v1.request_id_header import REQ_RAND_PROCESS_ID TABLE_NAME = "citizens" COLUMNS = ["email", "first_name", "last_name", "age"] VALUES = [ - [u"phred@exammple.com", u"Phred", u"Phlyntstone", 32], - [u"bharney@example.com", u"Bharney", u"Rhubble", 31], + ["phred@exammple.com", "Phred", "Phlyntstone", 32], + ["bharney@example.com", "Bharney", "Rhubble", 31], ] +BASE_ATTRIBUTES = { + "db.type": "spanner", + "db.url": "spanner.googleapis.com", + "db.instance": "testing", + "net.host.name": "spanner.googleapis.com", + "gcp.client.service": "spanner", + "gcp.client.version": LIB_VERSION, + "gcp.client.repo": "googleapis/python-spanner", +} +enrich_with_otel_scope(BASE_ATTRIBUTES) class _BaseTest(unittest.TestCase): - PROJECT_ID = "project-id" INSTANCE_ID = "instance-id" INSTANCE_NAME = "projects/" + PROJECT_ID + "/instances/" + INSTANCE_ID @@ -33,6 +69,7 @@ class _BaseTest(unittest.TestCase): DATABASE_NAME = INSTANCE_NAME + "/databases/" + DATABASE_ID SESSION_ID = "session-id" SESSION_NAME = DATABASE_NAME + "/sessions/" + SESSION_ID + TRANSACTION_TAG = 
"transaction-tag" def _make_one(self, *args, **kwargs): return self._getTargetClass()(*args, **kwargs) @@ -40,23 +77,16 @@ def _make_one(self, *args, **kwargs): class Test_BatchBase(_BaseTest): def _getTargetClass(self): - from google.cloud.spanner_v1.batch import _BatchBase - return _BatchBase def _compare_values(self, result, source): - from google.protobuf.struct_pb2 import ListValue - from google.protobuf.struct_pb2 import Value - for found, expected in zip(result, source): - self.assertIsInstance(found, ListValue) - self.assertEqual(len(found.values), len(expected)) - for found_cell, expected_cell in zip(found.values, expected): - self.assertIsInstance(found_cell, Value) + self.assertEqual(len(found), len(expected)) + for found_cell, expected_cell in zip(found, expected): if isinstance(expected_cell, int): - self.assertEqual(int(found_cell.string_value), expected_cell) + self.assertEqual(int(found_cell), expected_cell) else: - self.assertEqual(found_cell.string_value, expected_cell) + self.assertEqual(found_cell, expected_cell) def test_ctor(self): session = _Session() @@ -64,15 +94,7 @@ def test_ctor(self): self.assertIs(base._session, session) self.assertEqual(len(base._mutations), 0) - def test__check_state_virtual(self): - session = _Session() - base = self._make_one(session) - with self.assertRaises(NotImplementedError): - base._check_state() - def test_insert(self): - from google.cloud.spanner_v1.proto.mutation_pb2 import Mutation - session = _Session() base = self._make_one(session) @@ -88,8 +110,6 @@ def test_insert(self): self._compare_values(write.values, VALUES) def test_update(self): - from google.cloud.spanner_v1.proto.mutation_pb2 import Mutation - session = _Session() base = self._make_one(session) @@ -105,8 +125,6 @@ def test_update(self): self._compare_values(write.values, VALUES) def test_insert_or_update(self): - from google.cloud.spanner_v1.proto.mutation_pb2 import Mutation - session = _Session() base = self._make_one(session) @@ -122,8 +140,6 @@ def test_insert_or_update(self): self._compare_values(write.values, VALUES) def test_replace(self): - from google.cloud.spanner_v1.proto.mutation_pb2 import Mutation - session = _Session() base = self._make_one(session) @@ -139,9 +155,6 @@ def test_replace(self): self._compare_values(write.values, VALUES) def test_delete(self): - from google.cloud.spanner_v1.proto.mutation_pb2 import Mutation - from google.cloud.spanner_v1.keyset import KeySet - keys = [[0], [1], [2]] keyset = KeySet(keys=keys) session = _Session() @@ -159,15 +172,11 @@ def test_delete(self): self.assertEqual(len(key_set_pb.ranges), 0) self.assertEqual(len(key_set_pb.keys), len(keys)) for found, expected in zip(key_set_pb.keys, keys): - self.assertEqual( - [int(value.string_value) for value in found.values], expected - ) + self.assertEqual([int(value) for value in found], expected) -class TestBatch(_BaseTest): +class TestBatch(_BaseTest, OpenTelemetryBase): def _getTargetClass(self): - from google.cloud.spanner_v1.batch import Batch - return Batch def test_ctor(self): @@ -176,8 +185,6 @@ def test_ctor(self): self.assertIs(batch._session, session) def test_commit_already_committed(self): - from google.cloud.spanner_v1.keyset import KeySet - keys = [[0], [1], [2]] keyset = KeySet(keys=keys) database = _Database() @@ -189,10 +196,9 @@ def test_commit_already_committed(self): with self.assertRaises(ValueError): batch.commit() - def test_commit_grpc_error(self): - from google.api_core.exceptions import Unknown - from google.cloud.spanner_v1.keyset import 
KeySet + self.assertNoSpans() + def test_commit_grpc_error(self): keys = [[0], [1], [2]] keyset = KeySet(keys=keys) database = _Database() @@ -204,13 +210,16 @@ def test_commit_grpc_error(self): with self.assertRaises(Unknown): batch.commit() - def test_commit_ok(self): - import datetime - from google.cloud.spanner_v1.proto.spanner_pb2 import CommitResponse - from google.cloud.spanner_v1.proto.transaction_pb2 import TransactionOptions - from google.cloud._helpers import UTC - from google.cloud._helpers import _datetime_to_pb_timestamp + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + self.assertSpanAttributes( + "CloudSpanner.Batch.commit", + status=StatusCode.ERROR, + attributes=dict( + BASE_ATTRIBUTES, num_mutations=1, x_goog_spanner_request_id=req_id + ), + ) + def test_commit_ok(self): now = datetime.datetime.utcnow().replace(tzinfo=UTC) now_pb = _datetime_to_pb_timestamp(now) response = CommitResponse(commit_timestamp=now_pb) @@ -225,17 +234,222 @@ def test_commit_ok(self): self.assertEqual(committed, now) self.assertEqual(batch.committed, committed) - (session, mutations, single_use_txn, metadata) = api._committed + ( + session, + mutations, + single_use_txn, + request_options, + max_commit_delay, + metadata, + ) = api._committed self.assertEqual(session, self.SESSION_NAME) self.assertEqual(mutations, batch._mutations) self.assertIsInstance(single_use_txn, TransactionOptions) - self.assertTrue(single_use_txn.HasField("read_write")) - self.assertEqual(metadata, [("google-cloud-resource-prefix", database.name)]) + self.assertTrue(type(single_use_txn).pb(single_use_txn).HasField("read_write")) + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + self.assertEqual( + metadata, + [ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + req_id, + ), + ], + ) + self.assertEqual(request_options, RequestOptions()) + self.assertEqual(max_commit_delay, None) + + self.assertSpanAttributes( + "CloudSpanner.Batch.commit", + attributes=dict( + BASE_ATTRIBUTES, num_mutations=1, x_goog_spanner_request_id=req_id + ), + ) + + def test_aborted_exception_on_commit_with_retries(self): + # Test case to verify that an Aborted exception is raised when + # batch.commit() is called and the transaction is aborted internally. - def test_context_mgr_already_committed(self): - import datetime - from google.cloud._helpers import UTC + database = _Database() + # Set up the Spanner API mock so that the commit RPC raises an Aborted exception on every call.
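+ # commit() retries Aborted errors internally, so a side effect that always raises keeps the retry loop going; the assertions below then verify commit was invoked more than once before the timeout_secs deadline expired.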
+ api = database.spanner_api = _FauxSpannerAPI(_aborted_error=True) + api.commit = MagicMock( + side_effect=Aborted("Transaction was aborted", errors=("Aborted error")) + ) + + # Create mock session and batch objects + session = _Session(database) + batch = self._make_one(session) + batch.insert(TABLE_NAME, COLUMNS, VALUES) + + # Assertion: Ensure that calling batch.commit() raises the Aborted exception + with self.assertRaises(Aborted) as context: + batch.commit(timeout_secs=0.1, default_retry_delay=0) + + # Verify additional details about the exception + self.assertEqual(str(context.exception), "409 Transaction was aborted") + self.assertGreater( + api.commit.call_count, 1, "commit should be called more than once" + ) + + def _test_commit_with_options( + self, + request_options=None, + max_commit_delay_in=None, + exclude_txn_from_change_streams=False, + isolation_level=TransactionOptions.IsolationLevel.ISOLATION_LEVEL_UNSPECIFIED, + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.READ_LOCK_MODE_UNSPECIFIED, + ): + now = datetime.datetime.utcnow().replace(tzinfo=UTC) + now_pb = _datetime_to_pb_timestamp(now) + response = CommitResponse(commit_timestamp=now_pb) + database = _Database() + api = database.spanner_api = _FauxSpannerAPI(_commit_response=response) + session = _Session(database) + batch = self._make_one(session) + batch.transaction_tag = self.TRANSACTION_TAG + batch.insert(TABLE_NAME, COLUMNS, VALUES) + committed = batch.commit( + request_options=request_options, + max_commit_delay=max_commit_delay_in, + exclude_txn_from_change_streams=exclude_txn_from_change_streams, + isolation_level=isolation_level, + read_lock_mode=read_lock_mode, + ) + + self.assertEqual(committed, now) + self.assertEqual(batch.committed, committed) + + if type(request_options) is dict: + expected_request_options = RequestOptions(request_options) + else: + expected_request_options = request_options + expected_request_options.transaction_tag = self.TRANSACTION_TAG + expected_request_options.request_tag = None + + ( + session, + mutations, + single_use_txn, + actual_request_options, + max_commit_delay, + metadata, + ) = api._committed + self.assertEqual(session, self.SESSION_NAME) + self.assertEqual(mutations, batch._mutations) + self.assertIsInstance(single_use_txn, TransactionOptions) + self.assertTrue(type(single_use_txn).pb(single_use_txn).HasField("read_write")) + self.assertEqual( + single_use_txn.exclude_txn_from_change_streams, + exclude_txn_from_change_streams, + ) + self.assertEqual( + single_use_txn.isolation_level, + isolation_level, + ) + self.assertEqual( + single_use_txn.read_write.read_lock_mode, + read_lock_mode, + ) + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + self.assertEqual( + metadata, + [ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + req_id, + ), + ], + ) + self.assertEqual(actual_request_options, expected_request_options) + + self.assertSpanAttributes( + "CloudSpanner.Batch.commit", + attributes=dict( + BASE_ATTRIBUTES, num_mutations=1, x_goog_spanner_request_id=req_id + ), + ) + + self.assertEqual(max_commit_delay_in, max_commit_delay) + + def test_commit_w_request_tag_success(self): + request_options = RequestOptions( + request_tag="tag-1", + ) + self._test_commit_with_options(request_options=request_options) + + def test_commit_w_transaction_tag_success(self): + request_options = RequestOptions( + transaction_tag="tag-1-1", + ) + 
self._test_commit_with_options(request_options=request_options) + + def test_commit_w_request_and_transaction_tag_success(self): + request_options = RequestOptions( + request_tag="tag-1", + transaction_tag="tag-1-1", + ) + self._test_commit_with_options(request_options=request_options) + + def test_commit_w_request_and_transaction_tag_dictionary_success(self): + request_options = {"request_tag": "tag-1", "transaction_tag": "tag-1-1"} + self._test_commit_with_options(request_options=request_options) + + def test_commit_w_incorrect_tag_dictionary_error(self): + request_options = {"incorrect_tag": "tag-1-1"} + with self.assertRaises(ValueError): + self._test_commit_with_options(request_options=request_options) + + def test_commit_w_max_commit_delay(self): + request_options = RequestOptions( + request_tag="tag-1", + ) + self._test_commit_with_options( + request_options=request_options, + max_commit_delay_in=datetime.timedelta(milliseconds=100), + ) + + def test_commit_w_exclude_txn_from_change_streams(self): + request_options = RequestOptions( + request_tag="tag-1", + ) + self._test_commit_with_options( + request_options=request_options, exclude_txn_from_change_streams=True + ) + + def test_commit_w_isolation_level(self): + request_options = RequestOptions( + request_tag="tag-1", + ) + self._test_commit_with_options( + request_options=request_options, + isolation_level=TransactionOptions.IsolationLevel.REPEATABLE_READ, + ) + + def test_commit_w_read_lock_mode(self): + request_options = RequestOptions( + request_tag="tag-1", + ) + self._test_commit_with_options( + request_options=request_options, + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.OPTIMISTIC, + ) + + def test_commit_w_isolation_level_and_read_lock_mode(self): + request_options = RequestOptions( + request_tag="tag-1", + ) + self._test_commit_with_options( + request_options=request_options, + isolation_level=TransactionOptions.IsolationLevel.REPEATABLE_READ, + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC, + ) + def test_context_mgr_already_committed(self): now = datetime.datetime.utcnow().replace(tzinfo=UTC) database = _Database() api = database.spanner_api = _FauxSpannerAPI() @@ -250,12 +464,6 @@ def test_context_mgr_already_committed(self): self.assertEqual(api._committed, None) def test_context_mgr_success(self): - import datetime - from google.cloud.spanner_v1.proto.spanner_pb2 import CommitResponse - from google.cloud.spanner_v1.proto.transaction_pb2 import TransactionOptions - from google.cloud._helpers import UTC - from google.cloud._helpers import _datetime_to_pb_timestamp - now = datetime.datetime.utcnow().replace(tzinfo=UTC) now_pb = _datetime_to_pb_timestamp(now) response = CommitResponse(commit_timestamp=now_pb) @@ -269,19 +477,40 @@ def test_context_mgr_success(self): self.assertEqual(batch.committed, now) - (session, mutations, single_use_txn, metadata) = api._committed + ( + session, + mutations, + single_use_txn, + request_options, + _, + metadata, + ) = api._committed self.assertEqual(session, self.SESSION_NAME) self.assertEqual(mutations, batch._mutations) self.assertIsInstance(single_use_txn, TransactionOptions) - self.assertTrue(single_use_txn.HasField("read_write")) - self.assertEqual(metadata, [("google-cloud-resource-prefix", database.name)]) + self.assertTrue(type(single_use_txn).pb(single_use_txn).HasField("read_write")) + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + self.assertEqual( + metadata, + [ + 
("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + req_id, + ), + ], + ) + self.assertEqual(request_options, RequestOptions()) + + self.assertSpanAttributes( + "CloudSpanner.Batch.commit", + attributes=dict( + BASE_ATTRIBUTES, num_mutations=1, x_goog_spanner_request_id=req_id + ), + ) def test_context_mgr_failure(self): - import datetime - from google.cloud.spanner_v1.proto.spanner_pb2 import CommitResponse - from google.cloud._helpers import UTC - from google.cloud._helpers import _datetime_to_pb_timestamp - now = datetime.datetime.utcnow().replace(tzinfo=UTC) now_pb = _datetime_to_pb_timestamp(now) response = CommitResponse(commit_timestamp=now_pb) @@ -303,38 +532,253 @@ class _BailOut(Exception): self.assertEqual(len(batch._mutations), 1) +class TestMutationGroups(_BaseTest, OpenTelemetryBase): + def _getTargetClass(self): + return MutationGroups + + def test_ctor(self): + session = _Session() + groups = self._make_one(session) + self.assertIs(groups._session, session) + + def test_batch_write_already_committed(self): + keys = [[0], [1], [2]] + keyset = KeySet(keys=keys) + database = _Database() + database.spanner_api = _FauxSpannerAPI(_batch_write_response=[]) + session = _Session(database) + groups = self._make_one(session) + group = groups.group() + group.delete(TABLE_NAME, keyset=keyset) + groups.batch_write() + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + self.assertSpanAttributes( + "CloudSpanner.batch_write", + status=StatusCode.OK, + attributes=dict( + BASE_ATTRIBUTES, num_mutation_groups=1, x_goog_spanner_request_id=req_id + ), + ) + assert groups.committed + # The second call to batch_write should raise an error. 
+ with self.assertRaises(ValueError): + groups.batch_write() + + def test_batch_write_grpc_error(self): + keys = [[0], [1], [2]] + keyset = KeySet(keys=keys) + database = _Database() + database.spanner_api = _FauxSpannerAPI(_rpc_error=True) + session = _Session(database) + groups = self._make_one(session) + group = groups.group() + group.delete(TABLE_NAME, keyset=keyset) + + with self.assertRaises(Unknown): + groups.batch_write() + + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + self.assertSpanAttributes( + "CloudSpanner.batch_write", + status=StatusCode.ERROR, + attributes=dict( + BASE_ATTRIBUTES, num_mutation_groups=1, x_goog_spanner_request_id=req_id + ), + ) + + def _test_batch_write_with_request_options( + self, + request_options=None, + exclude_txn_from_change_streams=False, + enable_end_to_end_tracing=False, + ): + now = datetime.datetime.utcnow().replace(tzinfo=UTC) + now_pb = _datetime_to_pb_timestamp(now) + status_pb = Status(code=200) + response = BatchWriteResponse( + commit_timestamp=now_pb, indexes=[0], status=status_pb + ) + database = _Database(enable_end_to_end_tracing=enable_end_to_end_tracing) + api = database.spanner_api = _FauxSpannerAPI(_batch_write_response=[response]) + session = _Session(database) + groups = self._make_one(session) + group = groups.group() + group.insert(TABLE_NAME, COLUMNS, VALUES) + + response_iter = groups.batch_write( + request_options, + exclude_txn_from_change_streams=exclude_txn_from_change_streams, + ) + self.assertEqual(len(response_iter), 1) + self.assertEqual(response_iter[0], response) + + ( + session, + mutation_groups, + actual_request_options, + metadata, + request_exclude_txn_from_change_streams, + ) = api._batch_request + self.assertEqual(session, self.SESSION_NAME) + self.assertEqual(mutation_groups, groups._mutation_groups) + expected_metadata = [ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ] + + if enable_end_to_end_tracing and ot_helpers.HAS_OPENTELEMETRY_INSTALLED: + expected_metadata.append(("x-goog-spanner-end-to-end-tracing", "true")) + self.assertTrue( + any(key == "traceparent" for key, _ in metadata), + "traceparent is missing in metadata", + ) + + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + expected_metadata.append( + ("x-goog-spanner-request-id", req_id), + ) + + # Remove traceparent from actual metadata for comparison + filtered_metadata = [item for item in metadata if item[0] != "traceparent"] + + self.assertEqual(filtered_metadata, expected_metadata) + + if request_options is None: + expected_request_options = RequestOptions() + elif type(request_options) is dict: + expected_request_options = RequestOptions(request_options) + else: + expected_request_options = request_options + self.assertEqual(actual_request_options, expected_request_options) + self.assertEqual( + request_exclude_txn_from_change_streams, exclude_txn_from_change_streams + ) + + self.assertSpanAttributes( + "CloudSpanner.batch_write", + status=StatusCode.OK, + attributes=dict( + BASE_ATTRIBUTES, num_mutation_groups=1, x_goog_spanner_request_id=req_id + ), + ) + + def test_batch_write_no_request_options(self): + self._test_batch_write_with_request_options() + + def test_batch_write_end_to_end_tracing_enabled(self): + self._test_batch_write_with_request_options(enable_end_to_end_tracing=True) + + def test_batch_write_w_transaction_tag_success(self): + self._test_batch_write_with_request_options( + 
RequestOptions(transaction_tag="tag-1-1") + ) + + def test_batch_write_w_transaction_tag_dictionary_success(self): + self._test_batch_write_with_request_options({"transaction_tag": "tag-1-1"}) + + def test_batch_write_w_incorrect_tag_dictionary_error(self): + with self.assertRaises(ValueError): + self._test_batch_write_with_request_options({"incorrect_tag": "tag-1-1"}) + + def test_batch_write_w_exclude_txn_from_change_streams(self): + self._test_batch_write_with_request_options( + exclude_txn_from_change_streams=True + ) + + class _Session(object): def __init__(self, database=None, name=TestBatch.SESSION_NAME): self._database = database self.name = name + @property + def session_id(self): + return self.name + class _Database(object): name = "testing" + _route_to_leader_enabled = True + NTH_CLIENT_ID = AtomicCounter() + + def __init__(self, enable_end_to_end_tracing=False): + self.name = "testing" + self._route_to_leader_enabled = True + if enable_end_to_end_tracing: + self.observability_options = dict(enable_end_to_end_tracing=True) + self.default_transaction_options = DefaultTransactionOptions() + self._nth_request = 0 + self._nth_client_id = _Database.NTH_CLIENT_ID.increment() + + @property + def _next_nth_request(self): + self._nth_request += 1 + return self._nth_request + + def metadata_with_request_id( + self, nth_request, nth_attempt, prior_metadata=[], span=None + ): + return _metadata_with_request_id( + self._nth_client_id, + self._channel_id, + nth_request, + nth_attempt, + prior_metadata, + span, + ) + @property + def _channel_id(self): + return 1 -class _FauxSpannerAPI: +class _FauxSpannerAPI: _create_instance_conflict = False _instance_not_found = False _committed = None + _batch_request = None _rpc_error = False + _aborted_error = False def __init__(self, **kwargs): self.__dict__.update(**kwargs) def commit( self, - session, - mutations, - transaction_id="", - single_use_transaction=None, + request=None, metadata=None, ): - from google.api_core.exceptions import Unknown - - assert transaction_id == "" - self._committed = (session, mutations, single_use_transaction, metadata) + max_commit_delay = None + if type(request).pb(request).HasField("max_commit_delay"): + max_commit_delay = request.max_commit_delay + + assert request.transaction_id == b"" + self._committed = ( + request.session, + request.mutations, + request.single_use_transaction, + request.request_options, + max_commit_delay, + metadata, + ) if self._rpc_error: raise Unknown("error") + if self._aborted_error: + raise Aborted("Transaction was aborted", errors=("Aborted error")) return self._commit_response + + def batch_write( + self, + request=None, + metadata=None, + ): + self._batch_request = ( + request.session, + request.mutation_groups, + request.request_options, + metadata, + request.exclude_txn_from_change_streams, + ) + if self._rpc_error: + raise Unknown("error") + return self._batch_write_response diff --git a/tests/unit/test_client.py b/tests/unit/test_client.py index b9446fd867..212dc9ee4f 100644 --- a/tests/unit/test_client.py +++ b/tests/unit/test_client.py @@ -14,22 +14,15 @@ import unittest +import os import mock +from google.auth.credentials import AnonymousCredentials - -def _make_credentials(): - import google.auth.credentials - - class _CredentialsWithScopes( - google.auth.credentials.Credentials, google.auth.credentials.Scoped - ): - pass - - return mock.Mock(spec=_CredentialsWithScopes) +from google.cloud.spanner_v1 import DirectedReadOptions, DefaultTransactionOptions +from tests._builders 
import build_scoped_credentials class TestClient(unittest.TestCase): - PROJECT = "PROJECT" PATH = "projects/%s" % (PROJECT,) CONFIGURATION_NAME = "config-name" @@ -37,8 +30,25 @@ class TestClient(unittest.TestCase): INSTANCE_NAME = "%s/instances/%s" % (PATH, INSTANCE_ID) DISPLAY_NAME = "display-name" NODE_COUNT = 5 + PROCESSING_UNITS = 5000 + LABELS = {"test": "true"} TIMEOUT_SECONDS = 80 - USER_AGENT = "you-sir-age-int" + LEADER_OPTIONS = ["leader1", "leader2"] + DIRECTED_READ_OPTIONS = { + "include_replicas": { + "replica_selections": [ + { + "location": "us-west1", + "type_": DirectedReadOptions.ReplicaSelection.Type.READ_ONLY, + }, + ], + "auto_failover_disabled": True, + }, + } + DEFAULT_TRANSACTION_OPTIONS = DefaultTransactionOptions( + isolation_level="SERIALIZABLE", + read_lock_mode="PESSIMISTIC", + ) def _get_target_class(self): from google.cloud import spanner @@ -54,10 +64,12 @@ def _constructor_test_helper( creds, expected_creds=None, client_info=None, - user_agent=None, client_options=None, query_options=None, expected_query_options=None, + route_to_leader_enabled=True, + directed_read_options=None, + default_transaction_options=None, ): import google.api_core.client_options from google.cloud.spanner_v1 import client as MUT @@ -70,18 +82,21 @@ def _constructor_test_helper( expected_client_info = MUT._CLIENT_INFO kwargs["client_options"] = client_options - if type(client_options) == dict: + if type(client_options) is dict: expected_client_options = google.api_core.client_options.from_dict( client_options ) else: expected_client_options = client_options + if route_to_leader_enabled is not None: + kwargs["route_to_leader_enabled"] = route_to_leader_enabled client = self._make_one( project=self.PROJECT, credentials=creds, - user_agent=user_agent, query_options=query_options, + directed_read_options=directed_read_options, + default_transaction_options=default_transaction_options, **kwargs ) @@ -90,11 +105,12 @@ def _constructor_test_helper( self.assertIs(client._credentials, expected_creds) if expected_scopes is not None: - creds.with_scopes.assert_called_once_with(expected_scopes) + creds.with_scopes.assert_called_once_with( + expected_scopes, default_scopes=None + ) self.assertEqual(client.project, self.PROJECT) self.assertIs(client._client_info, expected_client_info) - self.assertEqual(client.user_agent, user_agent) if expected_client_options is not None: self.assertIsInstance( client._client_options, google.api_core.client_options.ClientOptions @@ -105,49 +121,52 @@ def _constructor_test_helper( ) if expected_query_options is not None: self.assertEqual(client._query_options, expected_query_options) + if route_to_leader_enabled is not None: + self.assertEqual(client.route_to_leader_enabled, route_to_leader_enabled) + else: + self.assertFalse(client.route_to_leader_enabled) + if directed_read_options is not None: + self.assertEqual(client.directed_read_options, directed_read_options) + if default_transaction_options is not None: + self.assertEqual( + client.default_transaction_options, default_transaction_options + ) @mock.patch("google.cloud.spanner_v1.client._get_spanner_emulator_host") @mock.patch("warnings.warn") def test_constructor_emulator_host_warning(self, mock_warn, mock_em): from google.cloud.spanner_v1 import client as MUT + from google.auth.credentials import AnonymousCredentials - expected_scopes = (MUT.SPANNER_ADMIN_SCOPE,) - creds = _make_credentials() + expected_scopes = None + creds = build_scoped_credentials() mock_em.return_value = "http://emulator.host.com" - 
self._constructor_test_helper(expected_scopes, creds) + with mock.patch("google.cloud.spanner_v1.client.AnonymousCredentials") as patch: + expected_creds = patch.return_value = AnonymousCredentials() + self._constructor_test_helper(expected_scopes, creds, expected_creds) mock_warn.assert_called_once_with(MUT._EMULATOR_HOST_HTTP_SCHEME) def test_constructor_default_scopes(self): from google.cloud.spanner_v1 import client as MUT expected_scopes = (MUT.SPANNER_ADMIN_SCOPE,) - creds = _make_credentials() + creds = build_scoped_credentials() self._constructor_test_helper(expected_scopes, creds) - @mock.patch("warnings.warn") - def test_constructor_custom_user_agent_and_timeout(self, mock_warn): - from google.cloud.spanner_v1 import client as MUT - - CUSTOM_USER_AGENT = "custom-application" - expected_scopes = (MUT.SPANNER_ADMIN_SCOPE,) - creds = _make_credentials() - self._constructor_test_helper( - expected_scopes, creds, user_agent=CUSTOM_USER_AGENT - ) - mock_warn.assert_called_once_with( - MUT._USER_AGENT_DEPRECATED, DeprecationWarning, stacklevel=2 - ) - def test_constructor_custom_client_info(self): from google.cloud.spanner_v1 import client as MUT client_info = mock.Mock() expected_scopes = (MUT.SPANNER_ADMIN_SCOPE,) - creds = _make_credentials() + creds = build_scoped_credentials() self._constructor_test_helper(expected_scopes, creds, client_info=client_info) + # Disable metrics to avoid google.auth.default calls from Metric Exporter + @mock.patch.dict(os.environ, {"SPANNER_ENABLE_BUILTIN_METRICS": ""}) def test_constructor_implicit_credentials(self): - creds = _make_credentials() + from google.cloud.spanner_v1 import client as MUT + + creds = build_scoped_credentials() patch = mock.patch("google.auth.default", return_value=(creds, None)) with patch as default: @@ -155,10 +174,10 @@ def test_constructor_implicit_credentials(self): None, None, expected_creds=creds.with_scopes.return_value ) - default.assert_called_once_with() + default.assert_called_once_with(scopes=(MUT.SPANNER_ADMIN_SCOPE,)) def test_constructor_credentials_wo_create_scoped(self): - creds = _make_credentials() + creds = build_scoped_credentials() expected_scopes = None self._constructor_test_helper(expected_scopes, creds) @@ -167,7 +186,7 @@ def test_constructor_custom_client_options_obj(self): from google.cloud.spanner_v1 import client as MUT expected_scopes = (MUT.SPANNER_ADMIN_SCOPE,) - creds = _make_credentials() + creds = build_scoped_credentials() self._constructor_test_helper( expected_scopes, creds, @@ -178,50 +197,94 @@ def test_constructor_custom_client_options_dict(self): from google.cloud.spanner_v1 import client as MUT expected_scopes = (MUT.SPANNER_ADMIN_SCOPE,) - creds = _make_credentials() + creds = build_scoped_credentials() self._constructor_test_helper( expected_scopes, creds, client_options={"api_endpoint": "endpoint"} ) def test_constructor_custom_query_options_client_config(self): - from google.cloud.spanner_v1.proto.spanner_pb2 import ExecuteSqlRequest + from google.cloud.spanner_v1 import ExecuteSqlRequest from google.cloud.spanner_v1 import client as MUT expected_scopes = (MUT.SPANNER_ADMIN_SCOPE,) - creds = _make_credentials() + creds = build_scoped_credentials() + query_options = expected_query_options = ExecuteSqlRequest.QueryOptions( + optimizer_version="1", + optimizer_statistics_package="auto_20191128_14_47_22UTC", + ) self._constructor_test_helper( expected_scopes, creds, - query_options=ExecuteSqlRequest.QueryOptions(optimizer_version="1"), - 
expected_query_options=ExecuteSqlRequest.QueryOptions( - optimizer_version="1" - ), + query_options=query_options, + expected_query_options=expected_query_options, ) + @mock.patch( + "google.cloud.spanner_v1.client._get_spanner_optimizer_statistics_package" + ) @mock.patch("google.cloud.spanner_v1.client._get_spanner_optimizer_version") - def test_constructor_custom_query_options_env_config(self, mock_ver): - from google.cloud.spanner_v1.proto.spanner_pb2 import ExecuteSqlRequest + def test_constructor_custom_query_options_env_config(self, mock_ver, mock_stats): + from google.cloud.spanner_v1 import ExecuteSqlRequest from google.cloud.spanner_v1 import client as MUT expected_scopes = (MUT.SPANNER_ADMIN_SCOPE,) - creds = _make_credentials() + creds = build_scoped_credentials() mock_ver.return_value = "2" + mock_stats.return_value = "auto_20191128_14_47_22UTC" + query_options = ExecuteSqlRequest.QueryOptions( + optimizer_version="1", + optimizer_statistics_package="auto_20191128_10_47_22UTC", + ) + expected_query_options = ExecuteSqlRequest.QueryOptions( + optimizer_version="2", + optimizer_statistics_package="auto_20191128_14_47_22UTC", + ) + self._constructor_test_helper( + expected_scopes, + creds, + query_options=query_options, + expected_query_options=expected_query_options, + ) + + def test_constructor_w_directed_read_options(self): + from google.cloud.spanner_v1 import client as MUT + + expected_scopes = (MUT.SPANNER_ADMIN_SCOPE,) + creds = build_scoped_credentials() + self._constructor_test_helper( + expected_scopes, creds, directed_read_options=self.DIRECTED_READ_OPTIONS + ) + + def test_constructor_route_to_leader_disabled(self): + from google.cloud.spanner_v1 import client as MUT + + expected_scopes = (MUT.SPANNER_ADMIN_SCOPE,) + creds = build_scoped_credentials() + self._constructor_test_helper( + expected_scopes, creds, route_to_leader_enabled=False + ) + + def test_constructor_w_default_transaction_options(self): + from google.cloud.spanner_v1 import client as MUT + + expected_scopes = (MUT.SPANNER_ADMIN_SCOPE,) + creds = build_scoped_credentials() self._constructor_test_helper( expected_scopes, creds, - query_options=ExecuteSqlRequest.QueryOptions(optimizer_version="1"), - expected_query_options=ExecuteSqlRequest.QueryOptions( - optimizer_version="2" - ), + default_transaction_options=self.DEFAULT_TRANSACTION_OPTIONS, ) @mock.patch("google.cloud.spanner_v1.client._get_spanner_emulator_host") def test_instance_admin_api(self, mock_em): from google.cloud.spanner_v1.client import SPANNER_ADMIN_SCOPE + from google.api_core.client_options import ClientOptions + + mock_em.return_value = None - credentials = _make_credentials() + credentials = build_scoped_credentials() client_info = mock.Mock() - client_options = mock.Mock() + client_options = ClientOptions(quota_project_id="QUOTA-PROJECT") client = self._make_one( project=self.PROJECT, credentials=credentials, @@ -230,7 +293,6 @@ def test_instance_admin_api(self, mock_em): ) expected_scopes = (SPANNER_ADMIN_SCOPE,) - mock_em.return_value = None inst_module = "google.cloud.spanner_v1.client.InstanceAdminClient" with mock.patch(inst_module) as instance_admin_client: api = client.instance_admin_api @@ -242,18 +304,53 @@ def test_instance_admin_api(self, mock_em): self.assertIs(again, api) instance_admin_client.assert_called_once_with( - credentials=credentials.with_scopes.return_value, + credentials=mock.ANY, client_info=client_info, client_options=client_options + ) + + credentials.with_scopes.assert_called_once_with( +
expected_scopes, default_scopes=None + ) + + @mock.patch("google.cloud.spanner_v1.client._get_spanner_emulator_host") + def test_instance_admin_api_emulator_env(self, mock_em): + from google.api_core.client_options import ClientOptions + + mock_em.return_value = "emulator.host" + credentials = build_scoped_credentials() + client_info = mock.Mock() + client_options = ClientOptions(api_endpoint="endpoint") + client = self._make_one( + project=self.PROJECT, + credentials=credentials, client_info=client_info, client_options=client_options, ) - credentials.with_scopes.assert_called_once_with(expected_scopes) + inst_module = "google.cloud.spanner_v1.client.InstanceAdminClient" + with mock.patch(inst_module) as instance_admin_client: + api = client.instance_admin_api - @mock.patch("google.cloud.spanner_v1.client._get_spanner_emulator_host") - def test_instance_admin_api_emulator(self, mock_em): - credentials = _make_credentials() + self.assertIs(api, instance_admin_client.return_value) + + # API instance is cached + again = client.instance_admin_api + self.assertIs(again, api) + + self.assertEqual(len(instance_admin_client.call_args_list), 1) + called_args, called_kw = instance_admin_client.call_args + self.assertEqual(called_args, ()) + self.assertEqual(called_kw["client_info"], client_info) + self.assertEqual(called_kw["client_options"], client_options) + self.assertIn("transport", called_kw) + self.assertNotIn("credentials", called_kw) + + def test_instance_admin_api_emulator_code(self): + from google.auth.credentials import AnonymousCredentials + from google.api_core.client_options import ClientOptions + + credentials = AnonymousCredentials() client_info = mock.Mock() - client_options = mock.Mock() + client_options = ClientOptions(api_endpoint="emulator.host") client = self._make_one( project=self.PROJECT, credentials=credentials, @@ -261,7 +358,6 @@ def test_instance_admin_api_emulator(self, mock_em): client_options=client_options, ) - mock_em.return_value = "true" inst_module = "google.cloud.spanner_v1.client.InstanceAdminClient" with mock.patch(inst_module) as instance_admin_client: api = client.instance_admin_api @@ -283,10 +379,12 @@ def test_instance_admin_api_emulator(self, mock_em): @mock.patch("google.cloud.spanner_v1.client._get_spanner_emulator_host") def test_database_admin_api(self, mock_em): from google.cloud.spanner_v1.client import SPANNER_ADMIN_SCOPE + from google.api_core.client_options import ClientOptions - credentials = _make_credentials() + mock_em.return_value = None + credentials = build_scoped_credentials() client_info = mock.Mock() - client_options = mock.Mock() + client_options = ClientOptions(quota_project_id="QUOTA-PROJECT") client = self._make_one( project=self.PROJECT, credentials=credentials, @@ -295,7 +393,6 @@ def test_database_admin_api(self, mock_em): ) expected_scopes = (SPANNER_ADMIN_SCOPE,) - mock_em.return_value = None db_module = "google.cloud.spanner_v1.client.DatabaseAdminClient" with mock.patch(db_module) as database_admin_client: api = client.database_admin_api @@ -307,18 +404,53 @@ def test_database_admin_api(self, mock_em): self.assertIs(again, api) database_admin_client.assert_called_once_with( - credentials=credentials.with_scopes.return_value, + credentials=mock.ANY, client_info=client_info, client_options=client_options + ) + + credentials.with_scopes.assert_called_once_with( + expected_scopes, default_scopes=None + ) + + @mock.patch("google.cloud.spanner_v1.client._get_spanner_emulator_host") + def test_database_admin_api_emulator_env(self, 
mock_em): + from google.api_core.client_options import ClientOptions + + mock_em.return_value = "host:port" + credentials = build_scoped_credentials() + client_info = mock.Mock() + client_options = ClientOptions(api_endpoint="endpoint") + client = self._make_one( + project=self.PROJECT, + credentials=credentials, client_info=client_info, client_options=client_options, ) - credentials.with_scopes.assert_called_once_with(expected_scopes) + db_module = "google.cloud.spanner_v1.client.DatabaseAdminClient" + with mock.patch(db_module) as database_admin_client: + api = client.database_admin_api + + self.assertIs(api, database_admin_client.return_value) - @mock.patch("google.cloud.spanner_v1.client._get_spanner_emulator_host") - def test_database_admin_api_emulator(self, mock_em): - credentials = _make_credentials() + # API instance is cached + again = client.database_admin_api + self.assertIs(again, api) + + self.assertEqual(len(database_admin_client.call_args_list), 1) + called_args, called_kw = database_admin_client.call_args + self.assertEqual(called_args, ()) + self.assertEqual(called_kw["client_info"], client_info) + self.assertEqual(called_kw["client_options"], client_options) + self.assertIn("transport", called_kw) + self.assertNotIn("credentials", called_kw) + + def test_database_admin_api_emulator_code(self): + from google.auth.credentials import AnonymousCredentials + from google.api_core.client_options import ClientOptions + + credentials = AnonymousCredentials() client_info = mock.Mock() - client_options = mock.Mock() + client_options = ClientOptions(api_endpoint="emulator.host") client = self._make_one( project=self.PROJECT, credentials=credentials, @@ -326,7 +458,6 @@ def test_database_admin_api_emulator(self, mock_em): client_options=client_options, ) - mock_em.return_value = "host:port" db_module = "google.cloud.spanner_v1.client.DatabaseAdminClient" with mock.patch(db_module) as database_admin_client: api = client.database_admin_api @@ -346,7 +477,7 @@ def test_database_admin_api_emulator(self, mock_em): self.assertNotIn("credentials", called_kw) def test_copy(self): - credentials = _make_credentials() + credentials = build_scoped_credentials() # Make sure it "already" is scoped. 
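# (google-auth's with_scopes_if_required() leaves credentials unchanged when requires_scopes is False, so no re-scoping is expected during the copy.)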
credentials.requires_scopes = False @@ -357,94 +488,97 @@ def test_copy(self): self.assertEqual(new_client.project, client.project) def test_credentials_property(self): - credentials = _make_credentials() + credentials = build_scoped_credentials() client = self._make_one(project=self.PROJECT, credentials=credentials) self.assertIs(client.credentials, credentials.with_scopes.return_value) def test_project_name_property(self): - credentials = _make_credentials() + credentials = build_scoped_credentials() client = self._make_one(project=self.PROJECT, credentials=credentials) project_name = "projects/" + self.PROJECT self.assertEqual(client.project_name, project_name) def test_list_instance_configs(self): - from google.cloud.spanner_admin_instance_v1.gapic import instance_admin_client - from google.cloud.spanner_admin_instance_v1.proto import ( - spanner_instance_admin_pb2, + from google.cloud.spanner_admin_instance_v1 import InstanceAdminClient + from google.cloud.spanner_admin_instance_v1 import ( + InstanceConfig as InstanceConfigPB, ) - from google.cloud.spanner_v1.client import InstanceConfig + from google.cloud.spanner_admin_instance_v1 import ListInstanceConfigsRequest + from google.cloud.spanner_admin_instance_v1 import ListInstanceConfigsResponse - api = instance_admin_client.InstanceAdminClient(mock.Mock()) - credentials = _make_credentials() + api = InstanceAdminClient(credentials=AnonymousCredentials()) + credentials = build_scoped_credentials() client = self._make_one(project=self.PROJECT, credentials=credentials) client._instance_admin_api = api - instance_config_pbs = spanner_instance_admin_pb2.ListInstanceConfigsResponse( + instance_config_pbs = ListInstanceConfigsResponse( instance_configs=[ - spanner_instance_admin_pb2.InstanceConfig( - name=self.CONFIGURATION_NAME, display_name=self.DISPLAY_NAME + InstanceConfigPB( + name=self.CONFIGURATION_NAME, + display_name=self.DISPLAY_NAME, + leader_options=self.LEADER_OPTIONS, ) ] ) - lic_api = api._inner_api_calls["list_instance_configs"] = mock.Mock( - return_value=instance_config_pbs - ) + lic_api = api._transport._wrapped_methods[ + api._transport.list_instance_configs + ] = mock.Mock(return_value=instance_config_pbs) response = client.list_instance_configs() instance_configs = list(response) instance_config = instance_configs[0] - self.assertIsInstance(instance_config, InstanceConfig) + self.assertIsInstance(instance_config, InstanceConfigPB) self.assertEqual(instance_config.name, self.CONFIGURATION_NAME) self.assertEqual(instance_config.display_name, self.DISPLAY_NAME) + self.assertEqual(instance_config.leader_options, self.LEADER_OPTIONS) - expected_metadata = [ + expected_metadata = ( ("google-cloud-resource-prefix", client.project_name), ("x-goog-request-params", "parent={}".format(client.project_name)), - ] + ) lic_api.assert_called_once_with( - spanner_instance_admin_pb2.ListInstanceConfigsRequest(parent=self.PATH), + ListInstanceConfigsRequest(parent=self.PATH), metadata=expected_metadata, retry=mock.ANY, timeout=mock.ANY, ) def test_list_instance_configs_w_options(self): - from google.cloud.spanner_admin_instance_v1.gapic import instance_admin_client - from google.cloud.spanner_admin_instance_v1.proto import ( - spanner_instance_admin_pb2, + from google.cloud.spanner_admin_instance_v1 import InstanceAdminClient + from google.cloud.spanner_admin_instance_v1 import ( + InstanceConfig as InstanceConfigPB, ) + from google.cloud.spanner_admin_instance_v1 import ListInstanceConfigsRequest + from 
google.cloud.spanner_admin_instance_v1 import ListInstanceConfigsResponse - api = instance_admin_client.InstanceAdminClient(mock.Mock()) - credentials = _make_credentials() + credentials = build_scoped_credentials() + api = InstanceAdminClient(credentials=credentials) client = self._make_one(project=self.PROJECT, credentials=credentials) client._instance_admin_api = api - instance_config_pbs = spanner_instance_admin_pb2.ListInstanceConfigsResponse( + instance_config_pbs = ListInstanceConfigsResponse( instance_configs=[ - spanner_instance_admin_pb2.InstanceConfig( + InstanceConfigPB( name=self.CONFIGURATION_NAME, display_name=self.DISPLAY_NAME ) ] ) - lic_api = api._inner_api_calls["list_instance_configs"] = mock.Mock( - return_value=instance_config_pbs - ) + lic_api = api._transport._wrapped_methods[ + api._transport.list_instance_configs + ] = mock.Mock(return_value=instance_config_pbs) - token = "token" page_size = 42 - list(client.list_instance_configs(page_token=token, page_size=42)) + list(client.list_instance_configs(page_size=42)) - expected_metadata = [ + expected_metadata = ( ("google-cloud-resource-prefix", client.project_name), ("x-goog-request-params", "parent={}".format(client.project_name)), - ] + ) lic_api.assert_called_once_with( - spanner_instance_admin_pb2.ListInstanceConfigsRequest( - parent=self.PATH, page_size=page_size, page_token=token - ), + ListInstanceConfigsRequest(parent=self.PATH, page_size=page_size), metadata=expected_metadata, retry=mock.ANY, timeout=mock.ANY, @@ -454,7 +588,7 @@ def test_instance_factory_defaults(self): from google.cloud.spanner_v1.instance import DEFAULT_NODE_COUNT from google.cloud.spanner_v1.instance import Instance - credentials = _make_credentials() + credentials = build_scoped_credentials() client = self._make_one(project=self.PROJECT, credentials=credentials) instance = client.instance(self.INSTANCE_ID) @@ -464,12 +598,13 @@ def test_instance_factory_defaults(self): self.assertIsNone(instance.configuration_name) self.assertEqual(instance.display_name, self.INSTANCE_ID) self.assertEqual(instance.node_count, DEFAULT_NODE_COUNT) + self.assertEqual(instance.labels, {}) self.assertIs(instance._client, client) def test_instance_factory_explicit(self): from google.cloud.spanner_v1.instance import Instance - credentials = _make_credentials() + credentials = build_scoped_credentials() client = self._make_one(project=self.PROJECT, credentials=credentials) instance = client.instance( @@ -477,6 +612,7 @@ def test_instance_factory_explicit(self): self.CONFIGURATION_NAME, display_name=self.DISPLAY_NAME, node_count=self.NODE_COUNT, + labels=self.LABELS, ) self.assertIsInstance(instance, Instance) @@ -484,85 +620,84 @@ def test_instance_factory_explicit(self): self.assertEqual(instance.configuration_name, self.CONFIGURATION_NAME) self.assertEqual(instance.display_name, self.DISPLAY_NAME) self.assertEqual(instance.node_count, self.NODE_COUNT) + self.assertEqual(instance.labels, self.LABELS) self.assertIs(instance._client, client) def test_list_instances(self): - from google.cloud.spanner_admin_instance_v1.gapic import instance_admin_client - from google.cloud.spanner_admin_instance_v1.proto import ( - spanner_instance_admin_pb2, - ) - from google.cloud.spanner_v1.client import Instance + from google.cloud.spanner_admin_instance_v1 import InstanceAdminClient + from google.cloud.spanner_admin_instance_v1 import Instance as InstancePB + from google.cloud.spanner_admin_instance_v1 import ListInstancesRequest + from google.cloud.spanner_admin_instance_v1 
import ListInstancesResponse - api = instance_admin_client.InstanceAdminClient(mock.Mock()) - credentials = _make_credentials() + credentials = build_scoped_credentials() + api = InstanceAdminClient(credentials=credentials) client = self._make_one(project=self.PROJECT, credentials=credentials) client._instance_admin_api = api - instance_pbs = spanner_instance_admin_pb2.ListInstancesResponse( + instance_pbs = ListInstancesResponse( instances=[ - spanner_instance_admin_pb2.Instance( + InstancePB( name=self.INSTANCE_NAME, config=self.CONFIGURATION_NAME, display_name=self.DISPLAY_NAME, node_count=self.NODE_COUNT, + processing_units=self.PROCESSING_UNITS, ) ] ) - li_api = api._inner_api_calls["list_instances"] = mock.Mock( - return_value=instance_pbs - ) + li_api = api._transport._wrapped_methods[ + api._transport.list_instances + ] = mock.Mock(return_value=instance_pbs) response = client.list_instances() instances = list(response) instance = instances[0] - self.assertIsInstance(instance, Instance) + self.assertIsInstance(instance, InstancePB) self.assertEqual(instance.name, self.INSTANCE_NAME) - self.assertEqual(instance.configuration_name, self.CONFIGURATION_NAME) + self.assertEqual(instance.config, self.CONFIGURATION_NAME) self.assertEqual(instance.display_name, self.DISPLAY_NAME) self.assertEqual(instance.node_count, self.NODE_COUNT) + self.assertEqual(instance.processing_units, self.PROCESSING_UNITS) - expected_metadata = [ + expected_metadata = ( ("google-cloud-resource-prefix", client.project_name), ("x-goog-request-params", "parent={}".format(client.project_name)), - ] + ) li_api.assert_called_once_with( - spanner_instance_admin_pb2.ListInstancesRequest(parent=self.PATH), + ListInstancesRequest(parent=self.PATH), metadata=expected_metadata, retry=mock.ANY, timeout=mock.ANY, ) def test_list_instances_w_options(self): - from google.cloud.spanner_admin_instance_v1.gapic import instance_admin_client - from google.cloud.spanner_admin_instance_v1.proto import ( - spanner_instance_admin_pb2, - ) + from google.cloud.spanner_admin_instance_v1 import InstanceAdminClient + from google.cloud.spanner_admin_instance_v1 import ListInstancesRequest + from google.cloud.spanner_admin_instance_v1 import ListInstancesResponse - api = instance_admin_client.InstanceAdminClient(mock.Mock()) - credentials = _make_credentials() + credentials = build_scoped_credentials() + api = InstanceAdminClient(credentials=credentials) client = self._make_one(project=self.PROJECT, credentials=credentials) client._instance_admin_api = api - instance_pbs = spanner_instance_admin_pb2.ListInstancesResponse(instances=[]) + instance_pbs = ListInstancesResponse(instances=[]) - li_api = api._inner_api_calls["list_instances"] = mock.Mock( - return_value=instance_pbs - ) + li_api = api._transport._wrapped_methods[ + api._transport.list_instances + ] = mock.Mock(return_value=instance_pbs) - token = "token" page_size = 42 - list(client.list_instances(page_token=token, page_size=42)) + filter_ = "name:instance" + list(client.list_instances(filter_=filter_, page_size=42)) - expected_metadata = [ + expected_metadata = ( ("google-cloud-resource-prefix", client.project_name), ("x-goog-request-params", "parent={}".format(client.project_name)), - ] + ) li_api.assert_called_once_with( - spanner_instance_admin_pb2.ListInstancesRequest( - parent=self.PATH, page_size=page_size, page_token=token - ), + ListInstancesRequest(parent=self.PATH, filter=filter_, page_size=page_size), metadata=expected_metadata, retry=mock.ANY, timeout=mock.ANY, diff 
--git a/tests/unit/test_database.py b/tests/unit/test_database.py index 37d9eb41a9..1c7f58c4ab 100644 --- a/tests/unit/test_database.py +++ b/tests/unit/test_database.py @@ -16,7 +16,30 @@ import unittest import mock - +from google.api_core import gapic_v1 +from google.cloud.spanner_admin_database_v1 import ( + Database as DatabasePB, + DatabaseDialect, +) + +from google.cloud.spanner_v1.param_types import INT64 +from google.api_core.retry import Retry +from google.protobuf.field_mask_pb2 import FieldMask + +from google.cloud.spanner_v1 import ( + RequestOptions, + DirectedReadOptions, + DefaultTransactionOptions, +) +from google.cloud.spanner_v1._helpers import ( + AtomicCounter, + _metadata_with_request_id, +) +from google.cloud.spanner_v1.request_id_header import REQ_RAND_PROCESS_ID +from google.cloud.spanner_v1.session import Session +from google.cloud.spanner_v1.database_sessions_manager import TransactionType +from tests._builders import build_spanner_api +from tests._helpers import is_multiplexed_enabled DML_WO_PARAM = """ DELETE FROM citizens @@ -27,23 +50,22 @@ VALUES ("Phred", "Phlyntstone", @age) """ PARAMS = {"age": 30} -PARAM_TYPES = {"age": "INT64"} +PARAM_TYPES = {"age": INT64} MODE = 2 # PROFILE - - -def _make_credentials(): # pragma: NO COVER - import google.auth.credentials - - class _CredentialsWithScopes( - google.auth.credentials.Credentials, google.auth.credentials.Scoped - ): - pass - - return mock.Mock(spec=_CredentialsWithScopes) +DIRECTED_READ_OPTIONS = { + "include_replicas": { + "replica_selections": [ + { + "location": "us-west1", + "type_": DirectedReadOptions.ReplicaSelection.Type.READ_ONLY, + }, + ], + "auto_failover_disabled": True, + }, +} class _BaseTest(unittest.TestCase): - PROJECT_ID = "project-id" PARENT = "projects/" + PROJECT_ID INSTANCE_ID = "instance-id" @@ -53,8 +75,11 @@ class _BaseTest(unittest.TestCase): SESSION_ID = "session_id" SESSION_NAME = DATABASE_NAME + "/sessions/" + SESSION_ID TRANSACTION_ID = b"transaction_id" + RETRY_TRANSACTION_ID = b"transaction_id_retry" BACKUP_ID = "backup_id" BACKUP_NAME = INSTANCE_NAME + "/backups/" + BACKUP_ID + TRANSACTION_TAG = "transaction-tag" + DATABASE_ROLE = "dummy-role" def _make_one(self, *args, **kwargs): return self._get_target_class()(*args, **kwargs) @@ -87,11 +112,11 @@ def _make_database_admin_api(): @staticmethod def _make_spanner_api(): - import google.cloud.spanner_v1.gapic.spanner_client + from google.cloud.spanner_v1 import SpannerClient - return mock.create_autospec( - google.cloud.spanner_v1.gapic.spanner_client.SpannerClient, instance=True - ) + api = mock.create_autospec(SpannerClient, instance=True) + api._transport = "transport" + return api def test_ctor_defaults(self): from google.cloud.spanner_v1.pool import BurstyPool @@ -104,8 +129,12 @@ def test_ctor_defaults(self): self.assertIs(database._instance, instance) self.assertEqual(list(database.ddl_statements), []) self.assertIsInstance(database._pool, BurstyPool) + self.assertFalse(database.log_commit_stats) + self.assertIsNone(database._logger) # BurstyPool does not create sessions during 'bind()'. 
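# `build_scoped_credentials()` replaces the removed `_make_credentials()`
# helper throughout these tests. Its definition is not part of this diff;
# a minimal sketch consistent with how it is called (the shared test-builder
# module it lives in is an assumption) would mirror the old helper:
#
#     import mock
#     import google.auth.credentials
#
#     def build_scoped_credentials():
#         # Mock whose spec combines Credentials with Scoped, so client code
#         # can call with_scopes() / requires_scopes() on it.
#         class _CredentialsWithScopes(
#             google.auth.credentials.Credentials, google.auth.credentials.Scoped
#         ):
#             pass
#
#         return mock.Mock(spec=_CredentialsWithScopes)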
         self.assertTrue(database._pool._sessions.empty())
+        self.assertIsNone(database.database_role)
+        self.assertTrue(database._route_to_leader_enabled)
 
     def test_ctor_w_explicit_pool(self):
         instance = _Instance(self.INSTANCE_NAME)
         pool = _Pool()
         database = self._make_one(self.DATABASE_ID, instance, pool=pool)
@@ -117,15 +146,32 @@ def test_ctor_w_explicit_pool(self):
         self.assertIs(database._pool, pool)
         self.assertIs(pool._bound, database)
 
-    def test_ctor_w_ddl_statements_non_string(self):
+    def test_ctor_w_database_role(self):
+        instance = _Instance(self.INSTANCE_NAME)
+        database = self._make_one(
+            self.DATABASE_ID, instance, database_role=self.DATABASE_ROLE
+        )
+        self.assertEqual(database.database_id, self.DATABASE_ID)
+        self.assertIs(database._instance, instance)
+        self.assertIs(database.database_role, self.DATABASE_ROLE)
+
+    def test_ctor_w_route_to_leader_disabled(self):
+        client = _Client(route_to_leader_enabled=False)
+        instance = _Instance(self.INSTANCE_NAME, client=client)
+        database = self._make_one(
+            self.DATABASE_ID, instance, database_role=self.DATABASE_ROLE
+        )
+        self.assertEqual(database.database_id, self.DATABASE_ID)
+        self.assertIs(database._instance, instance)
+        self.assertFalse(database._route_to_leader_enabled)
+    def test_ctor_w_ddl_statements_non_string(self):
         with self.assertRaises(ValueError):
             self._make_one(
                 self.DATABASE_ID, instance=object(), ddl_statements=[object()]
             )
 
     def test_ctor_w_ddl_statements_w_create_database(self):
-
         with self.assertRaises(ValueError):
             self._make_one(
                 self.DATABASE_ID,
@@ -145,54 +191,87 @@ def test_ctor_w_ddl_statements_ok(self):
         self.assertIs(database._instance, instance)
         self.assertEqual(list(database.ddl_statements), DDL_STATEMENTS)
 
-    def test_from_pb_bad_database_name(self):
-        from google.cloud.spanner_admin_database_v1.proto import (
-            spanner_database_admin_pb2 as admin_v1_pb2,
+    def test_ctor_w_explicit_logger(self):
+        from logging import Logger
+
+        instance = _Instance(self.INSTANCE_NAME)
+        logger = mock.create_autospec(Logger, instance=True)
+        database = self._make_one(self.DATABASE_ID, instance, logger=logger)
+        self.assertEqual(database.database_id, self.DATABASE_ID)
+        self.assertIs(database._instance, instance)
+        self.assertEqual(list(database.ddl_statements), [])
+        self.assertFalse(database.log_commit_stats)
+        self.assertEqual(database._logger, logger)
+
+    def test_ctor_w_encryption_config(self):
+        from google.cloud.spanner_admin_database_v1 import EncryptionConfig
+
+        instance = _Instance(self.INSTANCE_NAME)
+        encryption_config = EncryptionConfig(kms_key_name="kms_key")
+        database = self._make_one(
+            self.DATABASE_ID, instance, encryption_config=encryption_config
+        )
+        self.assertEqual(database.database_id, self.DATABASE_ID)
+        self.assertIs(database._instance, instance)
+        self.assertEqual(database._encryption_config, encryption_config)
+
+    def test_ctor_w_directed_read_options(self):
+        client = _Client(directed_read_options=DIRECTED_READ_OPTIONS)
+        instance = _Instance(self.INSTANCE_NAME, client=client)
+        database = self._make_one(
+            self.DATABASE_ID, instance, database_role=self.DATABASE_ROLE
        )
+        self.assertEqual(database.database_id, self.DATABASE_ID)
+        self.assertIs(database._instance, instance)
+        self.assertEqual(database._directed_read_options, DIRECTED_READ_OPTIONS)
+
+    def test_ctor_w_proto_descriptors(self):
+        instance = _Instance(self.INSTANCE_NAME)
+        database = self._make_one(self.DATABASE_ID, instance, proto_descriptors=b"")
+        self.assertEqual(database.database_id, self.DATABASE_ID)
+        self.assertIs(database._instance, instance)
+        self.assertEqual(database._proto_descriptors, b"")
+
+    def
test_from_pb_bad_database_name(self): + from google.cloud.spanner_admin_database_v1 import Database database_name = "INCORRECT_FORMAT" - database_pb = admin_v1_pb2.Database(name=database_name) + database_pb = Database(name=database_name) klass = self._get_target_class() with self.assertRaises(ValueError): klass.from_pb(database_pb, None) def test_from_pb_project_mistmatch(self): - from google.cloud.spanner_admin_database_v1.proto import ( - spanner_database_admin_pb2 as admin_v1_pb2, - ) + from google.cloud.spanner_admin_database_v1 import Database ALT_PROJECT = "ALT_PROJECT" client = _Client(project=ALT_PROJECT) instance = _Instance(self.INSTANCE_NAME, client) - database_pb = admin_v1_pb2.Database(name=self.DATABASE_NAME) + database_pb = Database(name=self.DATABASE_NAME) klass = self._get_target_class() with self.assertRaises(ValueError): klass.from_pb(database_pb, instance) def test_from_pb_instance_mistmatch(self): - from google.cloud.spanner_admin_database_v1.proto import ( - spanner_database_admin_pb2 as admin_v1_pb2, - ) + from google.cloud.spanner_admin_database_v1 import Database ALT_INSTANCE = "/projects/%s/instances/ALT-INSTANCE" % (self.PROJECT_ID,) client = _Client() instance = _Instance(ALT_INSTANCE, client) - database_pb = admin_v1_pb2.Database(name=self.DATABASE_NAME) + database_pb = Database(name=self.DATABASE_NAME) klass = self._get_target_class() with self.assertRaises(ValueError): klass.from_pb(database_pb, instance) def test_from_pb_success_w_explicit_pool(self): - from google.cloud.spanner_admin_database_v1.proto import ( - spanner_database_admin_pb2 as admin_v1_pb2, - ) + from google.cloud.spanner_admin_database_v1 import Database client = _Client() instance = _Instance(self.INSTANCE_NAME, client) - database_pb = admin_v1_pb2.Database(name=self.DATABASE_NAME) + database_pb = Database(name=self.DATABASE_NAME) klass = self._get_target_class() pool = _Pool() @@ -204,16 +283,14 @@ def test_from_pb_success_w_explicit_pool(self): self.assertIs(database._pool, pool) def test_from_pb_success_w_hyphen_w_default_pool(self): - from google.cloud.spanner_admin_database_v1.proto import ( - spanner_database_admin_pb2 as admin_v1_pb2, - ) + from google.cloud.spanner_admin_database_v1 import Database from google.cloud.spanner_v1.pool import BurstyPool DATABASE_ID_HYPHEN = "database-id" DATABASE_NAME_HYPHEN = self.INSTANCE_NAME + "/databases/" + DATABASE_ID_HYPHEN client = _Client() instance = _Instance(self.INSTANCE_NAME, client) - database_pb = admin_v1_pb2.Database(name=DATABASE_NAME_HYPHEN) + database_pb = Database(name=DATABASE_NAME_HYPHEN) klass = self._get_target_class() database = klass.from_pb(database_pb, instance) @@ -240,16 +317,16 @@ def test_create_time_property(self): self.assertEqual(database.create_time, expected_create_time) def test_state_property(self): - from google.cloud.spanner_admin_database_v1.gapic import enums + from google.cloud.spanner_admin_database_v1 import Database instance = _Instance(self.INSTANCE_NAME) pool = _Pool() database = self._make_one(self.DATABASE_ID, instance, pool=pool) - expected_state = database._state = enums.Database.State.READY + expected_state = database._state = Database.State.READY self.assertEqual(database.state, expected_state) def test_restore_info(self): - from google.cloud.spanner_v1.database import RestoreInfo + from google.cloud.spanner_admin_database_v1 import RestoreInfo instance = _Instance(self.INSTANCE_NAME) pool = _Pool() @@ -259,139 +336,103 @@ def test_restore_info(self): ) self.assertEqual(database.restore_info, 
restore_info) - def test_spanner_api_property_w_scopeless_creds(self): - from google.cloud.spanner_admin_instance_v1.proto import ( - spanner_instance_admin_pb2 as admin_v1_pb2, - ) - - client = _Client() - client.instance_admin_api.get_instance.return_value = admin_v1_pb2.Instance( - endpoint_uris=[] - ) - client_info = client._client_info = mock.Mock() - client_options = client._client_options = mock.Mock() - credentials = client.credentials = object() - instance = _Instance(self.INSTANCE_NAME, client=client) + def test_version_retention_period(self): + instance = _Instance(self.INSTANCE_NAME) pool = _Pool() database = self._make_one(self.DATABASE_ID, instance, pool=pool) + version_retention_period = database._version_retention_period = "1d" + self.assertEqual(database.version_retention_period, version_retention_period) - patch = mock.patch("google.cloud.spanner_v1.database.SpannerClient") - - with mock.patch("os.getenv") as getenv: - getenv.return_value = "true" - with patch as spanner_client: - api = database.spanner_api - - self.assertIs(api, spanner_client.return_value) - - # API instance is cached - again = database.spanner_api - self.assertIs(again, api) + def test_earliest_version_time(self): + instance = _Instance(self.INSTANCE_NAME) + pool = _Pool() + database = self._make_one(self.DATABASE_ID, instance, pool=pool) + earliest_version_time = database._earliest_version_time = self._make_timestamp() + self.assertEqual(database.earliest_version_time, earliest_version_time) - client.instance_admin_api.get_instance.assert_called_once() - spanner_client.assert_called_once_with( - credentials=credentials, - client_info=client_info, - client_options=client_options, - ) + def test_logger_property_default(self): + import logging - def test_spanner_api_w_scoped_creds(self): - import google.auth.credentials - from google.cloud.spanner_admin_instance_v1.proto import ( - spanner_instance_admin_pb2 as admin_v1_pb2, - ) - from google.cloud.spanner_v1.database import SPANNER_DATA_SCOPE + instance = _Instance(self.INSTANCE_NAME) + pool = _Pool() + database = self._make_one(self.DATABASE_ID, instance, pool=pool) + logger = logging.getLogger(database.name) + self.assertEqual(database.logger, logger) - class _CredentialsWithScopes(google.auth.credentials.Scoped): - def __init__(self, scopes=(), source=None): - self._scopes = scopes - self._source = source + def test_logger_property_custom(self): + import logging - def requires_scopes(self): # pragma: NO COVER - return True + instance = _Instance(self.INSTANCE_NAME) + pool = _Pool() + database = self._make_one(self.DATABASE_ID, instance, pool=pool) + logger = database._logger = mock.create_autospec(logging.Logger, instance=True) + self.assertEqual(database.logger, logger) - def with_scopes(self, scopes): - return self.__class__(scopes, self) + def test_encryption_config(self): + from google.cloud.spanner_admin_database_v1 import EncryptionConfig - expected_scopes = (SPANNER_DATA_SCOPE,) - client = _Client() - client_info = client._client_info = mock.Mock() - client_options = client._client_options = mock.Mock() - credentials = client.credentials = _CredentialsWithScopes() - instance = _Instance(self.INSTANCE_NAME, client=client) + instance = _Instance(self.INSTANCE_NAME) pool = _Pool() database = self._make_one(self.DATABASE_ID, instance, pool=pool) - - patch = mock.patch("google.cloud.spanner_v1.database.SpannerClient") - client.instance_admin_api.get_instance.return_value = admin_v1_pb2.Instance( - endpoint_uris=[] + encryption_config = 
database._encryption_config = mock.create_autospec( + EncryptionConfig, instance=True ) + self.assertEqual(database.encryption_config, encryption_config) - with mock.patch("os.getenv") as getenv: - getenv.return_value = "true" - with patch as spanner_client: - api = database.spanner_api + def test_encryption_info(self): + from google.cloud.spanner_admin_database_v1 import EncryptionInfo - self.assertNotIn(instance.name, client._endpoint_cache) - - # API instance is cached - again = database.spanner_api - self.assertIs(again, api) + instance = _Instance(self.INSTANCE_NAME) + pool = _Pool() + database = self._make_one(self.DATABASE_ID, instance, pool=pool) + encryption_info = database._encryption_info = [ + mock.create_autospec(EncryptionInfo, instance=True) + ] + self.assertEqual(database.encryption_info, encryption_info) - client.instance_admin_api.get_instance.assert_called_once() - self.assertEqual(len(spanner_client.call_args_list), 1) - called_args, called_kw = spanner_client.call_args - self.assertEqual(called_args, ()) - self.assertEqual(called_kw["client_info"], client_info) - self.assertEqual(called_kw["client_options"], client_options) - scoped = called_kw["credentials"] - self.assertEqual(scoped._scopes, expected_scopes) - self.assertIs(scoped._source, credentials) + def test_default_leader(self): + instance = _Instance(self.INSTANCE_NAME) + pool = _Pool() + database = self._make_one(self.DATABASE_ID, instance, pool=pool) + default_leader = database._default_leader = "us-east4" + self.assertEqual(database.default_leader, default_leader) - def test_spanner_api_property_w_scopeless_creds_and_new_endpoint(self): - from google.cloud.spanner_admin_instance_v1.proto import ( - spanner_instance_admin_pb2 as admin_v1_pb2, + def test_proto_descriptors(self): + instance = _Instance(self.INSTANCE_NAME) + pool = _Pool() + database = self._make_one( + self.DATABASE_ID, instance, pool=pool, proto_descriptors=b"" ) + self.assertEqual(database.proto_descriptors, b"") + def test_spanner_api_property_w_scopeless_creds(self): client = _Client() - client.instance_admin_api.get_instance.return_value = admin_v1_pb2.Instance( - endpoint_uris=["test1", "test2"] - ) client_info = client._client_info = mock.Mock() - client._client_options = mock.Mock() + client_options = client._client_options = mock.Mock() credentials = client.credentials = object() instance = _Instance(self.INSTANCE_NAME, client=client) pool = _Pool() database = self._make_one(self.DATABASE_ID, instance, pool=pool) - client_patch = mock.patch("google.cloud.spanner_v1.database.SpannerClient") - options_patch = mock.patch("google.cloud.spanner_v1.database.ClientOptions") + patch = mock.patch("google.cloud.spanner_v1.database.SpannerClient") - with mock.patch("os.getenv") as getenv: - getenv.return_value = "true" - with options_patch as options: - with client_patch as spanner_client: - api = database.spanner_api + with patch as spanner_client: + api = database.spanner_api self.assertIs(api, spanner_client.return_value) - self.assertIn(instance.name, client._endpoint_cache) # API instance is cached again = database.spanner_api self.assertIs(again, api) - self.assertEqual(len(spanner_client.call_args_list), 1) - called_args, called_kw = spanner_client.call_args - self.assertEqual(called_args, ()) - self.assertEqual(called_kw["client_info"], client_info) - self.assertEqual(called_kw["credentials"], credentials) - options.assert_called_with(api_endpoint="test1") + spanner_client.assert_called_once_with( + credentials=credentials, + 
client_info=client_info, + client_options=client_options, + ) - def test_spanner_api_w_scoped_creds_and_new_endpoint(self): + def test_spanner_api_w_scoped_creds(self): import google.auth.credentials - from google.cloud.spanner_admin_instance_v1.proto import ( - spanner_instance_admin_pb2 as admin_v1_pb2, - ) from google.cloud.spanner_v1.database import SPANNER_DATA_SCOPE class _CredentialsWithScopes(google.auth.credentials.Scoped): @@ -408,26 +449,16 @@ def with_scopes(self, scopes): expected_scopes = (SPANNER_DATA_SCOPE,) client = _Client() client_info = client._client_info = mock.Mock() - client._client_options = mock.Mock() + client_options = client._client_options = mock.Mock() credentials = client.credentials = _CredentialsWithScopes() instance = _Instance(self.INSTANCE_NAME, client=client) pool = _Pool() database = self._make_one(self.DATABASE_ID, instance, pool=pool) - client_patch = mock.patch("google.cloud.spanner_v1.database.SpannerClient") - options_patch = mock.patch("google.cloud.spanner_v1.database.ClientOptions") - client.instance_admin_api.get_instance.return_value = admin_v1_pb2.Instance( - endpoint_uris=["test1", "test2"] - ) - - with mock.patch("os.getenv") as getenv: - getenv.return_value = "true" - with options_patch as options: - with client_patch as spanner_client: - api = database.spanner_api + patch = mock.patch("google.cloud.spanner_v1.database.SpannerClient") - self.assertIs(api, spanner_client.return_value) - self.assertIn(instance.name, client._endpoint_cache) + with patch as spanner_client: + api = database.spanner_api # API instance is cached again = database.spanner_api @@ -437,132 +468,10 @@ def with_scopes(self, scopes): called_args, called_kw = spanner_client.call_args self.assertEqual(called_args, ()) self.assertEqual(called_kw["client_info"], client_info) + self.assertEqual(called_kw["client_options"], client_options) scoped = called_kw["credentials"] self.assertEqual(scoped._scopes, expected_scopes) self.assertIs(scoped._source, credentials) - options.assert_called_with(api_endpoint="test1") - - def test_spanner_api_resource_routing_permissions_error(self): - from google.api_core.exceptions import PermissionDenied - - client = _Client() - client_info = client._client_info = mock.Mock() - client_options = client._client_options = mock.Mock() - client._endpoint_cache = {} - credentials = client.credentials = mock.Mock() - instance = _Instance(self.INSTANCE_NAME, client=client) - pool = _Pool() - database = self._make_one(self.DATABASE_ID, instance, pool=pool) - - patch = mock.patch("google.cloud.spanner_v1.database.SpannerClient") - client.instance_admin_api.get_instance.side_effect = PermissionDenied("test") - - with mock.patch("os.getenv") as getenv: - getenv.return_value = "true" - with patch as spanner_client: - api = database.spanner_api - - self.assertIs(api, spanner_client.return_value) - - # API instance is cached - again = database.spanner_api - self.assertIs(again, api) - - client.instance_admin_api.get_instance.assert_called_once() - spanner_client.assert_called_once_with( - credentials=credentials, - client_info=client_info, - client_options=client_options, - ) - - def test_spanner_api_disable_resource_routing(self): - client = _Client() - client_info = client._client_info = mock.Mock() - client_options = client._client_options = mock.Mock() - client._endpoint_cache = {} - credentials = client.credentials = mock.Mock() - instance = _Instance(self.INSTANCE_NAME, client=client) - pool = _Pool() - database = self._make_one(self.DATABASE_ID, 
instance, pool=pool) - - patch = mock.patch("google.cloud.spanner_v1.database.SpannerClient") - - with mock.patch("os.getenv") as getenv: - getenv.return_value = "false" - with patch as spanner_client: - api = database.spanner_api - - self.assertIs(api, spanner_client.return_value) - - # API instance is cached - again = database.spanner_api - self.assertIs(again, api) - - client.instance_admin_api.get_instance.assert_not_called() - spanner_client.assert_called_once_with( - credentials=credentials, - client_info=client_info, - client_options=client_options, - ) - - def test_spanner_api_cached_endpoint(self): - from google.cloud.spanner_admin_instance_v1.proto import ( - spanner_instance_admin_pb2 as admin_v1_pb2, - ) - - client = _Client() - client_info = client._client_info = mock.Mock() - client._client_options = mock.Mock() - client._endpoint_cache = {self.INSTANCE_NAME: "cached"} - credentials = client.credentials = mock.Mock() - instance = _Instance(self.INSTANCE_NAME, client=client) - pool = _Pool() - database = self._make_one(self.DATABASE_ID, instance, pool=pool) - - client_patch = mock.patch("google.cloud.spanner_v1.database.SpannerClient") - options_patch = mock.patch("google.cloud.spanner_v1.database.ClientOptions") - client.instance_admin_api.get_instance.return_value = admin_v1_pb2.Instance( - endpoint_uris=["test1", "test2"] - ) - - with mock.patch("os.getenv") as getenv: - getenv.return_value = "true" - with options_patch as options: - with client_patch as spanner_client: - api = database.spanner_api - - self.assertIs(api, spanner_client.return_value) - - # API instance is cached - again = database.spanner_api - self.assertIs(again, api) - - self.assertEqual(len(spanner_client.call_args_list), 1) - called_args, called_kw = spanner_client.call_args - self.assertEqual(called_args, ()) - self.assertEqual(called_kw["client_info"], client_info) - self.assertEqual(called_kw["credentials"], credentials) - options.assert_called_with(api_endpoint="cached") - - def test_spanner_api_resource_routing_error(self): - from google.api_core.exceptions import GoogleAPIError - - client = _Client() - client._client_info = mock.Mock() - client._client_options = mock.Mock() - client.credentials = mock.Mock() - instance = _Instance(self.INSTANCE_NAME, client=client) - pool = _Pool() - database = self._make_one(self.DATABASE_ID, instance, pool=pool) - - client.instance_admin_api.get_instance.side_effect = GoogleAPIError("test") - - with mock.patch("os.getenv") as getenv: - getenv.return_value = "true" - with self.assertRaises(GoogleAPIError): - database.spanner_api - - client.instance_admin_api.get_instance.assert_called_once() def test_spanner_api_w_emulator_host(self): client = _Client() @@ -593,8 +502,9 @@ def test___eq__(self): self.assertEqual(database1, database2) def test___eq__type_differ(self): + instance = _Instance(self.INSTANCE_NAME) pool = _Pool() - database1 = self._make_one(self.DATABASE_ID, None, pool=pool) + database1 = self._make_one(self.DATABASE_ID, instance, pool=pool) database2 = object() self.assertNotEqual(database1, database2) @@ -607,14 +517,18 @@ def test___ne__same_value(self): self.assertFalse(comparison_val) def test___ne__(self): + instance1, instance2 = _Instance(self.INSTANCE_NAME + "1"), _Instance( + self.INSTANCE_NAME + "2" + ) pool1, pool2 = _Pool(), _Pool() - database1 = self._make_one("database_id1", "instance1", pool=pool1) - database2 = self._make_one("database_id2", "instance2", pool=pool2) + database1 = self._make_one("database_id1", instance1, pool=pool1) + 
database2 = self._make_one("database_id2", instance2, pool=pool2) self.assertNotEqual(database1, database2) def test_create_grpc_error(self): from google.api_core.exceptions import GoogleAPICallError from google.api_core.exceptions import Unknown + from google.cloud.spanner_admin_database_v1 import CreateDatabaseRequest client = _Client() api = client.database_admin_api = self._make_database_admin_api() @@ -627,15 +541,27 @@ def test_create_grpc_error(self): with self.assertRaises(GoogleAPICallError): database.create() - api.create_database.assert_called_once_with( + expected_request = CreateDatabaseRequest( parent=self.INSTANCE_NAME, create_statement="CREATE DATABASE {}".format(self.DATABASE_ID), extra_statements=[], - metadata=[("google-cloud-resource-prefix", database.name)], + encryption_config=None, + ) + + api.create_database.assert_called_once_with( + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) def test_create_already_exists(self): from google.cloud.exceptions import Conflict + from google.cloud.spanner_admin_database_v1 import CreateDatabaseRequest DATABASE_ID_HYPHEN = "database-id" client = _Client() @@ -648,15 +574,27 @@ def test_create_already_exists(self): with self.assertRaises(Conflict): database.create() - api.create_database.assert_called_once_with( + expected_request = CreateDatabaseRequest( parent=self.INSTANCE_NAME, create_statement="CREATE DATABASE `{}`".format(DATABASE_ID_HYPHEN), extra_statements=[], - metadata=[("google-cloud-resource-prefix", database.name)], + encryption_config=None, + ) + + api.create_database.assert_called_once_with( + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) def test_create_instance_not_found(self): from google.cloud.exceptions import NotFound + from google.cloud.spanner_admin_database_v1 import CreateDatabaseRequest client = _Client() api = client.database_admin_api = self._make_database_admin_api() @@ -668,15 +606,28 @@ def test_create_instance_not_found(self): with self.assertRaises(NotFound): database.create() - api.create_database.assert_called_once_with( + expected_request = CreateDatabaseRequest( parent=self.INSTANCE_NAME, create_statement="CREATE DATABASE {}".format(self.DATABASE_ID), extra_statements=[], - metadata=[("google-cloud-resource-prefix", database.name)], + encryption_config=None, + ) + + api.create_database.assert_called_once_with( + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) def test_create_success(self): from tests._fixtures import DDL_STATEMENTS + from google.cloud.spanner_admin_database_v1 import CreateDatabaseRequest + from google.cloud.spanner_admin_database_v1 import EncryptionConfig op_future = object() client = _Client() @@ -684,37 +635,143 @@ def test_create_success(self): api.create_database.return_value = op_future instance = _Instance(self.INSTANCE_NAME, client=client) pool = _Pool() + encryption_config = EncryptionConfig(kms_key_name="kms_key_name") database = self._make_one( - self.DATABASE_ID, instance, ddl_statements=DDL_STATEMENTS, pool=pool + self.DATABASE_ID, + instance, + 
ddl_statements=DDL_STATEMENTS, + pool=pool, + encryption_config=encryption_config, ) future = database.create() self.assertIs(future, op_future) - api.create_database.assert_called_once_with( + expected_request = CreateDatabaseRequest( parent=self.INSTANCE_NAME, create_statement="CREATE DATABASE {}".format(self.DATABASE_ID), extra_statements=DDL_STATEMENTS, - metadata=[("google-cloud-resource-prefix", database.name)], + encryption_config=encryption_config, ) - def test_exists_grpc_error(self): - from google.api_core.exceptions import Unknown + api.create_database.assert_called_once_with( + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_create_success_w_encryption_config_dict(self): + from tests._fixtures import DDL_STATEMENTS + from google.cloud.spanner_admin_database_v1 import CreateDatabaseRequest + from google.cloud.spanner_admin_database_v1 import EncryptionConfig + op_future = object() client = _Client() api = client.database_admin_api = self._make_database_admin_api() - api.get_database_ddl.side_effect = Unknown("testing") + api.create_database.return_value = op_future instance = _Instance(self.INSTANCE_NAME, client=client) pool = _Pool() - database = self._make_one(self.DATABASE_ID, instance, pool=pool) + encryption_config = {"kms_key_name": "kms_key_name"} + database = self._make_one( + self.DATABASE_ID, + instance, + ddl_statements=DDL_STATEMENTS, + pool=pool, + encryption_config=encryption_config, + ) - with self.assertRaises(Unknown): - database.exists() + future = database.create() + + self.assertIs(future, op_future) + + expected_encryption_config = EncryptionConfig(**encryption_config) + expected_request = CreateDatabaseRequest( + parent=self.INSTANCE_NAME, + create_statement="CREATE DATABASE {}".format(self.DATABASE_ID), + extra_statements=DDL_STATEMENTS, + encryption_config=expected_encryption_config, + ) + + api.create_database.assert_called_once_with( + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_create_success_w_proto_descriptors(self): + from tests._fixtures import DDL_STATEMENTS + from google.cloud.spanner_admin_database_v1 import CreateDatabaseRequest + + op_future = object() + client = _Client() + api = client.database_admin_api = self._make_database_admin_api() + api.create_database.return_value = op_future + instance = _Instance(self.INSTANCE_NAME, client=client) + pool = _Pool() + proto_descriptors = b"" + database = self._make_one( + self.DATABASE_ID, + instance, + ddl_statements=DDL_STATEMENTS, + pool=pool, + proto_descriptors=proto_descriptors, + ) + + future = database.create() + + self.assertIs(future, op_future) + + expected_request = CreateDatabaseRequest( + parent=self.INSTANCE_NAME, + create_statement="CREATE DATABASE {}".format(self.DATABASE_ID), + extra_statements=DDL_STATEMENTS, + proto_descriptors=proto_descriptors, + ) + + api.create_database.assert_called_once_with( + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_exists_grpc_error(self): + from google.api_core.exceptions import Unknown + + 
client = _Client() + api = client.database_admin_api = self._make_database_admin_api() + api.get_database_ddl.side_effect = Unknown("testing") + instance = _Instance(self.INSTANCE_NAME, client=client) + pool = _Pool() + database = self._make_one(self.DATABASE_ID, instance, pool=pool) + + with self.assertRaises(Unknown): + database.exists() api.get_database_ddl.assert_called_once_with( - self.DATABASE_NAME, - metadata=[("google-cloud-resource-prefix", database.name)], + database=self.DATABASE_NAME, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) def test_exists_not_found(self): @@ -730,18 +787,22 @@ def test_exists_not_found(self): self.assertFalse(database.exists()) api.get_database_ddl.assert_called_once_with( - self.DATABASE_NAME, - metadata=[("google-cloud-resource-prefix", database.name)], + database=self.DATABASE_NAME, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) def test_exists_success(self): - from google.cloud.spanner_admin_database_v1.proto import ( - spanner_database_admin_pb2 as admin_v1_pb2, - ) + from google.cloud.spanner_admin_database_v1 import GetDatabaseDdlResponse from tests._fixtures import DDL_STATEMENTS client = _Client() - ddl_pb = admin_v1_pb2.GetDatabaseDdlResponse(statements=DDL_STATEMENTS) + ddl_pb = GetDatabaseDdlResponse(statements=DDL_STATEMENTS) api = client.database_admin_api = self._make_database_admin_api() api.get_database_ddl.return_value = ddl_pb instance = _Instance(self.INSTANCE_NAME, client=client) @@ -751,8 +812,14 @@ def test_exists_success(self): self.assertTrue(database.exists()) api.get_database_ddl.assert_called_once_with( - self.DATABASE_NAME, - metadata=[("google-cloud-resource-prefix", database.name)], + database=self.DATABASE_NAME, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) def test_reload_grpc_error(self): @@ -769,8 +836,14 @@ def test_reload_grpc_error(self): database.reload() api.get_database_ddl.assert_called_once_with( - self.DATABASE_NAME, - metadata=[("google-cloud-resource-prefix", database.name)], + database=self.DATABASE_NAME, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) def test_reload_not_found(self): @@ -787,29 +860,51 @@ def test_reload_not_found(self): database.reload() api.get_database_ddl.assert_called_once_with( - self.DATABASE_NAME, - metadata=[("google-cloud-resource-prefix", database.name)], + database=self.DATABASE_NAME, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) def test_reload_success(self): - from google.cloud.spanner_admin_database_v1.proto import ( - spanner_database_admin_pb2 as admin_v1_pb2, - ) - from google.cloud.spanner_admin_database_v1.gapic import enums + from google.cloud.spanner_admin_database_v1 import Database + from google.cloud.spanner_admin_database_v1 import EncryptionConfig + from google.cloud.spanner_admin_database_v1 import EncryptionInfo + from 
google.cloud.spanner_admin_database_v1 import GetDatabaseDdlResponse + from google.cloud.spanner_admin_database_v1 import RestoreInfo from google.cloud._helpers import _datetime_to_pb_timestamp from tests._fixtures import DDL_STATEMENTS timestamp = self._make_timestamp() - restore_info = admin_v1_pb2.RestoreInfo() + restore_info = RestoreInfo() client = _Client() - ddl_pb = admin_v1_pb2.GetDatabaseDdlResponse(statements=DDL_STATEMENTS) + ddl_pb = GetDatabaseDdlResponse(statements=DDL_STATEMENTS) + encryption_config = EncryptionConfig(kms_key_name="kms_key") + encryption_info = [ + EncryptionInfo( + encryption_type=EncryptionInfo.Type.CUSTOMER_MANAGED_ENCRYPTION, + kms_key_version="kms_key_version", + ) + ] + default_leader = "us-east4" api = client.database_admin_api = self._make_database_admin_api() api.get_database_ddl.return_value = ddl_pb - db_pb = admin_v1_pb2.Database( + db_pb = Database( state=2, create_time=_datetime_to_pb_timestamp(timestamp), restore_info=restore_info, + version_retention_period="1d", + earliest_version_time=_datetime_to_pb_timestamp(timestamp), + encryption_config=encryption_config, + encryption_info=encryption_info, + default_leader=default_leader, + reconciling=True, + enable_drop_protection=True, ) api.get_database.return_value = db_pb instance = _Instance(self.INSTANCE_NAME, client=client) @@ -817,23 +912,43 @@ def test_reload_success(self): database = self._make_one(self.DATABASE_ID, instance, pool=pool) database.reload() - self.assertEqual(database._state, enums.Database.State.READY) + self.assertEqual(database._state, Database.State.READY) self.assertEqual(database._create_time, timestamp) self.assertEqual(database._restore_info, restore_info) + self.assertEqual(database._version_retention_period, "1d") + self.assertEqual(database._earliest_version_time, timestamp) self.assertEqual(database._ddl_statements, tuple(DDL_STATEMENTS)) + self.assertEqual(database._encryption_config, encryption_config) + self.assertEqual(database._encryption_info, encryption_info) + self.assertEqual(database._default_leader, default_leader) + self.assertEqual(database._reconciling, True) + self.assertEqual(database._enable_drop_protection, True) api.get_database_ddl.assert_called_once_with( - self.DATABASE_NAME, - metadata=[("google-cloud-resource-prefix", database.name)], + database=self.DATABASE_NAME, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) api.get_database.assert_called_once_with( - self.DATABASE_NAME, - metadata=[("google-cloud-resource-prefix", database.name)], + name=self.DATABASE_NAME, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], ) def test_update_ddl_grpc_error(self): from google.api_core.exceptions import Unknown from tests._fixtures import DDL_STATEMENTS + from google.cloud.spanner_admin_database_v1 import UpdateDatabaseDdlRequest client = _Client() api = client.database_admin_api = self._make_database_admin_api() @@ -845,16 +960,27 @@ def test_update_ddl_grpc_error(self): with self.assertRaises(Unknown): database.update_ddl(DDL_STATEMENTS) + expected_request = UpdateDatabaseDdlRequest( + database=self.DATABASE_NAME, + statements=DDL_STATEMENTS, + operation_id="", + ) + api.update_database_ddl.assert_called_once_with( - self.DATABASE_NAME, - DDL_STATEMENTS, - "", - 
metadata=[("google-cloud-resource-prefix", database.name)], + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) def test_update_ddl_not_found(self): from google.cloud.exceptions import NotFound from tests._fixtures import DDL_STATEMENTS + from google.cloud.spanner_admin_database_v1 import UpdateDatabaseDdlRequest client = _Client() api = client.database_admin_api = self._make_database_admin_api() @@ -866,15 +992,26 @@ def test_update_ddl_not_found(self): with self.assertRaises(NotFound): database.update_ddl(DDL_STATEMENTS) + expected_request = UpdateDatabaseDdlRequest( + database=self.DATABASE_NAME, + statements=DDL_STATEMENTS, + operation_id="", + ) + api.update_database_ddl.assert_called_once_with( - self.DATABASE_NAME, - DDL_STATEMENTS, - "", - metadata=[("google-cloud-resource-prefix", database.name)], + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) def test_update_ddl(self): from tests._fixtures import DDL_STATEMENTS + from google.cloud.spanner_admin_database_v1 import UpdateDatabaseDdlRequest op_future = object() client = _Client() @@ -888,15 +1025,26 @@ def test_update_ddl(self): self.assertIs(future, op_future) + expected_request = UpdateDatabaseDdlRequest( + database=self.DATABASE_NAME, + statements=DDL_STATEMENTS, + operation_id="", + ) + api.update_database_ddl.assert_called_once_with( - self.DATABASE_NAME, - DDL_STATEMENTS, - "", - metadata=[("google-cloud-resource-prefix", database.name)], + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) def test_update_ddl_w_operation_id(self): from tests._fixtures import DDL_STATEMENTS + from google.cloud.spanner_admin_database_v1 import UpdateDatabaseDdlRequest op_future = object() client = _Client() @@ -910,11 +1058,87 @@ def test_update_ddl_w_operation_id(self): self.assertIs(future, op_future) + expected_request = UpdateDatabaseDdlRequest( + database=self.DATABASE_NAME, + statements=DDL_STATEMENTS, + operation_id="someOperationId", + ) + + api.update_database_ddl.assert_called_once_with( + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_update_success(self): + op_future = object() + client = _Client() + api = client.database_admin_api = self._make_database_admin_api() + api.update_database.return_value = op_future + + instance = _Instance(self.INSTANCE_NAME, client=client) + pool = _Pool() + database = self._make_one( + self.DATABASE_ID, instance, enable_drop_protection=True, pool=pool + ) + + future = database.update(["enable_drop_protection"]) + + self.assertIs(future, op_future) + + expected_database = DatabasePB(name=database.name, enable_drop_protection=True) + + field_mask = FieldMask(paths=["enable_drop_protection"]) + + api.update_database.assert_called_once_with( + database=expected_database, + update_mask=field_mask, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + 
f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_update_ddl_w_proto_descriptors(self): + from tests._fixtures import DDL_STATEMENTS + from google.cloud.spanner_admin_database_v1 import UpdateDatabaseDdlRequest + + op_future = object() + client = _Client() + api = client.database_admin_api = self._make_database_admin_api() + api.update_database_ddl.return_value = op_future + instance = _Instance(self.INSTANCE_NAME, client=client) + pool = _Pool() + database = self._make_one(self.DATABASE_ID, instance, pool=pool) + + future = database.update_ddl(DDL_STATEMENTS, proto_descriptors=b"") + + self.assertIs(future, op_future) + + expected_request = UpdateDatabaseDdlRequest( + database=self.DATABASE_NAME, + statements=DDL_STATEMENTS, + operation_id="", + proto_descriptors=b"", + ) + api.update_database_ddl.assert_called_once_with( - self.DATABASE_NAME, - DDL_STATEMENTS, - "someOperationId", - metadata=[("google-cloud-resource-prefix", database.name)], + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) def test_drop_grpc_error(self): @@ -931,8 +1155,14 @@ def test_drop_grpc_error(self): database.drop() api.drop_database.assert_called_once_with( - self.DATABASE_NAME, - metadata=[("google-cloud-resource-prefix", database.name)], + database=self.DATABASE_NAME, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) def test_drop_not_found(self): @@ -949,8 +1179,14 @@ def test_drop_not_found(self): database.drop() api.drop_database.assert_called_once_with( - self.DATABASE_NAME, - metadata=[("google-cloud-resource-prefix", database.name)], + database=self.DATABASE_NAME, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) def test_drop_success(self): @@ -966,19 +1202,35 @@ def test_drop_success(self): database.drop() api.drop_database.assert_called_once_with( - self.DATABASE_NAME, - metadata=[("google-cloud-resource-prefix", database.name)], + database=self.DATABASE_NAME, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) def _execute_partitioned_dml_helper( - self, dml, params=None, param_types=None, query_options=None + self, + dml, + params=None, + param_types=None, + query_options=None, + request_options=None, + retried=False, + exclude_txn_from_change_streams=False, ): + import os + from google.api_core.exceptions import Aborted + from google.api_core.retry import Retry from google.protobuf.struct_pb2 import Struct - from google.cloud.spanner_v1.proto.result_set_pb2 import ( + from google.cloud.spanner_v1 import ( PartialResultSet, ResultSetStats, ) - from google.cloud.spanner_v1.proto.transaction_pb2 import ( + from google.cloud.spanner_v1 import ( Transaction as TransactionPB, TransactionSelector, TransactionOptions, @@ -987,6 +1239,11 @@ def _execute_partitioned_dml_helper( _make_value_pb, _merge_query_options, ) + from google.cloud.spanner_v1 import ExecuteSqlRequest + + import collections + + MethodConfig = collections.namedtuple("MethodConfig", 
["retry"]) transaction_pb = TransactionPB(id=self.TRANSACTION_ID) @@ -1000,32 +1257,117 @@ def _execute_partitioned_dml_helper( session = _Session() pool.put(session) database = self._make_one(self.DATABASE_ID, instance, pool=pool) + + multiplexed_partitioned_enabled = ( + os.environ.get( + "GOOGLE_CLOUD_SPANNER_MULTIPLEXED_SESSIONS_PARTITIONED_OPS", "true" + ).lower() + != "false" + ) + + if multiplexed_partitioned_enabled: + # When multiplexed sessions are enabled, create a mock multiplexed session + # that the sessions manager will return + multiplexed_session = _Session() + multiplexed_session.name = ( + self.SESSION_NAME + ) # Use the expected session name + multiplexed_session.is_multiplexed = True + # Configure the sessions manager to return the multiplexed session + database._sessions_manager.get_session = mock.Mock( + return_value=multiplexed_session + ) + expected_session = multiplexed_session + else: + # When multiplexed sessions are disabled, use the regular pool session + expected_session = session + api = database._spanner_api = self._make_spanner_api() - api.begin_transaction.return_value = transaction_pb - api.execute_streaming_sql.return_value = iterator + api._method_configs = {"ExecuteStreamingSql": MethodConfig(retry=Retry())} + if retried: + retry_transaction_pb = TransactionPB(id=self.RETRY_TRANSACTION_ID) + api.begin_transaction.side_effect = [transaction_pb, retry_transaction_pb] + api.execute_streaming_sql.side_effect = [Aborted("test"), iterator] + else: + api.begin_transaction.return_value = transaction_pb + api.execute_streaming_sql.return_value = iterator row_count = database.execute_partitioned_dml( - dml, params, param_types, query_options + dml, + params, + param_types, + query_options, + request_options, + exclude_txn_from_change_streams, ) self.assertEqual(row_count, 2) txn_options = TransactionOptions( - partitioned_dml=TransactionOptions.PartitionedDml() - ) - - api.begin_transaction.assert_called_once_with( - session.name, - txn_options, - metadata=[("google-cloud-resource-prefix", database.name)], - ) + partitioned_dml=TransactionOptions.PartitionedDml(), + exclude_txn_from_change_streams=exclude_txn_from_change_streams, + ) + + if retried: + api.begin_transaction.assert_called_with( + session=expected_session.name, + options=txn_options, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.3.1", + ), + ], + ) + self.assertEqual(api.begin_transaction.call_count, 2) + api.begin_transaction.assert_called_with( + session=expected_session.name, + options=txn_options, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + # Please note that this try was by an abort and not from service unavailable. 
+ f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.3.1", + ), + ], + ) + else: + api.begin_transaction.assert_called_with( + session=expected_session.name, + options=txn_options, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + self.assertEqual(api.begin_transaction.call_count, 1) + api.begin_transaction.assert_called_with( + session=expected_session.name, + options=txn_options, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) if params: expected_params = Struct( fields={key: _make_value_pb(value) for (key, value) in params.items()} ) else: - expected_params = None + expected_params = {} expected_transaction = TransactionSelector(id=self.TRANSACTION_ID) expected_query_options = client._query_options @@ -1034,39 +1376,130 @@ def _execute_partitioned_dml_helper( expected_query_options, query_options ) - api.execute_streaming_sql.assert_called_once_with( - self.SESSION_NAME, - dml, + if not request_options: + expected_request_options = RequestOptions() + else: + expected_request_options = RequestOptions(request_options) + expected_request_options.transaction_tag = None + expected_request = ExecuteSqlRequest( + session=self.SESSION_NAME, + sql=dml, transaction=expected_transaction, params=expected_params, param_types=param_types, query_options=expected_query_options, - metadata=[("google-cloud-resource-prefix", database.name)], + request_options=expected_request_options, ) + if retried: + expected_retry_transaction = TransactionSelector( + id=self.RETRY_TRANSACTION_ID + ) + expected_request_with_retry = ExecuteSqlRequest( + session=self.SESSION_NAME, + sql=dml, + transaction=expected_retry_transaction, + params=expected_params, + param_types=param_types, + query_options=expected_query_options, + request_options=expected_request_options, + ) + + self.assertEqual( + api.execute_streaming_sql.call_args_list, + [ + mock.call( + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], + ), + mock.call( + request=expected_request_with_retry, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.4.1", + ), + ], + ), + ], + ) + self.assertEqual(api.execute_streaming_sql.call_count, 2) + else: + api.execute_streaming_sql.assert_any_call( + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], + ) + self.assertEqual(api.execute_streaming_sql.call_count, 1) + + # Verify that the correct session type was used based on environment + if multiplexed_partitioned_enabled: + # Verify that sessions_manager.get_session was called with PARTITIONED transaction type + database._sessions_manager.get_session.assert_called_with( + 
TransactionType.PARTITIONED + ) + # If multiplexed sessions are not enabled, the regular pool session should be used + def test_execute_partitioned_dml_wo_params(self): self._execute_partitioned_dml_helper(dml=DML_WO_PARAM) - def test_execute_partitioned_dml_w_params_wo_param_types(self): - with self.assertRaises(ValueError): - self._execute_partitioned_dml_helper(dml=DML_W_PARAM, params=PARAMS) - def test_execute_partitioned_dml_w_params_and_param_types(self): self._execute_partitioned_dml_helper( dml=DML_W_PARAM, params=PARAMS, param_types=PARAM_TYPES ) def test_execute_partitioned_dml_w_query_options(self): - from google.cloud.spanner_v1.proto.spanner_pb2 import ExecuteSqlRequest + from google.cloud.spanner_v1 import ExecuteSqlRequest self._execute_partitioned_dml_helper( dml=DML_W_PARAM, query_options=ExecuteSqlRequest.QueryOptions(optimizer_version="3"), ) - def test_session_factory_defaults(self): - from google.cloud.spanner_v1.session import Session + def test_execute_partitioned_dml_w_request_options(self): + self._execute_partitioned_dml_helper( + dml=DML_W_PARAM, + request_options=RequestOptions( + priority=RequestOptions.Priority.PRIORITY_MEDIUM + ), + ) + + def test_execute_partitioned_dml_w_trx_tag_ignored(self): + self._execute_partitioned_dml_helper( + dml=DML_W_PARAM, + request_options=RequestOptions(transaction_tag="trx-tag"), + ) + + def test_execute_partitioned_dml_w_req_tag_used(self): + self._execute_partitioned_dml_helper( + dml=DML_W_PARAM, + request_options=RequestOptions(request_tag="req-tag"), + ) + + def test_execute_partitioned_dml_wo_params_retry_aborted(self): + self._execute_partitioned_dml_helper(dml=DML_WO_PARAM, retried=True) + + def test_execute_partitioned_dml_w_exclude_txn_from_change_streams(self): + self._execute_partitioned_dml_helper( + dml=DML_WO_PARAM, exclude_txn_from_change_streams=True + ) + def test_session_factory_defaults(self): client = _Client() instance = _Instance(self.INSTANCE_NAME, client=client) pool = _Pool() @@ -1080,8 +1513,6 @@ def test_session_factory_defaults(self): self.assertEqual(session.labels, {}) def test_session_factory_w_labels(self): - from google.cloud.spanner_v1.session import Session - client = _Client() instance = _Instance(self.INSTANCE_NAME, client=client) pool = _Pool() @@ -1097,6 +1528,7 @@ def test_session_factory_w_labels(self): def test_snapshot_defaults(self): from google.cloud.spanner_v1.database import SnapshotCheckout + from google.cloud.spanner_v1.snapshot import Snapshot client = _Client() instance = _Instance(self.INSTANCE_NAME, client=client) @@ -1104,16 +1536,47 @@ def test_snapshot_defaults(self): session = _Session() pool.put(session) database = self._make_one(self.DATABASE_ID, instance, pool=pool) + # Mock the spanner_api to avoid creating a real SpannerClient + database._spanner_api = instance._client._spanner_api + + # Check if multiplexed sessions are enabled for read operations + multiplexed_enabled = is_multiplexed_enabled(TransactionType.READ_ONLY) + + if multiplexed_enabled: + # When multiplexed sessions are enabled, configure the sessions manager + # to return a multiplexed session for read operations + multiplexed_session = _Session() + multiplexed_session.name = self.SESSION_NAME + multiplexed_session.is_multiplexed = True + # Override the side_effect to return the multiplexed session + database._sessions_manager.get_session = mock.Mock( + return_value=multiplexed_session + ) + expected_session = multiplexed_session + else: + expected_session = session checkout = database.snapshot() 
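# `database.snapshot()` returns a `SnapshotCheckout`, a context manager that
# obtains a session (from the pool, or a multiplexed session when enabled)
# and yields a `Snapshot` bound to it, returning the session on exit. Typical
# application usage, sketched for illustration only:
#
#     with database.snapshot(multi_use=True) as snapshot:
#         rows = list(snapshot.execute_sql("SELECT 1"))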
self.assertIsInstance(checkout, SnapshotCheckout) self.assertIs(checkout._database, database) self.assertEqual(checkout._kw, {}) + with checkout as snapshot: + if not multiplexed_enabled: + self.assertIsNone(pool._session) + self.assertIsInstance(snapshot, Snapshot) + self.assertIs(snapshot._session, expected_session) + self.assertTrue(snapshot._strong) + self.assertFalse(snapshot._multi_use) + + if not multiplexed_enabled: + self.assertIs(pool._session, session) + def test_snapshot_w_read_timestamp_and_multi_use(self): import datetime from google.cloud._helpers import UTC from google.cloud.spanner_v1.database import SnapshotCheckout + from google.cloud.spanner_v1.snapshot import Snapshot now = datetime.datetime.utcnow().replace(tzinfo=UTC) client = _Client() @@ -1123,12 +1586,40 @@ def test_snapshot_w_read_timestamp_and_multi_use(self): pool.put(session) database = self._make_one(self.DATABASE_ID, instance, pool=pool) + # Check if multiplexed sessions are enabled for read operations + multiplexed_enabled = is_multiplexed_enabled(TransactionType.READ_ONLY) + + if multiplexed_enabled: + # When multiplexed sessions are enabled, configure the sessions manager + # to return a multiplexed session for read operations + multiplexed_session = _Session() + multiplexed_session.name = self.SESSION_NAME + multiplexed_session.is_multiplexed = True + # Override the side_effect to return the multiplexed session + database._sessions_manager.get_session = mock.Mock( + return_value=multiplexed_session + ) + expected_session = multiplexed_session + else: + expected_session = session + checkout = database.snapshot(read_timestamp=now, multi_use=True) self.assertIsInstance(checkout, SnapshotCheckout) self.assertIs(checkout._database, database) self.assertEqual(checkout._kw, {"read_timestamp": now, "multi_use": True}) + with checkout as snapshot: + if not multiplexed_enabled: + self.assertIsNone(pool._session) + self.assertIsInstance(snapshot, Snapshot) + self.assertIs(snapshot._session, expected_session) + self.assertEqual(snapshot._read_timestamp, now) + self.assertTrue(snapshot._multi_use) + + if not multiplexed_enabled: + self.assertIs(pool._session, session) + def test_batch(self): from google.cloud.spanner_v1.database import BatchCheckout @@ -1143,10 +1634,25 @@ def test_batch(self): self.assertIsInstance(checkout, BatchCheckout) self.assertIs(checkout._database, database) + def test_mutation_groups(self): + from google.cloud.spanner_v1.database import MutationGroupsCheckout + + client = _Client() + instance = _Instance(self.INSTANCE_NAME, client=client) + pool = _Pool() + session = _Session() + pool.put(session) + database = self._make_one(self.DATABASE_ID, instance, pool=pool) + + checkout = database.mutation_groups() + self.assertIsInstance(checkout, MutationGroupsCheckout) + self.assertIs(checkout._database, database) + def test_batch_snapshot(self): from google.cloud.spanner_v1.database import BatchSnapshot - database = self._make_one(self.DATABASE_ID, instance=object(), pool=_Pool()) + instance = _Instance(self.INSTANCE_NAME) + database = self._make_one(self.DATABASE_ID, instance=instance, pool=_Pool()) batch_txn = database.batch_snapshot() self.assertIsInstance(batch_txn, BatchSnapshot) @@ -1157,7 +1663,8 @@ def test_batch_snapshot(self): def test_batch_snapshot_w_read_timestamp(self): from google.cloud.spanner_v1.database import BatchSnapshot - database = self._make_one(self.DATABASE_ID, instance=object(), pool=_Pool()) + instance = _Instance(self.INSTANCE_NAME) + database = 
self._make_one(self.DATABASE_ID, instance=instance, pool=_Pool()) timestamp = self._make_timestamp() batch_txn = database.batch_snapshot(read_timestamp=timestamp) @@ -1169,7 +1676,8 @@ def test_batch_snapshot_w_read_timestamp(self): def test_batch_snapshot_w_exact_staleness(self): from google.cloud.spanner_v1.database import BatchSnapshot - database = self._make_one(self.DATABASE_ID, instance=object(), pool=_Pool()) + instance = _Instance(self.INSTANCE_NAME) + database = self._make_one(self.DATABASE_ID, instance=instance, pool=_Pool()) duration = self._make_duration() batch_txn = database.batch_snapshot(exact_staleness=duration) @@ -1182,20 +1690,26 @@ def test_run_in_transaction_wo_args(self): import datetime NOW = datetime.datetime.now() - client = _Client() + client = _Client(observability_options=dict(enable_end_to_end_tracing=True)) instance = _Instance(self.INSTANCE_NAME, client=client) pool = _Pool() session = _Session() pool.put(session) session._committed = NOW database = self._make_one(self.DATABASE_ID, instance, pool=pool) + # Mock the spanner_api to avoid creating a real SpannerClient + database._spanner_api = instance._client._spanner_api - _unit_of_work = object() + def _unit_of_work(txn): + return NOW - committed = database.run_in_transaction(_unit_of_work) + # Mock the transaction commit method to return NOW + with mock.patch( + "google.cloud.spanner_v1.transaction.Transaction.commit", return_value=NOW + ): + committed = database.run_in_transaction(_unit_of_work) - self.assertEqual(committed, NOW) - self.assertEqual(session._retried, (_unit_of_work, (), {})) + self.assertEqual(committed, NOW) def test_run_in_transaction_w_args(self): import datetime @@ -1210,13 +1724,19 @@ def test_run_in_transaction_w_args(self): pool.put(session) session._committed = NOW database = self._make_one(self.DATABASE_ID, instance, pool=pool) + # Mock the spanner_api to avoid creating a real SpannerClient + database._spanner_api = instance._client._spanner_api - _unit_of_work = object() + def _unit_of_work(txn, *args, **kwargs): + return NOW - committed = database.run_in_transaction(_unit_of_work, SINCE, until=UNTIL) + # Mock the transaction commit method to return NOW + with mock.patch( + "google.cloud.spanner_v1.transaction.Transaction.commit", return_value=NOW + ): + committed = database.run_in_transaction(_unit_of_work, SINCE, until=UNTIL) - self.assertEqual(committed, NOW) - self.assertEqual(session._retried, (_unit_of_work, (SINCE,), {"until": UNTIL})) + self.assertEqual(committed, NOW) def test_run_in_transaction_nested(self): from datetime import datetime @@ -1228,12 +1748,14 @@ def test_run_in_transaction_nested(self): session._committed = datetime.now() pool.put(session) database = self._make_one(self.DATABASE_ID, instance, pool=pool) + # Mock the spanner_api to avoid creating a real SpannerClient + database._spanner_api = instance._client._spanner_api # Define the inner function. inner = mock.Mock(spec=()) # Define the nested transaction. - def nested_unit_of_work(): + def nested_unit_of_work(txn): return database.run_in_transaction(inner) # Attempting to run this transaction should raise RuntimeError. 
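# Editor's note: a hedged usage sketch (not part of the diff) of the
# unit-of-work contract that the run_in_transaction tests above exercise.
# The callback receives the Transaction as its first positional argument,
# extra args/kwargs passed to run_in_transaction are forwarded to it, the
# call returns the commit timestamp, and starting a nested
# run_in_transaction from inside a unit of work raises RuntimeError.
# "database" is assumed to be an existing Database; the table and column
# names below are hypothetical.
from google.cloud.spanner_v1 import param_types

def transfer_budget(transaction, amount):
    # Runs inside the transaction managed by run_in_transaction; the whole
    # callback is retried if the transaction aborts.
    transaction.execute_update(
        "UPDATE Budgets SET total = total - @amount WHERE id = 1",
        params={"amount": amount},
        param_types={"amount": param_types.INT64},
    )

committed = database.run_in_transaction(transfer_budget, 5000)  # commit timestamp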
@@ -1250,6 +1772,7 @@ def test_restore_backup_unspecified(self): def test_restore_grpc_error(self): from google.api_core.exceptions import Unknown + from google.cloud.spanner_admin_database_v1 import RestoreDatabaseRequest client = _Client() api = client.database_admin_api = self._make_database_admin_api() @@ -1262,15 +1785,26 @@ def test_restore_grpc_error(self): with self.assertRaises(Unknown): database.restore(backup) - api.restore_database.assert_called_once_with( + expected_request = RestoreDatabaseRequest( parent=self.INSTANCE_NAME, database_id=self.DATABASE_ID, backup=self.BACKUP_NAME, - metadata=[("google-cloud-resource-prefix", database.name)], + ) + + api.restore_database.assert_called_once_with( + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) def test_restore_not_found(self): from google.api_core.exceptions import NotFound + from google.cloud.spanner_admin_database_v1 import RestoreDatabaseRequest client = _Client() api = client.database_admin_api = self._make_database_admin_api() @@ -1283,60 +1817,159 @@ def test_restore_not_found(self): with self.assertRaises(NotFound): database.restore(backup) - api.restore_database.assert_called_once_with( + expected_request = RestoreDatabaseRequest( parent=self.INSTANCE_NAME, database_id=self.DATABASE_ID, backup=self.BACKUP_NAME, - metadata=[("google-cloud-resource-prefix", database.name)], + ) + + api.restore_database.assert_called_once_with( + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) def test_restore_success(self): + from google.cloud.spanner_admin_database_v1 import ( + RestoreDatabaseEncryptionConfig, + ) + from google.cloud.spanner_admin_database_v1 import RestoreDatabaseRequest + op_future = object() client = _Client() api = client.database_admin_api = self._make_database_admin_api() api.restore_database.return_value = op_future instance = _Instance(self.INSTANCE_NAME, client=client) pool = _Pool() - database = self._make_one(self.DATABASE_ID, instance, pool=pool) + encryption_config = RestoreDatabaseEncryptionConfig( + encryption_type=RestoreDatabaseEncryptionConfig.EncryptionType.CUSTOMER_MANAGED_ENCRYPTION, + kms_key_name="kms_key_name", + ) + database = self._make_one( + self.DATABASE_ID, instance, pool=pool, encryption_config=encryption_config + ) backup = _Backup(self.BACKUP_NAME) future = database.restore(backup) self.assertIs(future, op_future) + expected_request = RestoreDatabaseRequest( + parent=self.INSTANCE_NAME, + database_id=self.DATABASE_ID, + backup=self.BACKUP_NAME, + encryption_config=encryption_config, + ) + api.restore_database.assert_called_once_with( + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_restore_success_w_encryption_config_dict(self): + from google.cloud.spanner_admin_database_v1 import ( + RestoreDatabaseEncryptionConfig, + ) + from google.cloud.spanner_admin_database_v1 import RestoreDatabaseRequest + + op_future = object() + client = _Client() + api = client.database_admin_api = self._make_database_admin_api() + api.restore_database.return_value = op_future + instance 
= _Instance(self.INSTANCE_NAME, client=client) + pool = _Pool() + encryption_config = { + "encryption_type": RestoreDatabaseEncryptionConfig.EncryptionType.CUSTOMER_MANAGED_ENCRYPTION, + "kms_key_name": "kms_key_name", + } + database = self._make_one( + self.DATABASE_ID, instance, pool=pool, encryption_config=encryption_config + ) + backup = _Backup(self.BACKUP_NAME) + + future = database.restore(backup) + + self.assertIs(future, op_future) + + expected_encryption_config = RestoreDatabaseEncryptionConfig( + encryption_type=RestoreDatabaseEncryptionConfig.EncryptionType.CUSTOMER_MANAGED_ENCRYPTION, + kms_key_name="kms_key_name", + ) + expected_request = RestoreDatabaseRequest( parent=self.INSTANCE_NAME, database_id=self.DATABASE_ID, backup=self.BACKUP_NAME, - metadata=[("google-cloud-resource-prefix", database.name)], + encryption_config=expected_encryption_config, + ) + + api.restore_database.assert_called_once_with( + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_restore_w_invalid_encryption_config_dict(self): + from google.cloud.spanner_admin_database_v1 import ( + RestoreDatabaseEncryptionConfig, + ) + + client = _Client() + instance = _Instance(self.INSTANCE_NAME, client=client) + pool = _Pool() + encryption_config = { + "encryption_type": RestoreDatabaseEncryptionConfig.EncryptionType.GOOGLE_DEFAULT_ENCRYPTION, + "kms_key_name": "kms_key_name", + } + database = self._make_one( + self.DATABASE_ID, instance, pool=pool, encryption_config=encryption_config ) + backup = _Backup(self.BACKUP_NAME) + + with self.assertRaises(ValueError): + database.restore(backup) def test_is_ready(self): - from google.cloud.spanner_admin_database_v1.gapic import enums + from google.cloud.spanner_admin_database_v1 import Database client = _Client() instance = _Instance(self.INSTANCE_NAME, client=client) pool = _Pool() database = self._make_one(self.DATABASE_ID, instance, pool=pool) - database._state = enums.Database.State.READY + database._state = Database.State.READY self.assertTrue(database.is_ready()) - database._state = enums.Database.State.READY_OPTIMIZING + database._state = Database.State.READY_OPTIMIZING self.assertTrue(database.is_ready()) - database._state = enums.Database.State.CREATING + database._state = Database.State.CREATING self.assertFalse(database.is_ready()) def test_is_optimized(self): - from google.cloud.spanner_admin_database_v1.gapic import enums + from google.cloud.spanner_admin_database_v1 import Database client = _Client() instance = _Instance(self.INSTANCE_NAME, client=client) pool = _Pool() database = self._make_one(self.DATABASE_ID, instance, pool=pool) - database._state = enums.Database.State.READY + database._state = Database.State.READY self.assertTrue(database.is_optimized()) - database._state = enums.Database.State.READY_OPTIMIZING + database._state = Database.State.READY_OPTIMIZING self.assertFalse(database.is_optimized()) - database._state = enums.Database.State.CREATING + database._state = Database.State.CREATING self.assertFalse(database.is_optimized()) def test_list_database_operations_grpc_error(self): @@ -1415,31 +2048,110 @@ def test_list_database_operations_explicit_filter(self): filter_=expected_filter_, page_size=page_size ) + def test_list_database_roles_grpc_error(self): + from google.api_core.exceptions import Unknown + from google.cloud.spanner_admin_database_v1 import 
ListDatabaseRolesRequest -class TestBatchCheckout(_BaseTest): - def _get_target_class(self): - from google.cloud.spanner_v1.database import BatchCheckout - - return BatchCheckout + client = _Client() + api = client.database_admin_api = self._make_database_admin_api() + api.list_database_roles.side_effect = Unknown("testing") + instance = _Instance(self.INSTANCE_NAME, client=client) + pool = _Pool() + database = self._make_one(self.DATABASE_ID, instance, pool=pool) - @staticmethod - def _make_spanner_client(): - from google.cloud.spanner_v1.gapic.spanner_client import SpannerClient + with self.assertRaises(Unknown): + database.list_database_roles() - return mock.create_autospec(SpannerClient) + expected_request = ListDatabaseRolesRequest( + parent=database.name, + ) - def test_ctor(self): - database = _Database(self.DATABASE_NAME) - checkout = self._make_one(database) - self.assertIs(checkout._database, database) + api.list_database_roles.assert_called_once_with( + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) - def test_context_mgr_success(self): - import datetime - from google.cloud.spanner_v1.proto.spanner_pb2 import CommitResponse - from google.cloud.spanner_v1.proto.transaction_pb2 import TransactionOptions - from google.cloud._helpers import UTC - from google.cloud._helpers import _datetime_to_pb_timestamp - from google.cloud.spanner_v1.batch import Batch + def test_list_database_roles_defaults(self): + from google.cloud.spanner_admin_database_v1 import ListDatabaseRolesRequest + + client = _Client() + api = client.database_admin_api = self._make_database_admin_api() + instance = _Instance(self.INSTANCE_NAME, client=client) + instance.list_database_roles = mock.MagicMock(return_value=[]) + pool = _Pool() + database = self._make_one(self.DATABASE_ID, instance, pool=pool) + + resp = database.list_database_roles() + + expected_request = ListDatabaseRolesRequest( + parent=database.name, + ) + + api.list_database_roles.assert_called_once_with( + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + self.assertIsNotNone(resp) + + def test_table_factory_defaults(self): + from google.cloud.spanner_v1.table import Table + + client = _Client() + instance = _Instance(self.INSTANCE_NAME, client=client) + pool = _Pool() + database = self._make_one(self.DATABASE_ID, instance, pool=pool) + database._database_dialect = DatabaseDialect.GOOGLE_STANDARD_SQL + my_table = database.table("my_table") + self.assertIsInstance(my_table, Table) + self.assertIs(my_table._database, database) + self.assertEqual(my_table.table_id, "my_table") + + def test_list_tables(self): + client = _Client() + instance = _Instance(self.INSTANCE_NAME, client=client) + pool = _Pool() + database = self._make_one(self.DATABASE_ID, instance, pool=pool) + tables = database.list_tables() + self.assertIsNotNone(tables) + + +class TestBatchCheckout(_BaseTest): + def _get_target_class(self): + from google.cloud.spanner_v1.database import BatchCheckout + + return BatchCheckout + + @staticmethod + def _make_spanner_client(): + from google.cloud.spanner_v1 import SpannerClient + + return mock.create_autospec(SpannerClient) + + def test_ctor(self): + database = _Database(self.DATABASE_NAME) + checkout = 
self._make_one(database) + self.assertIs(checkout._database, database) + + def test_context_mgr_success(self): + import datetime + from google.cloud.spanner_v1 import CommitRequest + from google.cloud.spanner_v1 import CommitResponse + from google.cloud.spanner_v1 import TransactionOptions + from google.cloud._helpers import UTC + from google.cloud._helpers import _datetime_to_pb_timestamp + from google.cloud.spanner_v1.batch import Batch now = datetime.datetime.utcnow().replace(tzinfo=UTC) now_pb = _datetime_to_pb_timestamp(now) @@ -1450,6 +2162,59 @@ def test_context_mgr_success(self): pool = database._pool = _Pool() session = _Session(database) pool.put(session) + checkout = self._make_one( + database, request_options={"transaction_tag": self.TRANSACTION_TAG} + ) + + with checkout as batch: + self.assertIsNone(pool._session) + self.assertIsInstance(batch, Batch) + self.assertIs(batch._session, session) + + self.assertIs(pool._session, session) + self.assertEqual(batch.committed, now) + self.assertEqual(batch.transaction_tag, self.TRANSACTION_TAG) + + expected_txn_options = TransactionOptions(read_write={}) + + request = CommitRequest( + session=self.SESSION_NAME, + mutations=[], + single_use_transaction=expected_txn_options, + request_options=RequestOptions(transaction_tag=self.TRANSACTION_TAG), + ) + api.commit.assert_called_once_with( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_context_mgr_w_commit_stats_success(self): + import datetime + from google.cloud.spanner_v1 import CommitRequest + from google.cloud.spanner_v1 import CommitResponse + from google.cloud.spanner_v1 import TransactionOptions + from google.cloud._helpers import UTC + from google.cloud._helpers import _datetime_to_pb_timestamp + from google.cloud.spanner_v1.batch import Batch + + now = datetime.datetime.utcnow().replace(tzinfo=UTC) + now_pb = _datetime_to_pb_timestamp(now) + commit_stats = CommitResponse.CommitStats(mutation_count=4) + response = CommitResponse(commit_timestamp=now_pb, commit_stats=commit_stats) + database = _Database(self.DATABASE_NAME) + database.log_commit_stats = True + api = database.spanner_api = self._make_spanner_client() + api.commit.return_value = response + pool = database._pool = _Pool() + session = _Session(database) + pool.put(session) checkout = self._make_one(database) with checkout as batch: @@ -1462,12 +2227,75 @@ def test_context_mgr_success(self): expected_txn_options = TransactionOptions(read_write={}) + request = CommitRequest( + session=self.SESSION_NAME, + mutations=[], + single_use_transaction=expected_txn_options, + return_commit_stats=True, + request_options=RequestOptions(), + ) api.commit.assert_called_once_with( - self.SESSION_NAME, + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + database.logger.info.assert_called_once_with( + "CommitStats: mutation_count: 4\n", extra={"commit_stats": commit_stats} + ) + + def test_context_mgr_w_aborted_commit_status(self): + from google.api_core.exceptions import Aborted + from google.cloud.spanner_v1 import CommitRequest + from google.cloud.spanner_v1 import TransactionOptions + from 
google.cloud.spanner_v1.batch import Batch + + database = _Database(self.DATABASE_NAME) + database.log_commit_stats = True + api = database.spanner_api = self._make_spanner_client() + api.commit.side_effect = Aborted("aborted exception", errors=("Aborted error",)) + pool = database._pool = _Pool() + session = _Session(database) + pool.put(session) + checkout = self._make_one(database, timeout_secs=0.1, default_retry_delay=0) + + with self.assertRaises(Aborted): + with checkout as batch: + self.assertIsNone(pool._session) + self.assertIsInstance(batch, Batch) + self.assertIs(batch._session, session) + + self.assertIs(pool._session, session) + + expected_txn_options = TransactionOptions(read_write={}) + + request = CommitRequest( + session=self.SESSION_NAME, mutations=[], single_use_transaction=expected_txn_options, - metadata=[("google-cloud-resource-prefix", database.name)], + return_commit_stats=True, + request_options=RequestOptions(), ) + self.assertGreater(api.commit.call_count, 1) + api.commit.assert_any_call( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + database.logger.info.assert_not_called() def test_context_mgr_failure(self): from google.cloud.spanner_v1.batch import Batch @@ -1564,6 +2392,66 @@ class Testing(Exception): self.assertIs(pool._session, session) + def test_context_mgr_session_not_found_error(self): + from google.cloud.exceptions import NotFound + + database = _Database(self.DATABASE_NAME) + session = _Session(database, name="session-1") + session.exists = mock.MagicMock(return_value=False) + pool = database._pool = _Pool() + new_session = _Session(database, name="session-2") + new_session.create = mock.MagicMock(return_value=[]) + pool._new_session = mock.MagicMock(return_value=new_session) + + pool.put(session) + checkout = self._make_one(database) + + self.assertEqual(pool._session, session) + with self.assertRaises(NotFound): + with checkout as _: + raise NotFound("Session not found") + # Assert that session-1 was removed from pool and new session was added. + self.assertEqual(pool._session, new_session) + + def test_context_mgr_table_not_found_error(self): + from google.cloud.exceptions import NotFound + + database = _Database(self.DATABASE_NAME) + session = _Session(database, name="session-1") + session.exists = mock.MagicMock(return_value=True) + pool = database._pool = _Pool() + pool._new_session = mock.MagicMock(return_value=[]) + + pool.put(session) + checkout = self._make_one(database) + + self.assertEqual(pool._session, session) + with self.assertRaises(NotFound): + with checkout as _: + raise NotFound("Table not found") + # Assert that session-1 was not removed from pool. + self.assertEqual(pool._session, session) + pool._new_session.assert_not_called() + + def test_context_mgr_unknown_error(self): + database = _Database(self.DATABASE_NAME) + session = _Session(database) + pool = database._pool = _Pool() + pool._new_session = mock.MagicMock(return_value=[]) + pool.put(session) + checkout = self._make_one(database) + + class Testing(Exception): + pass + + self.assertEqual(pool._session, session) + with self.assertRaises(Testing): + with checkout as _: + raise Testing("Unknown error.") + # Assert that session-1 was not removed from pool. 
+ self.assertEqual(pool._session, session) + pool._new_session.assert_not_called() + class TestBatchSnapshot(_BaseTest): TABLE = "table_name" @@ -1584,8 +2472,6 @@ def _make_database(**kwargs): @staticmethod def _make_session(**kwargs): - from google.cloud.spanner_v1.session import Session - return mock.create_autospec(Session, instance=True, **kwargs) @staticmethod @@ -1642,20 +2528,22 @@ def test_ctor_w_exact_staleness(self): def test_from_dict(self): klass = self._get_target_class() database = self._make_database() - session = database.session.return_value = self._make_session() - snapshot = session.snapshot.return_value = self._make_snapshot() - api_repr = { - "session_id": self.SESSION_ID, - "transaction_id": self.TRANSACTION_ID, - } + api = database.spanner_api = build_spanner_api() + + batch_txn = klass.from_dict( + database, + { + "session_id": self.SESSION_ID, + "transaction_id": self.TRANSACTION_ID, + }, + ) - batch_txn = klass.from_dict(database, api_repr) self.assertIs(batch_txn._database, database) - self.assertIs(batch_txn._session, session) - self.assertEqual(session._session_id, self.SESSION_ID) - self.assertEqual(snapshot._transaction_id, self.TRANSACTION_ID) - snapshot.begin.assert_not_called() - self.assertIs(batch_txn._snapshot, snapshot) + self.assertEqual(batch_txn._session._session_id, self.SESSION_ID) + self.assertEqual(batch_txn._snapshot._transaction_id, self.TRANSACTION_ID) + + api.create_session.assert_not_called() + api.begin_transaction.assert_not_called() def test_to_dict(self): database = self._make_database() @@ -1677,10 +2565,15 @@ def test__get_session_already(self): def test__get_session_new(self): database = self._make_database() - session = database.session.return_value = self._make_session() + session = self._make_session() + # Configure sessions_manager to return the session for partition operations + database.sessions_manager.get_session.return_value = session batch_txn = self._make_one(database) self.assertIs(batch_txn._get_session(), session) - session.create.assert_called_once_with() + # Verify that sessions_manager.get_session was called with PARTITIONED transaction type + database.sessions_manager.get_session.assert_called_once_with( + TransactionType.PARTITIONED + ) def test__get_snapshot_already(self): database = self._make_database() @@ -1696,7 +2589,10 @@ def test__get_snapshot_new_wo_staleness(self): snapshot = session.snapshot.return_value = self._make_snapshot() self.assertIs(batch_txn._get_snapshot(), snapshot) session.snapshot.assert_called_once_with( - read_timestamp=None, exact_staleness=None, multi_use=True + read_timestamp=None, + exact_staleness=None, + multi_use=True, + transaction_id=None, ) snapshot.begin.assert_called_once_with() @@ -1708,7 +2604,10 @@ def test__get_snapshot_w_read_timestamp(self): snapshot = session.snapshot.return_value = self._make_snapshot() self.assertIs(batch_txn._get_snapshot(), snapshot) session.snapshot.assert_called_once_with( - read_timestamp=timestamp, exact_staleness=None, multi_use=True + read_timestamp=timestamp, + exact_staleness=None, + multi_use=True, + transaction_id=None, ) snapshot.begin.assert_called_once_with() @@ -1720,7 +2619,10 @@ def test__get_snapshot_w_exact_staleness(self): snapshot = session.snapshot.return_value = self._make_snapshot() self.assertIs(batch_txn._get_snapshot(), snapshot) session.snapshot.assert_called_once_with( - read_timestamp=None, exact_staleness=duration, multi_use=True + read_timestamp=None, + exact_staleness=duration, + multi_use=True, + 
transaction_id=None, ) snapshot.begin.assert_called_once_with() @@ -1771,6 +2673,51 @@ def test_generate_read_batches_w_max_partitions(self): "columns": self.COLUMNS, "keyset": {"all": True}, "index": "", + "data_boost_enabled": False, + "directed_read_options": None, + } + self.assertEqual(len(batches), len(self.TOKENS)) + for batch, token in zip(batches, self.TOKENS): + self.assertEqual(batch["partition"], token) + self.assertEqual(batch["read"], expected_read) + + snapshot.partition_read.assert_called_once_with( + table=self.TABLE, + columns=self.COLUMNS, + keyset=keyset, + index="", + partition_size_bytes=None, + max_partitions=max_partitions, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + ) + + def test_generate_read_batches_w_retry_and_timeout_params(self): + max_partitions = len(self.TOKENS) + keyset = self._make_keyset() + database = self._make_database() + batch_txn = self._make_one(database) + snapshot = batch_txn._snapshot = self._make_snapshot() + snapshot.partition_read.return_value = self.TOKENS + retry = Retry(deadline=60) + batches = list( + batch_txn.generate_read_batches( + self.TABLE, + self.COLUMNS, + keyset, + max_partitions=max_partitions, + retry=retry, + timeout=2.0, + ) + ) + + expected_read = { + "table": self.TABLE, + "columns": self.COLUMNS, + "keyset": {"all": True}, + "index": "", + "data_boost_enabled": False, + "directed_read_options": None, } self.assertEqual(len(batches), len(self.TOKENS)) for batch, token in zip(batches, self.TOKENS): @@ -1784,6 +2731,8 @@ def test_generate_read_batches_w_max_partitions(self): index="", partition_size_bytes=None, max_partitions=max_partitions, + retry=retry, + timeout=2.0, ) def test_generate_read_batches_w_index_w_partition_size_bytes(self): @@ -1809,6 +2758,8 @@ def test_generate_read_batches_w_index_w_partition_size_bytes(self): "columns": self.COLUMNS, "keyset": {"all": True}, "index": self.INDEX, + "data_boost_enabled": False, + "directed_read_options": None, } self.assertEqual(len(batches), len(self.TOKENS)) for batch, token in zip(batches, self.TOKENS): @@ -1822,6 +2773,91 @@ def test_generate_read_batches_w_index_w_partition_size_bytes(self): index=self.INDEX, partition_size_bytes=size, max_partitions=None, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + ) + + def test_generate_read_batches_w_data_boost_enabled(self): + data_boost_enabled = True + keyset = self._make_keyset() + database = self._make_database() + batch_txn = self._make_one(database) + snapshot = batch_txn._snapshot = self._make_snapshot() + snapshot.partition_read.return_value = self.TOKENS + + batches = list( + batch_txn.generate_read_batches( + self.TABLE, + self.COLUMNS, + keyset, + index=self.INDEX, + data_boost_enabled=data_boost_enabled, + ) + ) + + expected_read = { + "table": self.TABLE, + "columns": self.COLUMNS, + "keyset": {"all": True}, + "index": self.INDEX, + "data_boost_enabled": True, + "directed_read_options": None, + } + self.assertEqual(len(batches), len(self.TOKENS)) + for batch, token in zip(batches, self.TOKENS): + self.assertEqual(batch["partition"], token) + self.assertEqual(batch["read"], expected_read) + + snapshot.partition_read.assert_called_once_with( + table=self.TABLE, + columns=self.COLUMNS, + keyset=keyset, + index=self.INDEX, + partition_size_bytes=None, + max_partitions=None, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + ) + + def test_generate_read_batches_w_directed_read_options(self): + keyset = self._make_keyset() + database = 
self._make_database() + batch_txn = self._make_one(database) + snapshot = batch_txn._snapshot = self._make_snapshot() + snapshot.partition_read.return_value = self.TOKENS + + batches = list( + batch_txn.generate_read_batches( + self.TABLE, + self.COLUMNS, + keyset, + index=self.INDEX, + directed_read_options=DIRECTED_READ_OPTIONS, + ) + ) + + expected_read = { + "table": self.TABLE, + "columns": self.COLUMNS, + "keyset": {"all": True}, + "index": self.INDEX, + "data_boost_enabled": False, + "directed_read_options": DIRECTED_READ_OPTIONS, + } + self.assertEqual(len(batches), len(self.TOKENS)) + for batch, token in zip(batches, self.TOKENS): + self.assertEqual(batch["partition"], token) + self.assertEqual(batch["read"], expected_read) + + snapshot.partition_read.assert_called_once_with( + table=self.TABLE, + columns=self.COLUMNS, + keyset=keyset, + index=self.INDEX, + partition_size_bytes=None, + max_partitions=None, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, ) def test_process_read_batch(self): @@ -1851,23 +2887,61 @@ def test_process_read_batch(self): keyset=keyset, index=self.INDEX, partition=token, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, ) - def test_generate_query_batches_w_max_partitions(self): - sql = "SELECT COUNT(*) FROM table_name" - max_partitions = len(self.TOKENS) - client = _Client(self.PROJECT_ID) - instance = _Instance(self.INSTANCE_NAME, client=client) - database = _Database(self.DATABASE_NAME, instance=instance) - batch_txn = self._make_one(database) - snapshot = batch_txn._snapshot = self._make_snapshot() - snapshot.partition_query.return_value = self.TOKENS - - batches = list( + def test_process_read_batch_w_retry_timeout(self): + keyset = self._make_keyset() + token = b"TOKEN" + batch = { + "partition": token, + "read": { + "table": self.TABLE, + "columns": self.COLUMNS, + "keyset": {"all": True}, + "index": self.INDEX, + }, + } + database = self._make_database() + batch_txn = self._make_one(database) + snapshot = batch_txn._snapshot = self._make_snapshot() + expected = snapshot.read.return_value = object() + retry = Retry(deadline=60) + found = batch_txn.process_read_batch(batch, retry=retry, timeout=2.0) + + self.assertIs(found, expected) + + snapshot.read.assert_called_once_with( + table=self.TABLE, + columns=self.COLUMNS, + keyset=keyset, + index=self.INDEX, + partition=token, + retry=retry, + timeout=2.0, + ) + + def test_generate_query_batches_w_max_partitions(self): + sql = "SELECT COUNT(*) FROM table_name" + max_partitions = len(self.TOKENS) + client = _Client(self.PROJECT_ID) + instance = _Instance(self.INSTANCE_NAME, client=client) + database = _Database(self.DATABASE_NAME, instance=instance) + batch_txn = self._make_one(database) + snapshot = batch_txn._snapshot = self._make_snapshot() + snapshot.partition_query.return_value = self.TOKENS + + batches = list( batch_txn.generate_query_batches(sql, max_partitions=max_partitions) ) - expected_query = {"sql": sql, "query_options": client._query_options} + expected_query = { + "sql": sql, + "data_boost_enabled": False, + "query_options": client._query_options, + "directed_read_options": None, + } self.assertEqual(len(batches), len(self.TOKENS)) for batch, token in zip(batches, self.TOKENS): self.assertEqual(batch["partition"], token) @@ -1879,6 +2953,8 @@ def test_generate_query_batches_w_max_partitions(self): param_types=None, partition_size_bytes=None, max_partitions=max_partitions, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, ) def 
test_generate_query_batches_w_params_w_partition_size_bytes(self): @@ -1903,9 +2979,11 @@ def test_generate_query_batches_w_params_w_partition_size_bytes(self): expected_query = { "sql": sql, + "data_boost_enabled": False, "params": params, "param_types": param_types, "query_options": client._query_options, + "directed_read_options": None, } self.assertEqual(len(batches), len(self.TOKENS)) for batch, token in zip(batches, self.TOKENS): @@ -1918,6 +2996,124 @@ def test_generate_query_batches_w_params_w_partition_size_bytes(self): param_types=param_types, partition_size_bytes=size, max_partitions=None, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + ) + + def test_generate_query_batches_w_retry_and_timeout_params(self): + sql = ( + "SELECT first_name, last_name, email FROM citizens " "WHERE age <= @max_age" + ) + params = {"max_age": 30} + param_types = {"max_age": "INT64"} + size = 1 << 20 + client = _Client(self.PROJECT_ID) + instance = _Instance(self.INSTANCE_NAME, client=client) + database = _Database(self.DATABASE_NAME, instance=instance) + batch_txn = self._make_one(database) + snapshot = batch_txn._snapshot = self._make_snapshot() + snapshot.partition_query.return_value = self.TOKENS + retry = Retry(deadline=60) + batches = list( + batch_txn.generate_query_batches( + sql, + params=params, + param_types=param_types, + partition_size_bytes=size, + retry=retry, + timeout=2.0, + ) + ) + + expected_query = { + "sql": sql, + "data_boost_enabled": False, + "params": params, + "param_types": param_types, + "query_options": client._query_options, + "directed_read_options": None, + } + self.assertEqual(len(batches), len(self.TOKENS)) + for batch, token in zip(batches, self.TOKENS): + self.assertEqual(batch["partition"], token) + self.assertEqual(batch["query"], expected_query) + + snapshot.partition_query.assert_called_once_with( + sql=sql, + params=params, + param_types=param_types, + partition_size_bytes=size, + max_partitions=None, + retry=retry, + timeout=2.0, + ) + + def test_generate_query_batches_w_data_boost_enabled(self): + sql = "SELECT COUNT(*) FROM table_name" + client = _Client(self.PROJECT_ID) + instance = _Instance(self.INSTANCE_NAME, client=client) + database = _Database(self.DATABASE_NAME, instance=instance) + batch_txn = self._make_one(database) + snapshot = batch_txn._snapshot = self._make_snapshot() + snapshot.partition_query.return_value = self.TOKENS + + batches = list(batch_txn.generate_query_batches(sql, data_boost_enabled=True)) + + expected_query = { + "sql": sql, + "data_boost_enabled": True, + "query_options": client._query_options, + "directed_read_options": None, + } + self.assertEqual(len(batches), len(self.TOKENS)) + for batch, token in zip(batches, self.TOKENS): + self.assertEqual(batch["partition"], token) + self.assertEqual(batch["query"], expected_query) + + snapshot.partition_query.assert_called_once_with( + sql=sql, + params=None, + param_types=None, + partition_size_bytes=None, + max_partitions=None, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + ) + + def test_generate_query_batches_w_directed_read_options(self): + sql = "SELECT COUNT(*) FROM table_name" + client = _Client(self.PROJECT_ID) + instance = _Instance(self.INSTANCE_NAME, client=client) + database = _Database(self.DATABASE_NAME, instance=instance) + batch_txn = self._make_one(database) + snapshot = batch_txn._snapshot = self._make_snapshot() + snapshot.partition_query.return_value = self.TOKENS + + batches = list( + batch_txn.generate_query_batches( + 
sql, directed_read_options=DIRECTED_READ_OPTIONS + ) + ) + + expected_query = { + "sql": sql, + "data_boost_enabled": False, + "query_options": client._query_options, + "directed_read_options": DIRECTED_READ_OPTIONS, + } + self.assertEqual(len(batches), len(self.TOKENS)) + for batch, token in zip(batches, self.TOKENS): + self.assertEqual(batch["partition"], token) + self.assertEqual(batch["query"], expected_query) + + snapshot.partition_query.assert_called_once_with( + sql=sql, + params=None, + param_types=None, + partition_size_bytes=None, + max_partitions=None, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, ) def test_process_query_batch(self): @@ -1941,7 +3137,65 @@ def test_process_query_batch(self): self.assertIs(found, expected) snapshot.execute_sql.assert_called_once_with( - sql=sql, params=params, param_types=param_types, partition=token + sql=sql, + params=params, + param_types=param_types, + partition=token, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + ) + + def test_process_query_batch_w_retry_timeout(self): + sql = ( + "SELECT first_name, last_name, email FROM citizens " "WHERE age <= @max_age" + ) + params = {"max_age": 30} + param_types = {"max_age": "INT64"} + token = b"TOKEN" + batch = { + "partition": token, + "query": {"sql": sql, "params": params, "param_types": param_types}, + } + database = self._make_database() + batch_txn = self._make_one(database) + snapshot = batch_txn._snapshot = self._make_snapshot() + expected = snapshot.execute_sql.return_value = object() + retry = Retry(deadline=60) + found = batch_txn.process_query_batch(batch, retry=retry, timeout=2.0) + + self.assertIs(found, expected) + + snapshot.execute_sql.assert_called_once_with( + sql=sql, + params=params, + param_types=param_types, + partition=token, + retry=retry, + timeout=2.0, + ) + + def test_process_query_batch_w_directed_read_options(self): + sql = "SELECT first_name, last_name, email FROM citizens" + token = b"TOKEN" + batch = { + "partition": token, + "query": {"sql": sql, "directed_read_options": DIRECTED_READ_OPTIONS}, + } + database = self._make_database() + batch_txn = self._make_one(database) + snapshot = batch_txn._snapshot = self._make_snapshot() + expected = snapshot.execute_sql.return_value = object() + + found = batch_txn.process_query_batch(batch) + + self.assertIs(found, expected) + + snapshot.execute_sql.assert_called_once_with( + sql=sql, + partition=token, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + directed_read_options=DIRECTED_READ_OPTIONS, ) def test_close_wo_session(self): @@ -1954,11 +3208,25 @@ def test_close_w_session(self): database = self._make_database() batch_txn = self._make_one(database) session = batch_txn._session = self._make_session() + # Configure session as non-multiplexed (default behavior) + session.is_multiplexed = False batch_txn.close() session.delete.assert_called_once_with() + def test_close_w_multiplexed_session(self): + database = self._make_database() + batch_txn = self._make_one(database) + session = batch_txn._session = self._make_session() + # Configure session as multiplexed + session.is_multiplexed = True + + batch_txn.close() + + # Multiplexed sessions should not be deleted + session.delete.assert_not_called() + def test_process_w_invalid_batch(self): token = b"TOKEN" batch = {"partition": token, "bogus": b"BOGUS"} @@ -1995,6 +3263,8 @@ def test_process_w_read_batch(self): keyset=keyset, index=self.INDEX, partition=token, + retry=gapic_v1.method.DEFAULT, + 
timeout=gapic_v1.method.DEFAULT, ) def test_process_w_query_batch(self): @@ -2018,57 +3288,264 @@ def test_process_w_query_batch(self): self.assertIs(found, expected) snapshot.execute_sql.assert_called_once_with( - sql=sql, params=params, param_types=param_types, partition=token + sql=sql, + params=params, + param_types=param_types, + partition=token, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, ) -def _make_instance_api(): - from google.cloud.spanner_admin_instance_v1.gapic.instance_admin_client import ( - InstanceAdminClient, - ) +class TestMutationGroupsCheckout(_BaseTest): + def _get_target_class(self): + from google.cloud.spanner_v1.database import MutationGroupsCheckout - return mock.create_autospec(InstanceAdminClient) + return MutationGroupsCheckout + @staticmethod + def _make_spanner_client(): + from google.cloud.spanner_v1 import SpannerClient -class TestRestoreInfo(_BaseTest): - def test_from_pb(self): - from google.cloud.spanner_v1.database import RestoreInfo - from google.cloud.spanner_admin_database_v1.gapic import enums - from google.cloud.spanner_admin_database_v1.proto import ( - backup_pb2, - spanner_database_admin_pb2 as admin_v1_pb2, - ) + return mock.create_autospec(SpannerClient) + + def test_ctor(self): + from google.cloud.spanner_v1.batch import MutationGroups + + database = _Database(self.DATABASE_NAME) + pool = database._pool = _Pool() + session = _Session(database) + pool.put(session) + checkout = self._make_one(database) + self.assertIs(checkout._database, database) + + with checkout as groups: + self.assertIsNone(pool._session) + self.assertIsInstance(groups, MutationGroups) + self.assertIs(groups._session, session) + + self.assertIs(pool._session, session) + + def test_context_mgr_success(self): + import datetime + from google.cloud.spanner_v1._helpers import _make_list_value_pbs + from google.cloud.spanner_v1 import BatchWriteRequest + from google.cloud.spanner_v1 import BatchWriteResponse + from google.cloud.spanner_v1 import Mutation + from google.cloud._helpers import UTC from google.cloud._helpers import _datetime_to_pb_timestamp + from google.cloud.spanner_v1.batch import MutationGroups + from google.rpc.status_pb2 import Status - timestamp = self._make_timestamp() - restore_pb = admin_v1_pb2.RestoreInfo( - source_type=1, - backup_info=backup_pb2.BackupInfo( - backup="backup_path", - create_time=_datetime_to_pb_timestamp(timestamp), - source_database="database_path", - ), + now = datetime.datetime.utcnow().replace(tzinfo=UTC) + now_pb = _datetime_to_pb_timestamp(now) + status_pb = Status(code=200) + response = BatchWriteResponse( + commit_timestamp=now_pb, indexes=[0], status=status_pb + ) + database = _Database(self.DATABASE_NAME) + api = database.spanner_api = self._make_spanner_client() + api.batch_write.return_value = [response] + pool = database._pool = _Pool() + session = _Session(database) + pool.put(session) + checkout = self._make_one(database) + + request_options = RequestOptions(transaction_tag=self.TRANSACTION_TAG) + request = BatchWriteRequest( + session=self.SESSION_NAME, + mutation_groups=[ + BatchWriteRequest.MutationGroup( + mutations=[ + Mutation( + insert=Mutation.Write( + table="table", + columns=["col"], + values=_make_list_value_pbs([["val"]]), + ) + ) + ] + ) + ], + request_options=request_options, + ) + with checkout as groups: + self.assertIsNone(pool._session) + self.assertIsInstance(groups, MutationGroups) + self.assertIs(groups._session, session) + group = groups.group() + group.insert("table", ["col"], 
[["val"]]) + groups.batch_write(request_options) + self.assertEqual(groups.committed, True) + + self.assertIs(pool._session, session) + + api.batch_write.assert_called_once_with( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) - restore_info = RestoreInfo.from_pb(restore_pb) - self.assertEqual(restore_info.source_type, enums.RestoreSourceType.BACKUP) - self.assertEqual(restore_info.backup_info.create_time, timestamp) + + def test_context_mgr_failure(self): + from google.cloud.spanner_v1.batch import MutationGroups + + database = _Database(self.DATABASE_NAME) + pool = database._pool = _Pool() + session = _Session(database) + pool.put(session) + checkout = self._make_one(database) + + class Testing(Exception): + pass + + with self.assertRaises(Testing): + with checkout as groups: + self.assertIsNone(pool._session) + self.assertIsInstance(groups, MutationGroups) + self.assertIs(groups._session, session) + raise Testing() + + self.assertIs(pool._session, session) + + def test_context_mgr_session_not_found_error(self): + from google.cloud.exceptions import NotFound + + database = _Database(self.DATABASE_NAME) + session = _Session(database, name="session-1") + session.exists = mock.MagicMock(return_value=False) + pool = database._pool = _Pool() + new_session = _Session(database, name="session-2") + new_session.create = mock.MagicMock(return_value=[]) + pool._new_session = mock.MagicMock(return_value=new_session) + + pool.put(session) + checkout = self._make_one(database) + + self.assertEqual(pool._session, session) + with self.assertRaises(NotFound): + with checkout as _: + raise NotFound("Session not found") + # Assert that session-1 was removed from pool and new session was added. + self.assertEqual(pool._session, new_session) + + def test_context_mgr_table_not_found_error(self): + from google.cloud.exceptions import NotFound + + database = _Database(self.DATABASE_NAME) + session = _Session(database, name="session-1") + session.exists = mock.MagicMock(return_value=True) + pool = database._pool = _Pool() + pool._new_session = mock.MagicMock(return_value=[]) + + pool.put(session) + checkout = self._make_one(database) + + self.assertEqual(pool._session, session) + with self.assertRaises(NotFound): + with checkout as _: + raise NotFound("Table not found") + # Assert that session-1 was not removed from pool. + self.assertEqual(pool._session, session) + pool._new_session.assert_not_called() + + def test_context_mgr_unknown_error(self): + database = _Database(self.DATABASE_NAME) + session = _Session(database) + pool = database._pool = _Pool() + pool._new_session = mock.MagicMock(return_value=[]) + pool.put(session) + checkout = self._make_one(database) + + class Testing(Exception): + pass + + self.assertEqual(pool._session, session) + with self.assertRaises(Testing): + with checkout as _: + raise Testing("Unknown error.") + # Assert that session-1 was not removed from pool. 
+ self.assertEqual(pool._session, session) + pool._new_session.assert_not_called() + + +def _make_instance_api(): + from google.cloud.spanner_admin_instance_v1 import InstanceAdminClient + + return mock.create_autospec(InstanceAdminClient) + + +def _make_database_admin_api(): + from google.cloud.spanner_admin_database_v1 import DatabaseAdminClient + + return mock.create_autospec(DatabaseAdminClient) class _Client(object): - def __init__(self, project=TestDatabase.PROJECT_ID): - from google.cloud.spanner_v1.proto.spanner_pb2 import ExecuteSqlRequest + NTH_CLIENT = AtomicCounter() + + def __init__( + self, + project=TestDatabase.PROJECT_ID, + route_to_leader_enabled=True, + directed_read_options=None, + default_transaction_options=DefaultTransactionOptions(), + observability_options=None, + ): + from google.cloud.spanner_v1 import ExecuteSqlRequest self.project = project self.project_name = "projects/" + self.project self._endpoint_cache = {} + self.database_admin_api = _make_database_admin_api() self.instance_admin_api = _make_instance_api() self._client_info = mock.Mock() self._client_options = mock.Mock() + self._client_options.universe_domain = "googleapis.com" + self._client_options.api_key = None + self._client_options.client_cert_source = None + self._client_options.credentials_file = None + self._client_options.scopes = None + self._client_options.quota_project_id = None + self._client_options.api_audience = None + self._client_options.api_endpoint = "spanner.googleapis.com" self._query_options = ExecuteSqlRequest.QueryOptions(optimizer_version="1") + self.route_to_leader_enabled = route_to_leader_enabled + self.directed_read_options = directed_read_options + self.default_transaction_options = default_transaction_options + self.observability_options = observability_options + self._nth_client_id = _Client.NTH_CLIENT.increment() + self._nth_request = AtomicCounter() + + # Mock credentials with proper attributes + self.credentials = mock.Mock() + self.credentials.token = "mock_token" + self.credentials.expiry = None + self.credentials.valid = True + + # Mock the spanner API to return proper session names + self._spanner_api = mock.Mock() + + # Configure create_session to return a proper session with string name + def mock_create_session(request, **kwargs): + session_response = mock.Mock() + session_response.name = f"projects/{self.project}/instances/instance-id/databases/database-id/sessions/session-{self._nth_request.increment()}" + return session_response + + self._spanner_api.create_session = mock_create_session + + @property + def _next_nth_request(self): + return self._nth_request.increment() class _Instance(object): - def __init__(self, name, client=None, emulator_host=None): + def __init__(self, name, client=_Client(), emulator_host=None): self.name = name self.instance_id = name.rsplit("/", 1)[1] self._client = client @@ -2081,10 +3558,64 @@ def __init__(self, name): class _Database(object): + log_commit_stats = False + _route_to_leader_enabled = True + NTH_CLIENT_ID = AtomicCounter() + def __init__(self, name, instance=None): self.name = name self.database_id = name.rsplit("/", 1)[1] self._instance = instance + from logging import Logger + + self.logger = mock.create_autospec(Logger, instance=True) + self._directed_read_options = None + self.default_transaction_options = DefaultTransactionOptions() + self._nth_request = AtomicCounter() + self._nth_client_id = _Database.NTH_CLIENT_ID.increment() + + # Mock sessions manager for multiplexed sessions support + 
self._sessions_manager = mock.Mock() + # Configure get_session to return sessions from the pool + self._sessions_manager.get_session = mock.Mock( + side_effect=lambda tx_type: self._pool.get() + if hasattr(self, "_pool") and self._pool + else None + ) + self._sessions_manager.put_session = mock.Mock( + side_effect=lambda session: self._pool.put(session) + if hasattr(self, "_pool") and self._pool + else None + ) + + @property + def sessions_manager(self): + """Returns the database sessions manager. + + :rtype: Mock + :returns: The mock sessions manager for this database. + """ + return self._sessions_manager + + @property + def _next_nth_request(self): + return self._nth_request.increment() + + def metadata_with_request_id( + self, nth_request, nth_attempt, prior_metadata=[], span=None + ): + return _metadata_with_request_id( + self._nth_client_id, + self._channel_id, + nth_request, + nth_attempt, + prior_metadata, + span, + ) + + @property + def _channel_id(self): + return 1 class _Pool(object): @@ -2102,7 +3633,6 @@ def put(self, session): class _Session(object): - _rows = () _created = False _transaction = None @@ -2114,13 +3644,20 @@ def __init__( self._database = database self.name = name self._run_transaction_function = run_transaction_function + self.is_multiplexed = False # Default to non-multiplexed for tests def run_in_transaction(self, func, *args, **kw): if self._run_transaction_function: - func(*args, **kw) + mock_txn = mock.Mock() + mock_txn._transaction_id = b"mock_transaction_id" + func(mock_txn, *args, **kw) self._retried = (func, args, kw) return self._committed + @property + def session_id(self): + return self.name + class _MockIterator(object): def __init__(self, *values, **kw): diff --git a/tests/unit/test_database_session_manager.py b/tests/unit/test_database_session_manager.py new file mode 100644 index 0000000000..c6156b5e8c --- /dev/null +++ b/tests/unit/test_database_session_manager.py @@ -0,0 +1,321 @@ +# Copyright 2025 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +from datetime import timedelta +from mock import Mock, patch +from os import environ +from time import time, sleep +from typing import Callable +from unittest import TestCase + +from google.api_core.exceptions import BadRequest, FailedPrecondition +from google.cloud.spanner_v1.database_sessions_manager import DatabaseSessionsManager +from google.cloud.spanner_v1.database_sessions_manager import TransactionType +from tests._builders import build_database + + +# Shorten polling and refresh intervals for testing. +@patch.multiple( + DatabaseSessionsManager, + _MAINTENANCE_THREAD_POLLING_INTERVAL=timedelta(seconds=1), + _MAINTENANCE_THREAD_REFRESH_INTERVAL=timedelta(seconds=2), +) +class TestDatabaseSessionManager(TestCase): + @classmethod + def setUpClass(cls): + # Save the original environment variables. + cls._original_env = dict(environ) + + @classmethod + def tearDownClass(cls): + # Restore environment variables. 
+        environ.clear()
+        environ.update(cls._original_env)
+
+    def setUp(self):
+        # Build session manager.
+        database = build_database()
+        self._manager = database._sessions_manager
+
+        # Mock the session pool.
+        pool = self._manager._pool
+        pool.get = Mock(wraps=pool.get)
+        pool.put = Mock(wraps=pool.put)
+
+    def tearDown(self):
+        # If the maintenance thread is still alive, set the event and wait
+        # for the thread to terminate. We need to do this to ensure that the
+        # thread does not interfere with other tests.
+        manager = self._manager
+        thread = manager._multiplexed_session_thread
+
+        if thread and thread.is_alive():
+            manager._multiplexed_session_terminate_event.set()
+            self._assert_true_with_timeout(lambda: not thread.is_alive())
+
+    def test_read_only_pooled(self):
+        manager = self._manager
+        pool = manager._pool
+
+        self._disable_multiplexed_sessions()
+
+        # Get session from pool.
+        session = manager.get_session(TransactionType.READ_ONLY)
+        self.assertFalse(session.is_multiplexed)
+        pool.get.assert_called_once()
+
+        # Return session to pool.
+        manager.put_session(session)
+        pool.put.assert_called_once_with(session)
+
+    def test_read_only_multiplexed(self):
+        manager = self._manager
+        pool = manager._pool
+
+        self._enable_multiplexed_sessions()
+
+        # Session is created.
+        session_1 = manager.get_session(TransactionType.READ_ONLY)
+        self.assertTrue(session_1.is_multiplexed)
+        manager.put_session(session_1)
+
+        # Session is re-used.
+        session_2 = manager.get_session(TransactionType.READ_ONLY)
+        self.assertEqual(session_1, session_2)
+        manager.put_session(session_2)
+
+        # Verify that pool was not used.
+        pool.get.assert_not_called()
+        pool.put.assert_not_called()
+
+        # Verify logger calls.
+        info = manager._database.logger.info
+        info.assert_called_once_with("Created multiplexed session.")
+
+    def test_partitioned_pooled(self):
+        manager = self._manager
+        pool = manager._pool
+
+        self._disable_multiplexed_sessions()
+
+        # Get session from pool.
+        session = manager.get_session(TransactionType.PARTITIONED)
+        self.assertFalse(session.is_multiplexed)
+        pool.get.assert_called_once()
+
+        # Return session to pool.
+        manager.put_session(session)
+        pool.put.assert_called_once_with(session)
+
+    def test_partitioned_multiplexed(self):
+        manager = self._manager
+        pool = manager._pool
+
+        self._enable_multiplexed_sessions()
+
+        # Session is created.
+        session_1 = manager.get_session(TransactionType.PARTITIONED)
+        self.assertTrue(session_1.is_multiplexed)
+        manager.put_session(session_1)
+
+        # Session is re-used.
+        session_2 = manager.get_session(TransactionType.PARTITIONED)
+        self.assertEqual(session_1, session_2)
+        manager.put_session(session_2)
+
+        # Verify that pool was not used.
+        pool.get.assert_not_called()
+        pool.put.assert_not_called()
+
+        # Verify logger calls.
+        info = manager._database.logger.info
+        info.assert_called_once_with("Created multiplexed session.")
+
+    def test_read_write_pooled(self):
+        manager = self._manager
+        pool = manager._pool
+
+        self._disable_multiplexed_sessions()
+
+        # Get session from pool.
+        session = manager.get_session(TransactionType.READ_WRITE)
+        self.assertFalse(session.is_multiplexed)
+        pool.get.assert_called_once()
+
+        # Return session to pool.
+        manager.put_session(session)
+        pool.put.assert_called_once_with(session)
+
+    def test_read_write_multiplexed(self):
+        manager = self._manager
+        pool = manager._pool
+
+        self._enable_multiplexed_sessions()
+
+        # Session is created.
+        session_1 = manager.get_session(TransactionType.READ_WRITE)
+        self.assertTrue(session_1.is_multiplexed)
+        manager.put_session(session_1)
+
+        # Session is re-used.
+        session_2 = manager.get_session(TransactionType.READ_WRITE)
+        self.assertEqual(session_1, session_2)
+        manager.put_session(session_2)
+
+        # Verify that pool was not used.
+        pool.get.assert_not_called()
+        pool.put.assert_not_called()
+
+        # Verify logger calls.
+        info = manager._database.logger.info
+        info.assert_called_once_with("Created multiplexed session.")
+
+    def test_multiplexed_maintenance(self):
+        manager = self._manager
+        self._enable_multiplexed_sessions()
+
+        # Maintenance thread is started.
+        session_1 = manager.get_session(TransactionType.READ_ONLY)
+        self.assertTrue(session_1.is_multiplexed)
+        self.assertTrue(manager._multiplexed_session_thread.is_alive())
+
+        # Wait for maintenance thread to execute.
+        self._assert_true_with_timeout(
+            lambda: manager._database.spanner_api.create_session.call_count > 1
+        )
+
+        # Verify that maintenance thread created new multiplexed session.
+        session_2 = manager.get_session(TransactionType.READ_ONLY)
+        self.assertTrue(session_2.is_multiplexed)
+        self.assertNotEqual(session_1, session_2)
+
+        # Verify logger calls.
+        info = manager._database.logger.info
+        info.assert_called_with("Created multiplexed session.")
+
+    def test_exception_bad_request(self):
+        manager = self._manager
+        api = manager._database.spanner_api
+        api.create_session.side_effect = BadRequest("")
+
+        with self.assertRaises(BadRequest):
+            manager.get_session(TransactionType.READ_ONLY)
+
+    def test_exception_failed_precondition(self):
+        manager = self._manager
+        api = manager._database.spanner_api
+        api.create_session.side_effect = FailedPrecondition("")
+
+        with self.assertRaises(FailedPrecondition):
+            manager.get_session(TransactionType.READ_ONLY)
+
+    def test__use_multiplexed_read_only(self):
+        transaction_type = TransactionType.READ_ONLY
+
+        environ[DatabaseSessionsManager._ENV_VAR_MULTIPLEXED] = "false"
+        self.assertFalse(DatabaseSessionsManager._use_multiplexed(transaction_type))
+
+        environ[DatabaseSessionsManager._ENV_VAR_MULTIPLEXED] = "true"
+        self.assertTrue(DatabaseSessionsManager._use_multiplexed(transaction_type))
+
+    def test__use_multiplexed_partitioned(self):
+        transaction_type = TransactionType.PARTITIONED
+
+        environ[DatabaseSessionsManager._ENV_VAR_MULTIPLEXED_PARTITIONED] = "false"
+        self.assertFalse(DatabaseSessionsManager._use_multiplexed(transaction_type))
+
+        environ[DatabaseSessionsManager._ENV_VAR_MULTIPLEXED_PARTITIONED] = "true"
+        self.assertTrue(DatabaseSessionsManager._use_multiplexed(transaction_type))
+
+        # Test default behavior (should be enabled)
+        del environ[DatabaseSessionsManager._ENV_VAR_MULTIPLEXED_PARTITIONED]
+        self.assertTrue(DatabaseSessionsManager._use_multiplexed(transaction_type))
+
+    def test__use_multiplexed_read_write(self):
+        transaction_type = TransactionType.READ_WRITE
+
+        environ[DatabaseSessionsManager._ENV_VAR_MULTIPLEXED_READ_WRITE] = "false"
+        self.assertFalse(DatabaseSessionsManager._use_multiplexed(transaction_type))
+
+        environ[DatabaseSessionsManager._ENV_VAR_MULTIPLEXED_READ_WRITE] = "true"
+        self.assertTrue(DatabaseSessionsManager._use_multiplexed(transaction_type))
+
+        # Test default behavior (should be enabled)
+        del environ[DatabaseSessionsManager._ENV_VAR_MULTIPLEXED_READ_WRITE]
+        self.assertTrue(DatabaseSessionsManager._use_multiplexed(transaction_type))
+
+    def test__use_multiplexed_unsupported_transaction_type(self):
+        unsupported_type = "UNSUPPORTED_TRANSACTION_TYPE"
+
+        with self.assertRaises(ValueError):
+            DatabaseSessionsManager._use_multiplexed(unsupported_type)
+
+    def test__getenv(self):
+        true_values = ["1", " 1", " 1", "true", "True", "TRUE", " true "]
+        for value in true_values:
+            environ[DatabaseSessionsManager._ENV_VAR_MULTIPLEXED] = value
+            self.assertTrue(
+                DatabaseSessionsManager._use_multiplexed(TransactionType.READ_ONLY)
+            )
+
+        false_values = ["false", "False", "FALSE", " false "]
+        for value in false_values:
+            environ[DatabaseSessionsManager._ENV_VAR_MULTIPLEXED] = value
+            self.assertFalse(
+                DatabaseSessionsManager._use_multiplexed(TransactionType.READ_ONLY)
+            )
+
+        # Test that empty string and "0" are now treated as true (default enabled)
+        default_true_values = ["", "0", "anything", "random"]
+        for value in default_true_values:
+            environ[DatabaseSessionsManager._ENV_VAR_MULTIPLEXED] = value
+            self.assertTrue(
+                DatabaseSessionsManager._use_multiplexed(TransactionType.READ_ONLY)
+            )
+
+        del environ[DatabaseSessionsManager._ENV_VAR_MULTIPLEXED]
+        self.assertTrue(
+            DatabaseSessionsManager._use_multiplexed(TransactionType.READ_ONLY)
+        )
+
+    def _assert_true_with_timeout(self, condition: Callable) -> None:
+        """Asserts that the given condition is met within a timeout period.
+
+        :type condition: Callable
+        :param condition: A callable that returns a boolean indicating whether the condition is met.
+        """
+
+        sleep_seconds = 0.1
+        timeout_seconds = 10
+
+        start_time = time()
+        while not condition() and time() - start_time < timeout_seconds:
+            sleep(sleep_seconds)
+
+        self.assertTrue(condition())
+
+    @staticmethod
+    def _disable_multiplexed_sessions() -> None:
+        """Sets environment variables to disable multiplexed sessions for all transaction types."""
+
+        environ[DatabaseSessionsManager._ENV_VAR_MULTIPLEXED] = "false"
+        environ[DatabaseSessionsManager._ENV_VAR_MULTIPLEXED_PARTITIONED] = "false"
+        environ[DatabaseSessionsManager._ENV_VAR_MULTIPLEXED_READ_WRITE] = "false"
+
+    @staticmethod
+    def _enable_multiplexed_sessions() -> None:
+        """Sets environment variables to enable multiplexed sessions for all transaction types."""
+
+        environ[DatabaseSessionsManager._ENV_VAR_MULTIPLEXED] = "true"
+        environ[DatabaseSessionsManager._ENV_VAR_MULTIPLEXED_PARTITIONED] = "true"
+        environ[DatabaseSessionsManager._ENV_VAR_MULTIPLEXED_READ_WRITE] = "true"
diff --git a/tests/unit/test_datatypes.py b/tests/unit/test_datatypes.py
new file mode 100644
index 0000000000..65ccacb4ff
--- /dev/null
+++ b/tests/unit/test_datatypes.py
@@ -0,0 +1,98 @@
+# Copyright 2024 Google LLC All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import unittest
+
+import json
+from google.cloud.spanner_v1.data_types import JsonObject
+
+
+class Test_JsonObject_serde(unittest.TestCase):
+    def test_w_dict(self):
+        data = {"foo": "bar"}
+        expected = json.dumps(data, sort_keys=True, separators=(",", ":"))
+        data_jsonobject = JsonObject(data)
+        self.assertEqual(data_jsonobject.serialize(), expected)
+
+    def test_w_list_of_dict(self):
+        data = [{"foo1": "bar1"}, {"foo2": "bar2"}]
+        expected = json.dumps(data, sort_keys=True, separators=(",", ":"))
+        data_jsonobject = JsonObject(data)
+        self.assertEqual(data_jsonobject.serialize(), expected)
+
+    def test_w_JsonObject_of_dict(self):
+        data = {"foo": "bar"}
+        expected = json.dumps(data, sort_keys=True, separators=(",", ":"))
+        data_jsonobject = JsonObject(JsonObject(data))
+        self.assertEqual(data_jsonobject.serialize(), expected)
+
+    def test_w_JsonObject_of_list_of_dict(self):
+        data = [{"foo1": "bar1"}, {"foo2": "bar2"}]
+        expected = json.dumps(data, sort_keys=True, separators=(",", ":"))
+        data_jsonobject = JsonObject(JsonObject(data))
+        self.assertEqual(data_jsonobject.serialize(), expected)
+
+    def test_w_simple_float_JsonData(self):
+        data = 1.1
+        expected = json.dumps(data)
+        data_jsonobject = JsonObject(data)
+        self.assertEqual(data_jsonobject.serialize(), expected)
+
+    def test_w_simple_str_JsonData(self):
+        data = "foo"
+        expected = json.dumps(data)
+        data_jsonobject = JsonObject(data)
+        self.assertEqual(data_jsonobject.serialize(), expected)
+
+    def test_w_empty_str_JsonData(self):
+        data = ""
+        expected = json.dumps(data)
+        data_jsonobject = JsonObject(data)
+        self.assertEqual(data_jsonobject.serialize(), expected)
+
+    def test_w_None_JsonData(self):
+        data = None
+        data_jsonobject = JsonObject(data)
+        self.assertEqual(data_jsonobject.serialize(), None)
+
+    def test_w_list_of_simple_JsonData(self):
+        data = [1.1, "foo"]
+        expected = json.dumps(data, sort_keys=True, separators=(",", ":"))
+        data_jsonobject = JsonObject(data)
+        self.assertEqual(data_jsonobject.serialize(), expected)
+
+    def test_w_empty_list(self):
+        data = []
+        expected = json.dumps(data)
+        data_jsonobject = JsonObject(data)
+        self.assertEqual(data_jsonobject.serialize(), expected)
+
+    def test_w_empty_dict(self):
+        data = [{}]
+        expected = json.dumps(data)
+        data_jsonobject = JsonObject(data)
+        self.assertEqual(data_jsonobject.serialize(), expected)
+
+    def test_w_JsonObject_of_simple_JsonData(self):
+        data = 1.1
+        expected = json.dumps(data)
+        data_jsonobject = JsonObject(JsonObject(data))
+        self.assertEqual(data_jsonobject.serialize(), expected)
+
+    def test_w_JsonObject_of_list_of_simple_JsonData(self):
+        data = [1.1, "foo"]
+        expected = json.dumps(data, sort_keys=True, separators=(",", ":"))
+        data_jsonobject = JsonObject(JsonObject(data))
+        self.assertEqual(data_jsonobject.serialize(), expected)
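The serde tests above document the ``JsonObject`` contract: mappings and lists
serialize to compact, key-sorted JSON, scalars pass through ``json.dumps``
unchanged, and ``None`` serializes to ``None`` rather than the string "null".
A short illustration of that behavior (the sample values here are chosen for
illustration only):

.. code-block:: python

    import json

    from google.cloud.spanner_v1.data_types import JsonObject

    # Compact separators and sorted keys give a canonical encoding.
    assert JsonObject({"b": 1, "a": 2}).serialize() == '{"a":2,"b":1}'

    # Scalars round-trip through plain json.dumps().
    assert JsonObject(1.1).serialize() == json.dumps(1.1)

    # None is preserved as None (a SQL NULL), not encoded as "null".
    assert JsonObject(None).serialize() is None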
diff --git a/tests/unit/test_instance.py b/tests/unit/test_instance.py
index c1a0b187ac..f3bf6726c0 100644
--- a/tests/unit/test_instance.py
+++ b/tests/unit/test_instance.py
@@ -13,12 +13,13 @@
 # limitations under the License.
 
 import unittest
-
 import mock
+from google.auth.credentials import AnonymousCredentials
+from google.cloud.spanner_v1 import DefaultTransactionOptions
 
 
-class TestInstance(unittest.TestCase):
+class TestInstance(unittest.TestCase):
     PROJECT = "project"
     PARENT = "projects/" + PROJECT
     INSTANCE_ID = "instance-id"
@@ -27,6 +28,7 @@ class TestInstance(unittest.TestCase):
     LOCATION = "projects/" + PROJECT + "/locations/" + CONFIG_NAME
     DISPLAY_NAME = "display_name"
     NODE_COUNT = 5
+    PROCESSING_UNITS = 5000
     OP_ID = 8915
     OP_NAME = "operations/projects/%s/instances/%soperations/%d" % (
         PROJECT,
@@ -38,6 +40,8 @@ class TestInstance(unittest.TestCase):
     TIMEOUT_SECONDS = 1
     DATABASE_ID = "database_id"
     DATABASE_NAME = "%s/databases/%s" % (INSTANCE_NAME, DATABASE_ID)
+    LABELS = {"test": "true"}
+    FIELD_MASK = ["config", "display_name", "processing_units", "labels"]
 
     def _getTargetClass(self):
         from google.cloud.spanner_v1.instance import Instance
@@ -57,6 +61,7 @@ def test_constructor_defaults(self):
         self.assertIs(instance.configuration_name, None)
         self.assertEqual(instance.node_count, DEFAULT_NODE_COUNT)
         self.assertEqual(instance.display_name, self.INSTANCE_ID)
+        self.assertEqual(instance.labels, {})
 
     def test_constructor_non_default(self):
         DISPLAY_NAME = "display_name"
@@ -68,12 +73,14 @@ def test_constructor_non_default(self):
             configuration_name=self.CONFIG_NAME,
             node_count=self.NODE_COUNT,
             display_name=DISPLAY_NAME,
+            labels=self.LABELS,
         )
         self.assertEqual(instance.instance_id, self.INSTANCE_ID)
         self.assertIs(instance._client, client)
         self.assertEqual(instance.configuration_name, self.CONFIG_NAME)
         self.assertEqual(instance.node_count, self.NODE_COUNT)
         self.assertEqual(instance.display_name, DISPLAY_NAME)
+        self.assertEqual(instance.labels, self.LABELS)
 
     def test_copy(self):
         DISPLAY_NAME = "display_name"
@@ -92,12 +99,10 @@ def test_copy(self):
         self.assertEqual(instance, new_instance)
 
     def test__update_from_pb_success(self):
-        from google.cloud.spanner_admin_instance_v1.proto import (
-            spanner_instance_admin_pb2 as admin_v1_pb2,
-        )
+        from google.cloud.spanner_admin_instance_v1 import Instance
 
         display_name = "display_name"
-        instance_pb = admin_v1_pb2.Instance(display_name=display_name)
+        instance_pb = Instance(display_name=display_name)
         instance = self._make_one(None, None, None, None)
         self.assertEqual(instance.display_name, None)
 
@@ -105,11 +110,9 @@ def test__update_from_pb_success(self):
         self.assertEqual(instance.display_name, display_name)
 
     def test__update_from_pb_no_display_name(self):
-        from google.cloud.spanner_admin_instance_v1.proto import (
-            spanner_instance_admin_pb2 as admin_v1_pb2,
-        )
+        from google.cloud.spanner_admin_instance_v1 import Instance
 
-        instance_pb = admin_v1_pb2.Instance()
+        instance_pb = Instance()
         instance = self._make_one(None, None, None, None)
         self.assertEqual(instance.display_name, None)
         with self.assertRaises(ValueError):
@@ -117,44 +120,39 @@ def test__update_from_pb_no_display_name(self):
         self.assertEqual(instance.display_name, None)
 
     def test_from_pb_bad_instance_name(self):
-        from google.cloud.spanner_admin_instance_v1.proto import (
-            spanner_instance_admin_pb2 as admin_v1_pb2,
-        )
+        from google.cloud.spanner_admin_instance_v1 import Instance
 
         instance_name = "INCORRECT_FORMAT"
-        instance_pb = admin_v1_pb2.Instance(name=instance_name)
+        instance_pb = Instance(name=instance_name)
 
         klass = self._getTargetClass()
         with self.assertRaises(ValueError):
             klass.from_pb(instance_pb, None)
 
     def test_from_pb_project_mistmatch(self):
-        from google.cloud.spanner_admin_instance_v1.proto import (
-            spanner_instance_admin_pb2 as admin_v1_pb2,
-        )
+        from google.cloud.spanner_admin_instance_v1 import Instance
 
         ALT_PROJECT = "ALT_PROJECT"
         client = _Client(project=ALT_PROJECT)
 
         self.assertNotEqual(self.PROJECT, ALT_PROJECT)
 
-        instance_pb = admin_v1_pb2.Instance(name=self.INSTANCE_NAME)
+        instance_pb = Instance(name=self.INSTANCE_NAME)
 
         klass = self._getTargetClass()
         with self.assertRaises(ValueError):
             klass.from_pb(instance_pb, client)
 
     def test_from_pb_success(self):
-        from google.cloud.spanner_admin_instance_v1.proto import (
-            spanner_instance_admin_pb2 as admin_v1_pb2,
-        )
+        from google.cloud.spanner_admin_instance_v1 import Instance
 
         client = _Client(project=self.PROJECT)
-        instance_pb = admin_v1_pb2.Instance(
+        instance_pb = Instance(
             name=self.INSTANCE_NAME,
             config=self.CONFIG_NAME,
             display_name=self.INSTANCE_ID,
+            labels=self.LABELS,
         )
 
         klass = self._getTargetClass()
@@ -163,6 +161,7 @@ def test_from_pb_success(self):
         self.assertEqual(instance._client, client)
         self.assertEqual(instance.instance_id, self.INSTANCE_ID)
         self.assertEqual(instance.configuration_name, self.CONFIG_NAME)
+        self.assertEqual(instance.labels, self.LABELS)
 
     def test_name_property(self):
         client = _Client(project=self.PROJECT)
@@ -170,6 +169,14 @@ def test_name_property(self):
         instance = self._make_one(self.INSTANCE_ID, client, self.CONFIG_NAME)
         self.assertEqual(instance.name, self.INSTANCE_NAME)
 
+    def test_labels_property(self):
+        client = _Client(project=self.PROJECT)
+
+        instance = self._make_one(
+            self.INSTANCE_ID, client, self.CONFIG_NAME, labels=self.LABELS
+        )
+        self.assertEqual(instance.labels, self.LABELS)
+
     def test___eq__(self):
         client = object()
         instance1 = self._make_one(self.INSTANCE_ID, client, self.CONFIG_NAME)
@@ -226,7 +233,7 @@ def test_create_already_exists(self):
         self.assertEqual(instance.name, self.INSTANCE_NAME)
         self.assertEqual(instance.config, self.CONFIG_NAME)
         self.assertEqual(instance.display_name, self.INSTANCE_ID)
-        self.assertEqual(instance.node_count, 1)
+        self.assertEqual(instance.processing_units, 1000)
         self.assertEqual(metadata, [("google-cloud-resource-prefix", instance.name)])
 
     def test_create_success(self):
@@ -241,6 +248,7 @@ def test_create_success(self):
             configuration_name=self.CONFIG_NAME,
             display_name=self.DISPLAY_NAME,
             node_count=self.NODE_COUNT,
+            labels=self.LABELS,
         )
 
         future = instance.create()
@@ -253,7 +261,37 @@ def test_create_success(self):
         self.assertEqual(instance.name, self.INSTANCE_NAME)
         self.assertEqual(instance.config, self.CONFIG_NAME)
         self.assertEqual(instance.display_name, self.DISPLAY_NAME)
-        self.assertEqual(instance.node_count, self.NODE_COUNT)
+        self.assertEqual(instance.processing_units, self.PROCESSING_UNITS)
+        self.assertEqual(instance.labels, self.LABELS)
+        self.assertEqual(metadata, [("google-cloud-resource-prefix", instance.name)])
+
+    def test_create_with_processing_units(self):
+        op_future = _FauxOperationFuture()
+        client = _Client(self.PROJECT)
+        api = client.instance_admin_api = _FauxInstanceAdminAPI(
+            _create_instance_response=op_future
+        )
+        instance = self._make_one(
+            self.INSTANCE_ID,
+            client,
+            configuration_name=self.CONFIG_NAME,
+            display_name=self.DISPLAY_NAME,
+            processing_units=self.PROCESSING_UNITS,
+            labels=self.LABELS,
+        )
+
+        future = instance.create()
+
+        self.assertIs(future, op_future)
+
+        (parent, instance_id, instance, metadata) = api._created_instance
+        self.assertEqual(parent, self.PARENT)
+        self.assertEqual(instance_id, self.INSTANCE_ID)
+        self.assertEqual(instance.name, self.INSTANCE_NAME)
+        self.assertEqual(instance.config, self.CONFIG_NAME)
+        self.assertEqual(instance.display_name, self.DISPLAY_NAME)
+        self.assertEqual(instance.processing_units, self.PROCESSING_UNITS)
+        self.assertEqual(instance.labels, self.LABELS)
         self.assertEqual(metadata, [("google-cloud-resource-prefix", instance.name)])
 
     def test_exists_instance_grpc_error(self):
@@ -281,12 +319,10 @@ def test_exists_instance_not_found(self):
         self.assertEqual(metadata, [("google-cloud-resource-prefix", instance.name)])
 
     def test_exists_success(self):
-        from google.cloud.spanner_admin_instance_v1.proto import (
-            spanner_instance_admin_pb2 as admin_v1_pb2,
-        )
+        from google.cloud.spanner_admin_instance_v1 import Instance
 
         client = _Client(self.PROJECT)
-        instance_pb = admin_v1_pb2.Instance(
+        instance_pb = Instance(
             name=self.INSTANCE_NAME,
             config=self.CONFIG_NAME,
             display_name=self.DISPLAY_NAME,
@@ -331,16 +367,15 @@ def test_reload_instance_not_found(self):
         self.assertEqual(metadata, [("google-cloud-resource-prefix", instance.name)])
 
     def test_reload_success(self):
-        from google.cloud.spanner_admin_instance_v1.proto import (
-            spanner_instance_admin_pb2 as admin_v1_pb2,
-        )
+        from google.cloud.spanner_admin_instance_v1 import Instance
 
         client = _Client(self.PROJECT)
-        instance_pb = admin_v1_pb2.Instance(
+        instance_pb = Instance(
             name=self.INSTANCE_NAME,
             config=self.CONFIG_NAME,
             display_name=self.DISPLAY_NAME,
             node_count=self.NODE_COUNT,
+            labels=self.LABELS,
         )
         api = client.instance_admin_api = _FauxInstanceAdminAPI(
             _get_instance_response=instance_pb
@@ -352,6 +387,7 @@ def test_reload_success(self):
         self.assertEqual(instance.configuration_name, self.CONFIG_NAME)
         self.assertEqual(instance.node_count, self.NODE_COUNT)
         self.assertEqual(instance.display_name, self.DISPLAY_NAME)
+        self.assertEqual(instance.labels, self.LABELS)
 
         name, metadata = api._got_instance
         self.assertEqual(name, self.INSTANCE_NAME)
@@ -385,7 +421,7 @@ def test_update_not_found(self):
             instance.update()
 
         instance, field_mask, metadata = api._updated_instance
-        self.assertEqual(field_mask.paths, ["config", "display_name", "node_count"])
+        self.assertEqual(field_mask.paths, self.FIELD_MASK)
         self.assertEqual(instance.name, self.INSTANCE_NAME)
         self.assertEqual(instance.config, self.CONFIG_NAME)
         self.assertEqual(instance.display_name, self.INSTANCE_ID)
@@ -404,6 +440,7 @@ def test_update_success(self):
             configuration_name=self.CONFIG_NAME,
             node_count=self.NODE_COUNT,
             display_name=self.DISPLAY_NAME,
+            labels=self.LABELS,
         )
 
         future = instance.update()
@@ -411,11 +448,42 @@ def test_update_success(self):
         self.assertIs(future, op_future)
 
         instance, field_mask, metadata = api._updated_instance
-        self.assertEqual(field_mask.paths, ["config", "display_name", "node_count"])
+        self.assertEqual(field_mask.paths, self.FIELD_MASK)
         self.assertEqual(instance.name, self.INSTANCE_NAME)
         self.assertEqual(instance.config, self.CONFIG_NAME)
         self.assertEqual(instance.display_name, self.DISPLAY_NAME)
         self.assertEqual(instance.node_count, self.NODE_COUNT)
+        self.assertEqual(instance.labels, self.LABELS)
+        self.assertEqual(metadata, [("google-cloud-resource-prefix", instance.name)])
+
+    def test_update_success_with_processing_units(self):
+        op_future = _FauxOperationFuture()
+        client = _Client(self.PROJECT)
+        api = client.instance_admin_api = _FauxInstanceAdminAPI(
+            _update_instance_response=op_future
+        )
+        instance = self._make_one(
+            self.INSTANCE_ID,
+            client,
+            configuration_name=self.CONFIG_NAME,
+            processing_units=self.PROCESSING_UNITS,
+            display_name=self.DISPLAY_NAME,
+            labels=self.LABELS,
+        )
+
+        future = instance.update()
+
+        self.assertIs(future, op_future)
+
+        instance, field_mask, metadata = api._updated_instance
+        self.assertEqual(
+            field_mask.paths, ["config", "display_name", "processing_units", "labels"]
+        )
+        self.assertEqual(instance.name, self.INSTANCE_NAME)
+        self.assertEqual(instance.config, self.CONFIG_NAME)
+        self.assertEqual(instance.display_name, self.DISPLAY_NAME)
+        self.assertEqual(instance.processing_units, self.PROCESSING_UNITS)
+        self.assertEqual(instance.labels, self.LABELS)
         self.assertEqual(metadata, [("google-cloud-resource-prefix", instance.name)])
 
     def test_delete_grpc_error(self):
@@ -474,20 +542,33 @@ def test_database_factory_defaults(self):
         self.assertIs(database._instance, instance)
         self.assertEqual(list(database.ddl_statements), [])
         self.assertIsInstance(database._pool, BurstyPool)
+        self.assertIsNone(database._logger)
         pool = database._pool
         self.assertIs(pool._database, database)
+        self.assertIsNone(database.database_role)
 
     def test_database_factory_explicit(self):
+        from logging import Logger
         from google.cloud.spanner_v1.database import Database
         from tests._fixtures import DDL_STATEMENTS
 
         client = _Client(self.PROJECT)
         instance = self._make_one(self.INSTANCE_ID, client, self.CONFIG_NAME)
         DATABASE_ID = "database-id"
+        DATABASE_ROLE = "dummy-role"
         pool = _Pool()
+        logger = mock.create_autospec(Logger, instance=True)
+        encryption_config = {"kms_key_name": "kms_key_name"}
+        proto_descriptors = b""
 
         database = instance.database(
-            DATABASE_ID, ddl_statements=DDL_STATEMENTS, pool=pool
+            DATABASE_ID,
+            ddl_statements=DDL_STATEMENTS,
+            pool=pool,
+            logger=logger,
+            encryption_config=encryption_config,
+            database_role=DATABASE_ROLE,
+            proto_descriptors=proto_descriptors,
         )
 
         self.assertIsInstance(database, Database)
@@ -495,85 +576,80 @@ def test_database_factory_explicit(self):
         self.assertIs(database._instance, instance)
         self.assertEqual(list(database.ddl_statements), DDL_STATEMENTS)
         self.assertIs(database._pool, pool)
+        self.assertIs(database._logger, logger)
         self.assertIs(pool._bound, database)
+        self.assertIs(database._encryption_config, encryption_config)
+        self.assertIs(database.database_role, DATABASE_ROLE)
+        self.assertIs(database._proto_descriptors, proto_descriptors)
 
     def test_list_databases(self):
-        from google.cloud.spanner_admin_database_v1.gapic import database_admin_client
-        from google.cloud.spanner_admin_database_v1.proto import (
-            spanner_database_admin_pb2,
-        )
-        from google.cloud.spanner_v1.database import Database
+        from google.cloud.spanner_admin_database_v1 import Database as DatabasePB
+        from google.cloud.spanner_admin_database_v1 import DatabaseAdminClient
+        from google.cloud.spanner_admin_database_v1 import ListDatabasesRequest
+        from google.cloud.spanner_admin_database_v1 import ListDatabasesResponse
 
-        api = database_admin_client.DatabaseAdminClient(mock.Mock())
+        api = DatabaseAdminClient(credentials=AnonymousCredentials())
         client = _Client(self.PROJECT)
         client.database_admin_api = api
         instance = self._make_one(self.INSTANCE_ID, client)
 
-        databases_pb = spanner_database_admin_pb2.ListDatabasesResponse(
+        databases_pb = ListDatabasesResponse(
             databases=[
-                spanner_database_admin_pb2.Database(
-                    name="{}/databases/aa".format(self.INSTANCE_NAME)
-                ),
-                spanner_database_admin_pb2.Database(
-                    name="{}/databases/bb".format(self.INSTANCE_NAME)
-                ),
+                DatabasePB(name="{}/databases/aa".format(self.INSTANCE_NAME)),
+                DatabasePB(name="{}/databases/bb".format(self.INSTANCE_NAME)),
             ]
         )
 
-        ld_api = api._inner_api_calls["list_databases"] = mock.Mock(
-            return_value=databases_pb
-        )
+        ld_api = api._transport._wrapped_methods[
            api._transport.list_databases
+        ] = mock.Mock(return_value=databases_pb)
 
         response = instance.list_databases()
         databases = list(response)
 
-        self.assertIsInstance(databases[0], Database)
+        self.assertIsInstance(databases[0], DatabasePB)
         self.assertTrue(databases[0].name.endswith("/aa"))
         self.assertTrue(databases[1].name.endswith("/bb"))
 
-        expected_metadata = [
+        expected_metadata = (
             ("google-cloud-resource-prefix", instance.name),
             ("x-goog-request-params", "parent={}".format(instance.name)),
-        ]
+        )
         ld_api.assert_called_once_with(
-            spanner_database_admin_pb2.ListDatabasesRequest(parent=self.INSTANCE_NAME),
+            ListDatabasesRequest(parent=self.INSTANCE_NAME),
             metadata=expected_metadata,
             retry=mock.ANY,
             timeout=mock.ANY,
         )
 
     def test_list_databases_w_options(self):
-        from google.cloud.spanner_admin_database_v1.gapic import database_admin_client
-        from google.cloud.spanner_admin_database_v1.proto import (
-            spanner_database_admin_pb2,
-        )
+        from google.cloud.spanner_admin_database_v1 import DatabaseAdminClient
+        from google.cloud.spanner_admin_database_v1 import ListDatabasesRequest
+        from google.cloud.spanner_admin_database_v1 import ListDatabasesResponse
 
-        api = database_admin_client.DatabaseAdminClient(mock.Mock())
+        api = DatabaseAdminClient(credentials=AnonymousCredentials())
         client = _Client(self.PROJECT)
         client.database_admin_api = api
         instance = self._make_one(self.INSTANCE_ID, client)
 
-        databases_pb = spanner_database_admin_pb2.ListDatabasesResponse(databases=[])
+        databases_pb = ListDatabasesResponse(databases=[])
 
-        ld_api = api._inner_api_calls["list_databases"] = mock.Mock(
-            return_value=databases_pb
-        )
+        ld_api = api._transport._wrapped_methods[
+            api._transport.list_databases
+        ] = mock.Mock(return_value=databases_pb)
 
         page_size = 42
-        page_token = "token"
-        response = instance.list_databases(page_size=page_size, page_token=page_token)
+        response = instance.list_databases(page_size=page_size)
         databases = list(response)
 
         self.assertEqual(databases, [])
 
-        expected_metadata = [
+        expected_metadata = (
             ("google-cloud-resource-prefix", instance.name),
             ("x-goog-request-params", "parent={}".format(instance.name)),
-        ]
+        )
         ld_api.assert_called_once_with(
-            spanner_database_admin_pb2.ListDatabasesRequest(
-                parent=self.INSTANCE_NAME, page_size=page_size, page_token=page_token
-            ),
+            ListDatabasesRequest(parent=self.INSTANCE_NAME, page_size=page_size),
             metadata=expected_metadata,
             retry=mock.ANY,
             timeout=mock.ANY,
@@ -598,15 +674,23 @@ def test_backup_factory_explicit(self):
         import datetime
         from google.cloud._helpers import UTC
         from google.cloud.spanner_v1.backup import Backup
+        from google.cloud.spanner_admin_database_v1 import CreateBackupEncryptionConfig
 
         client = _Client(self.PROJECT)
         instance = self._make_one(self.INSTANCE_ID, client, self.CONFIG_NAME)
         BACKUP_ID = "backup-id"
         DATABASE_NAME = "database-name"
         timestamp = datetime.datetime.utcnow().replace(tzinfo=UTC)
+        encryption_config = CreateBackupEncryptionConfig(
+            encryption_type=CreateBackupEncryptionConfig.EncryptionType.CUSTOMER_MANAGED_ENCRYPTION,
+            kms_key_name="kms_key_name",
+        )
 
         backup = instance.backup(
-            BACKUP_ID, database=DATABASE_NAME, expire_time=timestamp
+            BACKUP_ID,
+            database=DATABASE_NAME,
+            expire_time=timestamp,
+            encryption_config=encryption_config,
         )
 
         self.assertIsInstance(backup, Backup)
@@ -614,78 +698,81 @@ def test_backup_factory_explicit(self):
         self.assertIs(backup._instance, instance)
         self.assertEqual(backup._database, DATABASE_NAME)
         self.assertIs(backup._expire_time, timestamp)
+        self.assertEqual(backup._encryption_config, encryption_config)
 
     def test_list_backups_defaults(self):
-        from google.cloud.spanner_admin_database_v1.gapic import database_admin_client
-        from google.cloud.spanner_admin_database_v1.proto import backup_pb2
-        from google.cloud.spanner_v1.backup import Backup
+        from google.cloud.spanner_admin_database_v1 import Backup as BackupPB
+        from google.cloud.spanner_admin_database_v1 import DatabaseAdminClient
+        from google.cloud.spanner_admin_database_v1 import ListBackupsRequest
+        from google.cloud.spanner_admin_database_v1 import ListBackupsResponse
 
-        api = database_admin_client.DatabaseAdminClient(mock.Mock())
+        api = DatabaseAdminClient(credentials=AnonymousCredentials())
         client = _Client(self.PROJECT)
         client.database_admin_api = api
         instance = self._make_one(self.INSTANCE_ID, client)
 
-        backups_pb = backup_pb2.ListBackupsResponse(
+        backups_pb = ListBackupsResponse(
             backups=[
-                backup_pb2.Backup(name=instance.name + "/backups/op1"),
-                backup_pb2.Backup(name=instance.name + "/backups/op2"),
-                backup_pb2.Backup(name=instance.name + "/backups/op3"),
+                BackupPB(name=instance.name + "/backups/op1"),
+                BackupPB(name=instance.name + "/backups/op2"),
+                BackupPB(name=instance.name + "/backups/op3"),
             ]
         )
 
-        ldo_api = api._inner_api_calls["list_backups"] = mock.Mock(
-            return_value=backups_pb
-        )
+        lbo_api = api._transport._wrapped_methods[
+            api._transport.list_backups
+        ] = mock.Mock(return_value=backups_pb)
 
         backups = instance.list_backups()
 
         for backup in backups:
-            self.assertIsInstance(backup, Backup)
+            self.assertIsInstance(backup, BackupPB)
 
-        expected_metadata = [
+        expected_metadata = (
             ("google-cloud-resource-prefix", instance.name),
             ("x-goog-request-params", "parent={}".format(instance.name)),
-        ]
-        ldo_api.assert_called_once_with(
-            backup_pb2.ListBackupsRequest(parent=self.INSTANCE_NAME),
+        )
+        lbo_api.assert_called_once_with(
+            ListBackupsRequest(parent=self.INSTANCE_NAME),
             metadata=expected_metadata,
             retry=mock.ANY,
             timeout=mock.ANY,
        )
 
     def test_list_backups_w_options(self):
-        from google.cloud.spanner_admin_database_v1.gapic import database_admin_client
-        from google.cloud.spanner_admin_database_v1.proto import backup_pb2
-        from google.cloud.spanner_v1.backup import Backup
+        from google.cloud.spanner_admin_database_v1 import Backup as BackupPB
+        from google.cloud.spanner_admin_database_v1 import DatabaseAdminClient
+        from google.cloud.spanner_admin_database_v1 import ListBackupsRequest
+        from google.cloud.spanner_admin_database_v1 import ListBackupsResponse
 
-        api = database_admin_client.DatabaseAdminClient(mock.Mock())
+        api = DatabaseAdminClient(credentials=AnonymousCredentials())
         client = _Client(self.PROJECT)
         client.database_admin_api = api
         instance = self._make_one(self.INSTANCE_ID, client)
 
-        backups_pb = backup_pb2.ListBackupsResponse(
+        backups_pb = ListBackupsResponse(
             backups=[
-                backup_pb2.Backup(name=instance.name + "/backups/op1"),
-                backup_pb2.Backup(name=instance.name + "/backups/op2"),
-                backup_pb2.Backup(name=instance.name + "/backups/op3"),
+                BackupPB(name=instance.name + "/backups/op1"),
+                BackupPB(name=instance.name + "/backups/op2"),
+                BackupPB(name=instance.name + "/backups/op3"),
             ]
         )
 
-        ldo_api = api._inner_api_calls["list_backups"] = mock.Mock(
-            return_value=backups_pb
-        )
+        ldo_api = api._transport._wrapped_methods[
            api._transport.list_backups
+        ] = mock.Mock(return_value=backups_pb)
 
         backups = instance.list_backups(filter_="filter", page_size=10)
 
         for backup in backups:
-            self.assertIsInstance(backup, Backup)
+            self.assertIsInstance(backup, BackupPB)
 
-        expected_metadata = [
+        expected_metadata = (
             ("google-cloud-resource-prefix", instance.name),
             ("x-goog-request-params", "parent={}".format(instance.name)),
-        ]
+        )
         ldo_api.assert_called_once_with(
-            backup_pb2.ListBackupsRequest(
+            ListBackupsRequest(
                 parent=self.INSTANCE_NAME, filter="filter", page_size=10
             ),
             metadata=expected_metadata,
@@ -695,113 +782,128 @@ def test_list_backups_w_options(self):
 
     def test_list_backup_operations_defaults(self):
         from google.api_core.operation import Operation
-        from google.cloud.spanner_admin_database_v1.gapic import database_admin_client
-        from google.cloud.spanner_admin_database_v1.proto import backup_pb2
+        from google.cloud.spanner_admin_database_v1 import CreateBackupMetadata
+        from google.cloud.spanner_admin_database_v1 import DatabaseAdminClient
+        from google.cloud.spanner_admin_database_v1 import ListBackupOperationsRequest
+        from google.cloud.spanner_admin_database_v1 import ListBackupOperationsResponse
         from google.longrunning import operations_pb2
         from google.protobuf.any_pb2 import Any
 
-        api = database_admin_client.DatabaseAdminClient(mock.Mock())
+        api = DatabaseAdminClient(credentials=AnonymousCredentials())
         client = _Client(self.PROJECT)
         client.database_admin_api = api
         instance = self._make_one(self.INSTANCE_ID, client)
 
         create_backup_metadata = Any()
-        create_backup_metadata.Pack(backup_pb2.CreateBackupMetadata())
+        create_backup_metadata.Pack(
+            CreateBackupMetadata.pb(
+                CreateBackupMetadata(name="backup", database="database")
+            )
+        )
 
-        operations_pb = backup_pb2.ListBackupOperationsResponse(
+        operations_pb = ListBackupOperationsResponse(
             operations=[
                 operations_pb2.Operation(name="op1", metadata=create_backup_metadata)
             ]
         )
 
-        ldo_api = api._inner_api_calls["list_backup_operations"] = mock.Mock(
-            return_value=operations_pb
-        )
-
-        operations = instance.list_backup_operations()
+        ldo_api = api._transport._wrapped_methods[
            api._transport.list_backup_operations
+        ] = mock.Mock(return_value=operations_pb)
 
-        for op in operations:
-            self.assertIsInstance(op, Operation)
+        ops = instance.list_backup_operations()
 
-        expected_metadata = [
+        expected_metadata = (
             ("google-cloud-resource-prefix", instance.name),
             ("x-goog-request-params", "parent={}".format(instance.name)),
-        ]
+        )
         ldo_api.assert_called_once_with(
-            backup_pb2.ListBackupOperationsRequest(parent=self.INSTANCE_NAME),
+            ListBackupOperationsRequest(parent=self.INSTANCE_NAME),
             metadata=expected_metadata,
             retry=mock.ANY,
             timeout=mock.ANY,
         )
+        self.assertTrue(all([type(op) is Operation for op in ops]))
 
     def test_list_backup_operations_w_options(self):
         from google.api_core.operation import Operation
-        from google.cloud.spanner_admin_database_v1.gapic import database_admin_client
-        from google.cloud.spanner_admin_database_v1.proto import backup_pb2
+        from google.cloud.spanner_admin_database_v1 import CreateBackupMetadata
+        from google.cloud.spanner_admin_database_v1 import DatabaseAdminClient
+        from google.cloud.spanner_admin_database_v1 import ListBackupOperationsRequest
+        from google.cloud.spanner_admin_database_v1 import ListBackupOperationsResponse
         from google.longrunning import operations_pb2
         from google.protobuf.any_pb2 import Any
 
-        api = database_admin_client.DatabaseAdminClient(mock.Mock())
+        api = DatabaseAdminClient(credentials=AnonymousCredentials())
         client = _Client(self.PROJECT)
         client.database_admin_api = api
         instance = self._make_one(self.INSTANCE_ID, client)
 
         create_backup_metadata = Any()
-        create_backup_metadata.Pack(backup_pb2.CreateBackupMetadata())
+        create_backup_metadata.Pack(
+            CreateBackupMetadata.pb(
+                CreateBackupMetadata(name="backup", database="database")
+            )
+        )
 
-        operations_pb = backup_pb2.ListBackupOperationsResponse(
+        operations_pb = ListBackupOperationsResponse(
             operations=[
                 operations_pb2.Operation(name="op1", metadata=create_backup_metadata)
             ]
         )
 
-        ldo_api = api._inner_api_calls["list_backup_operations"] = mock.Mock(
-            return_value=operations_pb
-        )
+        ldo_api = api._transport._wrapped_methods[
            api._transport.list_backup_operations
+        ] = mock.Mock(return_value=operations_pb)
 
-        operations = instance.list_backup_operations(filter_="filter", page_size=10)
+        ops = instance.list_backup_operations(filter_="filter", page_size=10)
 
-        for op in operations:
-            self.assertIsInstance(op, Operation)
-
-        expected_metadata = [
+        expected_metadata = (
             ("google-cloud-resource-prefix", instance.name),
             ("x-goog-request-params", "parent={}".format(instance.name)),
-        ]
+        )
         ldo_api.assert_called_once_with(
-            backup_pb2.ListBackupOperationsRequest(
+            ListBackupOperationsRequest(
                 parent=self.INSTANCE_NAME, filter="filter", page_size=10
             ),
             metadata=expected_metadata,
             retry=mock.ANY,
             timeout=mock.ANY,
         )
+        self.assertTrue(all([type(op) is Operation for op in ops]))
 
     def test_list_database_operations_defaults(self):
         from google.api_core.operation import Operation
-        from google.cloud.spanner_admin_database_v1.gapic import database_admin_client
-        from google.cloud.spanner_admin_database_v1.proto import (
-            spanner_database_admin_pb2,
+        from google.cloud.spanner_admin_database_v1 import CreateDatabaseMetadata
+        from google.cloud.spanner_admin_database_v1 import DatabaseAdminClient
+        from google.cloud.spanner_admin_database_v1 import ListDatabaseOperationsRequest
+        from google.cloud.spanner_admin_database_v1 import (
+            ListDatabaseOperationsResponse,
+        )
+        from google.cloud.spanner_admin_database_v1 import (
+            OptimizeRestoredDatabaseMetadata,
         )
         from google.longrunning import operations_pb2
         from google.protobuf.any_pb2 import Any
 
-        api = database_admin_client.DatabaseAdminClient(mock.Mock())
+        api = DatabaseAdminClient(credentials=AnonymousCredentials())
         client = _Client(self.PROJECT)
         client.database_admin_api = api
         instance = self._make_one(self.INSTANCE_ID, client)
 
         create_database_metadata = Any()
         create_database_metadata.Pack(
-            spanner_database_admin_pb2.CreateDatabaseMetadata()
+            CreateDatabaseMetadata.pb(CreateDatabaseMetadata(database="database"))
         )
 
         optimize_database_metadata = Any()
         optimize_database_metadata.Pack(
-            spanner_database_admin_pb2.OptimizeRestoredDatabaseMetadata()
+            OptimizeRestoredDatabaseMetadata.pb(
+                OptimizeRestoredDatabaseMetadata(name="database")
+            )
         )
 
-        databases_pb = spanner_database_admin_pb2.ListDatabaseOperationsResponse(
+        databases_pb = ListDatabaseOperationsResponse(
             operations=[
                 operations_pb2.Operation(name="op1", metadata=create_database_metadata),
                 operations_pb2.Operation(
@@ -810,53 +912,61 @@ def test_list_database_operations_defaults(self):
             ]
         )
 
-        ldo_api = api._inner_api_calls["list_database_operations"] = mock.Mock(
-            return_value=databases_pb
-        )
-
-        operations = instance.list_database_operations()
+        ldo_api = api._transport._wrapped_methods[
            api._transport.list_database_operations
+        ] = mock.Mock(return_value=databases_pb)
 
-        for op in operations:
-            self.assertIsInstance(op, Operation)
+        ops = instance.list_database_operations()
 
-        expected_metadata = [
+        expected_metadata = (
             ("google-cloud-resource-prefix", instance.name),
             ("x-goog-request-params", "parent={}".format(instance.name)),
-        ]
+        )
         ldo_api.assert_called_once_with(
-            spanner_database_admin_pb2.ListDatabaseOperationsRequest(
-                parent=self.INSTANCE_NAME
-            ),
+            ListDatabaseOperationsRequest(parent=self.INSTANCE_NAME),
             metadata=expected_metadata,
             retry=mock.ANY,
             timeout=mock.ANY,
         )
+        self.assertTrue(all([type(op) is Operation for op in ops]))
 
     def test_list_database_operations_w_options(self):
         from google.api_core.operation import Operation
-        from google.cloud.spanner_admin_database_v1.gapic import database_admin_client
-        from google.cloud.spanner_admin_database_v1.proto import (
-            spanner_database_admin_pb2,
-        )
+        from google.cloud.spanner_admin_database_v1 import DatabaseAdminClient
+        from google.cloud.spanner_admin_database_v1 import ListDatabaseOperationsRequest
+        from google.cloud.spanner_admin_database_v1 import (
+            ListDatabaseOperationsResponse,
+        )
+        from google.cloud.spanner_admin_database_v1 import RestoreDatabaseMetadata
+        from google.cloud.spanner_admin_database_v1 import RestoreSourceType
+        from google.cloud.spanner_admin_database_v1 import UpdateDatabaseDdlMetadata
         from google.longrunning import operations_pb2
         from google.protobuf.any_pb2 import Any
 
-        api = database_admin_client.DatabaseAdminClient(mock.Mock())
+        api = DatabaseAdminClient(credentials=AnonymousCredentials())
         client = _Client(self.PROJECT)
         client.database_admin_api = api
         instance = self._make_one(self.INSTANCE_ID, client)
 
         restore_database_metadata = Any()
         restore_database_metadata.Pack(
-            spanner_database_admin_pb2.RestoreDatabaseMetadata()
+            RestoreDatabaseMetadata.pb(
+                RestoreDatabaseMetadata(
+                    name="database", source_type=RestoreSourceType.BACKUP
+                )
+            )
         )
 
         update_database_metadata = Any()
         update_database_metadata.Pack(
-            spanner_database_admin_pb2.UpdateDatabaseDdlMetadata()
+            UpdateDatabaseDdlMetadata.pb(
+                UpdateDatabaseDdlMetadata(
+                    database="database", statements=["statements"]
+                )
+            )
         )
 
-        databases_pb = spanner_database_admin_pb2.ListDatabaseOperationsResponse(
+        databases_pb = ListDatabaseOperationsResponse(
             operations=[
                 operations_pb2.Operation(
                     name="op1", metadata=restore_database_metadata
@@ -865,31 +975,29 @@ def test_list_database_operations_w_options(self):
             ]
         )
 
-        ldo_api = api._inner_api_calls["list_database_operations"] = mock.Mock(
-            return_value=databases_pb
-        )
-
-        operations = instance.list_database_operations(filter_="filter", page_size=10)
+        ldo_api = api._transport._wrapped_methods[
            api._transport.list_database_operations
+        ] = mock.Mock(return_value=databases_pb)
 
-        for op in operations:
-            self.assertIsInstance(op, Operation)
+        ops = instance.list_database_operations(filter_="filter", page_size=10)
 
-        expected_metadata = [
+        expected_metadata = (
             ("google-cloud-resource-prefix", instance.name),
             ("x-goog-request-params", "parent={}".format(instance.name)),
-        ]
+        )
         ldo_api.assert_called_once_with(
-            spanner_database_admin_pb2.ListDatabaseOperationsRequest(
+            ListDatabaseOperationsRequest(
                 parent=self.INSTANCE_NAME, filter="filter", page_size=10
             ),
             metadata=expected_metadata,
             retry=mock.ANY,
             timeout=mock.ANY,
        )
+        self.assertTrue(all([type(op) is Operation for op in ops]))
 
     def test_type_string_to_type_pb_hit(self):
-        from google.cloud.spanner_admin_database_v1.proto import (
-            spanner_database_admin_pb2,
+        from google.cloud.spanner_admin_database_v1 import (
+            OptimizeRestoredDatabaseMetadata,
         )
         from google.cloud.spanner_v1 import instance
 
@@ -897,7 +1005,7 @@ def test_type_string_to_type_pb_hit(self):
         self.assertIn(type_string, instance._OPERATION_METADATA_TYPES)
         self.assertEqual(
             instance._type_string_to_type_pb(type_string),
-            spanner_database_admin_pb2.OptimizeRestoredDatabaseMetadata,
+            OptimizeRestoredDatabaseMetadata,
         )
 
     def test_type_string_to_type_pb_miss(self):
@@ -912,6 +1020,9 @@ def __init__(self, project, timeout_seconds=None):
         self.project = project
         self.project_name = "projects/" + self.project
         self.timeout_seconds = timeout_seconds
+        self.route_to_leader_enabled = True
+        self.directed_read_options = None
+        self.default_transaction_options = DefaultTransactionOptions()
 
     def copy(self):
         from copy import deepcopy
@@ -927,7 +1038,6 @@ def __eq__(self, other):
 
 
 class _FauxInstanceAdminAPI(object):
-
     _create_instance_conflict = False
     _instance_not_found = False
     _rpc_error = False
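The new ``processing_units`` coverage above tracks the instance-sizing model:
an instance is provisioned with either ``node_count`` or ``processing_units``,
where one node corresponds to 1,000 processing units (the tests' one-node
instance reports ``processing_units=1000``). A sketch of creating an instance
sized in processing units, reusing the tests' constants; ``client`` is assumed
to be an existing ``google.cloud.spanner_v1.Client``:

.. code-block:: python

    # `client` is an existing google.cloud.spanner_v1.Client.
    instance = client.instance(
        "instance-id",
        configuration_name="projects/project/instanceConfigs/config-name",
        display_name="display_name",
        processing_units=5000,  # equivalent to node_count=5
        labels={"test": "true"},
    )

    operation = instance.create()  # returns a long-running operation
    operation.result()  # block until provisioning finishes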
[u"key_1"] + key_1 = ["key_1"] krange = self._make_one(end_open=key_1) self.assertEqual(krange, krange) def test___eq___other_type(self): - key_1 = [u"key_1"] + key_1 = ["key_1"] krange = self._make_one(end_open=key_1) self.assertNotEqual(krange, object()) def test___eq___other_hit(self): - key_1 = [u"key_1"] + key_1 = ["key_1"] krange = self._make_one(end_open=key_1) other = self._make_one(end_open=key_1) self.assertEqual(krange, other) def test___eq___other(self): - key_1 = [u"key_1"] - key_2 = [u"key_2"] + key_1 = ["key_1"] + key_2 = ["key_2"] krange = self._make_one(end_open=key_1) other = self._make_one(start_closed=key_2, end_open=key_1) self.assertNotEqual(krange, other) def test_to_pb_w_start_closed_and_end_open(self): - from google.protobuf.struct_pb2 import ListValue - from google.protobuf.struct_pb2 import Value - from google.cloud.spanner_v1.proto.keys_pb2 import KeyRange + from google.cloud.spanner_v1.types.keys import KeyRange as KeyRangePB - key1 = u"key_1" - key2 = u"key_2" + key1 = "key_1" + key2 = "key_2" key_range = self._make_one(start_closed=[key1], end_open=[key2]) key_range_pb = key_range._to_pb() - expected = KeyRange( - start_closed=ListValue(values=[Value(string_value=key1)]), - end_open=ListValue(values=[Value(string_value=key2)]), + expected = KeyRangePB( + start_closed=[key1], + end_open=[key2], ) self.assertEqual(key_range_pb, expected) def test_to_pb_w_start_open_and_end_closed(self): - from google.protobuf.struct_pb2 import ListValue - from google.protobuf.struct_pb2 import Value - from google.cloud.spanner_v1.proto.keys_pb2 import KeyRange + from google.cloud.spanner_v1.types.keys import KeyRange as KeyRangePB - key1 = u"key_1" - key2 = u"key_2" + key1 = "key_1" + key2 = "key_2" key_range = self._make_one(start_open=[key1], end_closed=[key2]) key_range_pb = key_range._to_pb() - expected = KeyRange( - start_open=ListValue(values=[Value(string_value=key1)]), - end_closed=ListValue(values=[Value(string_value=key2)]), - ) + expected = KeyRangePB(start_open=[key1], end_closed=[key2]) self.assertEqual(key_range_pb, expected) def test_to_pb_w_empty_list(self): - from google.protobuf.struct_pb2 import ListValue - from google.protobuf.struct_pb2 import Value - from google.cloud.spanner_v1.proto.keys_pb2 import KeyRange + from google.cloud.spanner_v1.types.keys import KeyRange as KeyRangePB - key = u"key" + key = "key" key_range = self._make_one(start_closed=[], end_closed=[key]) key_range_pb = key_range._to_pb() - expected = KeyRange( - start_closed=ListValue(values=[]), - end_closed=ListValue(values=[Value(string_value=key)]), - ) + expected = KeyRangePB(start_closed=[], end_closed=[key]) self.assertEqual(key_range_pb, expected) def test_to_dict_w_start_closed_and_end_open(self): - key1 = u"key_1" - key2 = u"key_2" + key1 = "key_1" + key2 = "key_2" key_range = self._make_one(start_closed=[key1], end_open=[key2]) expected = {"start_closed": [key1], "end_open": [key2]} self.assertEqual(key_range._to_dict(), expected) def test_to_dict_w_start_open_and_end_closed(self): - key1 = u"key_1" - key2 = u"key_2" + key1 = "key_1" + key2 = "key_2" key_range = self._make_one(start_open=[key1], end_closed=[key2]) expected = {"start_open": [key1], "end_closed": [key2]} self.assertEqual(key_range._to_dict(), expected) def test_to_dict_w_end_closed(self): - key = u"key" + key = "key" key_range = self._make_one(end_closed=[key]) expected = {"end_closed": [key]} self.assertEqual(key_range._to_dict(), expected) @@ -196,7 +184,7 @@ def test_ctor_w_all(self): 
self.assertEqual(keyset.ranges, []) def test_ctor_w_keys(self): - KEYS = [[u"key1"], [u"key2"]] + KEYS = [["key1"], ["key2"]] keyset = self._make_one(keys=KEYS) @@ -207,8 +195,8 @@ def test_ctor_w_keys(self): def test_ctor_w_ranges(self): from google.cloud.spanner_v1.keyset import KeyRange - range_1 = KeyRange(start_closed=[u"key1"], end_open=[u"key3"]) - range_2 = KeyRange(start_open=[u"key5"], end_closed=[u"key6"]) + range_1 = KeyRange(start_closed=["key1"], end_open=["key3"]) + range_2 = KeyRange(start_open=["key5"], end_closed=["key6"]) keyset = self._make_one(ranges=[range_1, range_2]) @@ -217,15 +205,14 @@ def test_ctor_w_ranges(self): self.assertEqual(keyset.ranges, [range_1, range_2]) def test_ctor_w_all_and_keys(self): - with self.assertRaises(ValueError): self._make_one(all_=True, keys=[["key1"], ["key2"]]) def test_ctor_w_all_and_ranges(self): from google.cloud.spanner_v1.keyset import KeyRange - range_1 = KeyRange(start_closed=[u"key1"], end_open=[u"key3"]) - range_2 = KeyRange(start_open=[u"key5"], end_closed=[u"key6"]) + range_1 = KeyRange(start_closed=["key1"], end_open=["key3"]) + range_2 = KeyRange(start_open=["key5"], end_closed=["key6"]) with self.assertRaises(ValueError): self._make_one(all_=True, ranges=[range_1, range_2]) @@ -244,13 +231,13 @@ def test___eq___w_all_hit(self): self.assertEqual(keyset, other) def test___eq___w_all_miss(self): - keys = [[u"key1"], [u"key2"]] + keys = [["key1"], ["key2"]] keyset = self._make_one(all_=True) other = self._make_one(keys=keys) self.assertNotEqual(keyset, other) def test___eq___w_keys_hit(self): - keys = [[u"key1"], [u"key2"]] + keys = [["key1"], ["key2"]] keyset = self._make_one(keys=keys) other = self._make_one(keys=keys) @@ -258,7 +245,7 @@ def test___eq___w_keys_hit(self): self.assertEqual(keyset, other) def test___eq___w_keys_miss(self): - keys = [[u"key1"], [u"key2"]] + keys = [["key1"], ["key2"]] keyset = self._make_one(keys=keys[:1]) other = self._make_one(keys=keys[1:]) @@ -268,8 +255,8 @@ def test___eq___w_keys_miss(self): def test___eq___w_ranges_hit(self): from google.cloud.spanner_v1.keyset import KeyRange - range_1 = KeyRange(start_closed=[u"key1"], end_open=[u"key3"]) - range_2 = KeyRange(start_open=[u"key5"], end_closed=[u"key6"]) + range_1 = KeyRange(start_closed=["key1"], end_open=["key3"]) + range_2 = KeyRange(start_open=["key5"], end_closed=["key6"]) keyset = self._make_one(ranges=[range_1, range_2]) other = self._make_one(ranges=[range_1, range_2]) @@ -279,8 +266,8 @@ def test___eq___w_ranges_hit(self): def test___eq___w_ranges_miss(self): from google.cloud.spanner_v1.keyset import KeyRange - range_1 = KeyRange(start_closed=[u"key1"], end_open=[u"key3"]) - range_2 = KeyRange(start_open=[u"key5"], end_closed=[u"key6"]) + range_1 = KeyRange(start_closed=["key1"], end_open=["key3"]) + range_2 = KeyRange(start_open=["key5"], end_closed=["key6"]) keyset = self._make_one(ranges=[range_1]) other = self._make_one(ranges=[range_2]) @@ -288,43 +275,44 @@ def test___eq___w_ranges_miss(self): self.assertNotEqual(keyset, other) def test_to_pb_w_all(self): - from google.cloud.spanner_v1.proto.keys_pb2 import KeySet + from google.cloud.spanner_v1 import KeySetPB keyset = self._make_one(all_=True) result = keyset._to_pb() - self.assertIsInstance(result, KeySet) - self.assertTrue(result.all) + self.assertIsInstance(result, KeySetPB) + self.assertTrue(result.all_) self.assertEqual(len(result.keys), 0) self.assertEqual(len(result.ranges), 0) def test_to_pb_w_only_keys(self): - from google.cloud.spanner_v1.proto.keys_pb2 
import KeySet + from google.cloud.spanner_v1 import KeySetPB - KEYS = [[u"key1"], [u"key2"]] + KEYS = [["key1"], ["key2"]] keyset = self._make_one(keys=KEYS) result = keyset._to_pb() - self.assertIsInstance(result, KeySet) - self.assertFalse(result.all) + self.assertIsInstance(result, KeySetPB) + self.assertFalse(result.all_) self.assertEqual(len(result.keys), len(KEYS)) for found, expected in zip(result.keys, KEYS): self.assertEqual(len(found), len(expected)) - self.assertEqual(found.values[0].string_value, expected[0]) + self.assertEqual(found[0], expected[0]) self.assertEqual(len(result.ranges), 0) def test_to_pb_w_only_ranges(self): - from google.cloud.spanner_v1.proto.keys_pb2 import KeySet + from google.cloud.spanner_v1 import KeyRangePB + from google.cloud.spanner_v1 import KeySetPB from google.cloud.spanner_v1.keyset import KeyRange - KEY_1 = u"KEY_1" - KEY_2 = u"KEY_2" - KEY_3 = u"KEY_3" - KEY_4 = u"KEY_4" + KEY_1 = "KEY_1" + KEY_2 = "KEY_2" + KEY_3 = "KEY_3" + KEY_4 = "KEY_4" RANGES = [ KeyRange(start_open=KEY_1, end_closed=KEY_2), KeyRange(start_closed=KEY_3, end_open=KEY_4), @@ -333,13 +321,17 @@ def test_to_pb_w_only_ranges(self): result = keyset._to_pb() - self.assertIsInstance(result, KeySet) - self.assertFalse(result.all) + self.assertIsInstance(result, KeySetPB) + self.assertFalse(result.all_) self.assertEqual(len(result.keys), 0) self.assertEqual(len(result.ranges), len(RANGES)) - for found, expected in zip(result.ranges, RANGES): - self.assertEqual(found, expected._to_pb()) + expected_ranges = [ + KeyRangePB(start_open=KEY_1, end_closed=KEY_2), + KeyRangePB(start_closed=KEY_3, end_open=KEY_4), + ] + for found, expected in zip(result.ranges, expected_ranges): + self.assertEqual(found, expected) def test_to_dict_w_all(self): keyset = self._make_one(all_=True) @@ -347,7 +339,7 @@ def test_to_dict_w_all(self): self.assertEqual(keyset._to_dict(), expected) def test_to_dict_w_only_keys(self): - KEYS = [[u"key1"], [u"key2"]] + KEYS = [["key1"], ["key2"]] keyset = self._make_one(keys=KEYS) expected = {"keys": KEYS, "ranges": []} @@ -356,10 +348,10 @@ def test_to_dict_w_only_keys(self): def test_to_dict_w_only_ranges(self): from google.cloud.spanner_v1.keyset import KeyRange - key_1 = u"KEY_1" - key_2 = u"KEY_2" - key_3 = u"KEY_3" - key_4 = u"KEY_4" + key_1 = "KEY_1" + key_2 = "KEY_2" + key_3 = "KEY_3" + key_4 = "KEY_4" ranges = [ KeyRange(start_open=[key_1], end_closed=[key_2]), KeyRange(start_closed=[key_3], end_open=[key_4]), @@ -387,7 +379,7 @@ def test_from_dict_w_all(self): def test_from_dict_w_keys(self): klass = self._get_target_class() - keys = [[u"key1"], [u"key2"]] + keys = [["key1"], ["key2"]] mapping = {"keys": keys} keyset = klass._from_dict(mapping) @@ -400,10 +392,10 @@ def test_from_dict_w_ranges(self): from google.cloud.spanner_v1.keyset import KeyRange klass = self._get_target_class() - key_1 = u"KEY_1" - key_2 = u"KEY_2" - key_3 = u"KEY_3" - key_4 = u"KEY_4" + key_1 = "KEY_1" + key_2 = "KEY_2" + key_3 = "KEY_3" + key_4 = "KEY_4" mapping = { "ranges": [ {"start_open": [key_1], "end_closed": [key_2]}, diff --git a/tests/unit/test_metrics.py b/tests/unit/test_metrics.py new file mode 100644 index 0000000000..5e37e7cfe2 --- /dev/null +++ b/tests/unit/test_metrics.py @@ -0,0 +1,116 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
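The keyset changes above follow the proto-plus migration: ``KeyRange`` and
``KeySet`` protos now accept plain Python lists for key parts, so the old
``ListValue``/``Value`` wrapping disappears, and the reserved word ``all`` is
exposed as ``all_``. A condensed view of what the tests now assert, using the
private ``_to_pb`` helper the tests themselves exercise:

.. code-block:: python

    from google.cloud.spanner_v1 import KeyRangePB
    from google.cloud.spanner_v1.keyset import KeyRange, KeySet

    krange = KeyRange(start_closed=["key1"], end_open=["key3"])
    keyset = KeySet(ranges=[krange])

    pb = keyset._to_pb()
    assert not pb.all_  # proto-plus renames the reserved field `all` to `all_`
    assert pb.ranges[0] == KeyRangePB(start_closed=["key1"], end_open=["key3"])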
diff --git a/tests/unit/test_metrics.py b/tests/unit/test_metrics.py
new file mode 100644
index 0000000000..5e37e7cfe2
--- /dev/null
+++ b/tests/unit/test_metrics.py
@@ -0,0 +1,116 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import pytest
+from unittest.mock import MagicMock
+from google.api_core.exceptions import ServiceUnavailable
+from google.auth import exceptions
+from google.auth.credentials import Credentials
+
+from google.cloud.spanner_v1.client import Client
+from unittest.mock import patch
+from grpc._interceptor import _UnaryOutcome
+from google.cloud.spanner_v1.metrics.spanner_metrics_tracer_factory import (
+    SpannerMetricsTracerFactory,
+)
+from opentelemetry import metrics
+
+pytest.importorskip("opentelemetry")
+# Skip if semconv attributes are not present, as tracing won't be enabled either
+# pytest.importorskip("opentelemetry.semconv.attributes.otel_attributes")
+
+
+class TestCredentials(Credentials):
+    @property
+    def expired(self):
+        return False
+
+    @property
+    def valid(self):
+        return True
+
+    def refresh(self, request):
+        raise exceptions.InvalidOperation("Anonymous credentials cannot be refreshed.")
+
+    def apply(self, headers, token=None):
+        if token is not None:
+            raise exceptions.InvalidValue("Anonymous credentials don't support tokens.")
+
+    def before_request(self, request, method, url, headers):
+        """Anonymous credentials do nothing to the request."""
+
+
+@pytest.fixture(autouse=True)
+def patched_client(monkeypatch):
+    monkeypatch.setenv("SPANNER_ENABLE_BUILTIN_METRICS", "true")
+    metrics.set_meter_provider(metrics.NoOpMeterProvider())
+
+    # Remove the Tracer factory to avoid previously disabled factory polluting from other tests
+    if SpannerMetricsTracerFactory._metrics_tracer_factory is not None:
+        SpannerMetricsTracerFactory._metrics_tracer_factory = None
+
+    client = Client(
+        project="test",
+        credentials=TestCredentials(),
+        # client_options={"api_endpoint": "none"}
+    )
+    yield client
+
+    # Resetting
+    metrics.set_meter_provider(metrics.NoOpMeterProvider())
+    SpannerMetricsTracerFactory._metrics_tracer_factory = None
+    SpannerMetricsTracerFactory.current_metrics_tracer = None
+
+
+def test_metrics_emission_with_failure_attempt(patched_client):
+    instance = patched_client.instance("test-instance")
+    database = instance.database("example-db")
+    factory = SpannerMetricsTracerFactory()
+
+    assert factory.enabled
+
+    transport = database.spanner_api._transport
+    metrics_interceptor = transport._metrics_interceptor
+    original_intercept = metrics_interceptor.intercept
+    first_attempt = True
+
+    def mocked_raise(*args, **kwargs):
+        raise ServiceUnavailable("Service Unavailable")
+
+    def mocked_call(*args, **kwargs):
+        return _UnaryOutcome(MagicMock(), MagicMock())
+
+    def intercept_wrapper(invoked_method, request_or_iterator, call_details):
+        nonlocal first_attempt
+        invoked_method = mocked_call
+        if first_attempt:
+            first_attempt = False
+            invoked_method = mocked_raise
+        response = original_intercept(
+            invoked_method=invoked_method,
+            request_or_iterator=request_or_iterator,
+            call_details=call_details,
+        )
+        return response
+
+    metrics_interceptor.intercept = intercept_wrapper
+    patch_path = "google.cloud.spanner_v1.metrics.metrics_exporter.CloudMonitoringMetricsExporter.export"
+    with patch(patch_path):
+        with database.snapshot():
+            pass
+
+    # Verify that the attempt count increased from the failed initial attempt
+    assert (
+        SpannerMetricsTracerFactory.current_metrics_tracer.current_op.attempt_count
+    ) == 2
diff --git a/tests/unit/test_metrics_capture.py b/tests/unit/test_metrics_capture.py
new file mode 100644
index 0000000000..107e9daeb4
--- /dev/null
+++ b/tests/unit/test_metrics_capture.py
@@ -0,0 +1,50 @@
+# Copyright 2025 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import pytest
+from unittest import mock
+from google.cloud.spanner_v1.metrics.metrics_capture import MetricsCapture
+from google.cloud.spanner_v1.metrics.metrics_tracer_factory import MetricsTracerFactory
+from google.cloud.spanner_v1.metrics.spanner_metrics_tracer_factory import (
+    SpannerMetricsTracerFactory,
+)
+
+
+@pytest.fixture
+def mock_tracer_factory():
+    SpannerMetricsTracerFactory(enabled=True)
+    with mock.patch.object(
+        MetricsTracerFactory, "create_metrics_tracer"
+    ) as mock_create:
+        yield mock_create
+
+
+def test_metrics_capture_enter(mock_tracer_factory):
+    mock_tracer = mock.Mock()
+    mock_tracer_factory.return_value = mock_tracer
+
+    with MetricsCapture() as capture:
+        assert capture is not None
+        mock_tracer_factory.assert_called_once()
+        mock_tracer.record_operation_start.assert_called_once()
+
+
+def test_metrics_capture_exit(mock_tracer_factory):
+    mock_tracer = mock.Mock()
+    mock_tracer_factory.return_value = mock_tracer
+
+    with MetricsCapture():
+        pass
+
+    mock_tracer.record_operation_completion.assert_called_once()
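``MetricsCapture`` above is a thin context manager: entering creates a tracer
from the factory and records the operation start, and exiting records the
operation completion. A sketch of the intended usage, mirroring the fixtures
in the tests; ``do_spanner_work`` is a placeholder for whatever instrumented
call the caller makes:

.. code-block:: python

    from google.cloud.spanner_v1.metrics.metrics_capture import MetricsCapture
    from google.cloud.spanner_v1.metrics.spanner_metrics_tracer_factory import (
        SpannerMetricsTracerFactory,
    )

    # Initialize the factory, as the test fixture does.
    SpannerMetricsTracerFactory(enabled=True)

    with MetricsCapture():
        do_spanner_work()  # placeholder for the instrumented operation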
+ +import unittest +from unittest.mock import patch, MagicMock, Mock + +from google.auth.credentials import AnonymousCredentials + +from google.cloud.spanner_v1.metrics.metrics_exporter import ( + CloudMonitoringMetricsExporter, + _normalize_label_key, +) +from google.api.metric_pb2 import MetricDescriptor +from opentelemetry.sdk.metrics import MeterProvider +from opentelemetry.sdk.metrics.export import ( + InMemoryMetricReader, + Sum, + Gauge, + Histogram, + NumberDataPoint, + HistogramDataPoint, + AggregationTemporality, +) +from google.cloud.spanner_v1.metrics.constants import METRIC_NAME_OPERATION_COUNT + +from tests._helpers import ( + HAS_OPENTELEMETRY_INSTALLED, +) + + +# Test Constants +PROJECT_ID = "fake-project-id" +INSTANCE_ID = "fake-instance-id" +DATABASE_ID = "fake-database-id" +SCOPE_NAME = "gax-python" + +# Skip tests if opentelemetry is not installed +if HAS_OPENTELEMETRY_INSTALLED: + + class TestMetricsExporter(unittest.TestCase): + @classmethod + def setUpClass(cls): + cls.metric_attributes = { + "project_id": PROJECT_ID, + "instance_id": INSTANCE_ID, + "instance_config": "test_config", + "location": "test_location", + "client_hash": "test_hash", + "client_uid": "test_uid", + "client_name": "test_name", + "database": DATABASE_ID, + "method": "test_method", + "status": "test_status", + "directpath_enabled": "true", + "directpath_used": "false", + "other": "ignored", + } + + def setUp(self): + self.metric_reader = InMemoryMetricReader() + self.provider = MeterProvider(metric_readers=[self.metric_reader]) + self.meter = self.provider.get_meter(SCOPE_NAME) + self.operation_count = self.meter.create_counter( + name=METRIC_NAME_OPERATION_COUNT, + description="A test counter", + unit="counts", + ) + + def test_normalize_label_key(self): + """Test label key normalization""" + test_cases = [ + ("simple", "simple"), + ("with space", "with_space"), + ("with-dash", "with_dash"), + ("123_number_prefix", "key_123_number_prefix"), + ("special!characters@", "special_characters_"), + ] + + for input_key, expected_output in test_cases: + self.assertEqual(_normalize_label_key(input_key), expected_output) + + def test_to_metric_kind(self): + """Test conversion of different metric types to GCM metric kinds""" + # Test monotonic Sum returns CUMULATIVE + metric_sum = Mock( + data=Sum( + data_points=[], + aggregation_temporality=AggregationTemporality.UNSPECIFIED, + is_monotonic=True, + ) + ) + self.assertEqual( + CloudMonitoringMetricsExporter._to_metric_kind(metric_sum), + MetricDescriptor.MetricKind.CUMULATIVE, + ) + + # Test non-monotonic Sum returns GAUGE + metric_sum_non_monotonic = Mock( + data=Sum( + data_points=[], + aggregation_temporality=AggregationTemporality.UNSPECIFIED, + is_monotonic=False, + ) + ) + self.assertEqual( + CloudMonitoringMetricsExporter._to_metric_kind( + metric_sum_non_monotonic + ), + MetricDescriptor.MetricKind.GAUGE, + ) + + # Test Gauge returns GAUGE + metric_gauge = Mock(data=Gauge(data_points=[])) + self.assertEqual( + CloudMonitoringMetricsExporter._to_metric_kind(metric_gauge), + MetricDescriptor.MetricKind.GAUGE, + ) + + # Test Histogram returns CUMULATIVE + metric_histogram = Mock( + data=Histogram( + data_points=[], + aggregation_temporality=AggregationTemporality.UNSPECIFIED, + ) + ) + self.assertEqual( + CloudMonitoringMetricsExporter._to_metric_kind(metric_histogram), + MetricDescriptor.MetricKind.CUMULATIVE, + ) + + # Test Unknown data type warns + metric_unknown = Mock(data=Mock()) + with self.assertLogs( + 
"google.cloud.spanner_v1.metrics.metrics_exporter", level="WARNING" + ) as log: + self.assertIsNone( + CloudMonitoringMetricsExporter._to_metric_kind(metric_unknown) + ) + self.assertIn( + "WARNING:google.cloud.spanner_v1.metrics.metrics_exporter:Unsupported metric data type Mock, ignoring it", + log.output, + ) + + def test_extract_metric_labels(self): + """Test extraction of metric and resource labels""" + import time + + data_point = NumberDataPoint( + attributes={ + # Metric labels + "client_uid": "test-client-uid", + "client_name": "test-client-name", + "database": "test-db", + "method": "test-method", + "status": "test-status", + "directpath_enabled": "test-directpath-enabled", + "directpath_used": "test-directpath-used", + # Monitored Resource label + "project_id": "test-project-id", + "instance_id": "test-instance-id", + "instance_config": "test-instance-config", + "location": "test-location", + "client_hash": "test-client-hash", + # All other labels ignored + "unknown": "ignored", + "Client_UID": "ignored", + }, + start_time_unix_nano=time.time_ns(), + time_unix_nano=time.time_ns(), + value=0, + ) + + ( + metric_labels, + resource_labels, + ) = CloudMonitoringMetricsExporter._extract_metric_labels(data_point) + + # Verify that the attributes are properly distributed and reassigned + + ## Metric Labels + self.assertIn("client_uid", metric_labels) + self.assertEqual(metric_labels["client_uid"], "test-client-uid") + self.assertIn("client_name", metric_labels) + self.assertEqual(metric_labels["client_name"], "test-client-name") + self.assertIn("database", metric_labels) + self.assertEqual(metric_labels["database"], "test-db") + self.assertIn("method", metric_labels) + self.assertEqual(metric_labels["method"], "test-method") + self.assertIn("status", metric_labels) + self.assertEqual(metric_labels["status"], "test-status") + self.assertIn("directpath_enabled", metric_labels) + self.assertEqual( + metric_labels["directpath_enabled"], "test-directpath-enabled" + ) + self.assertIn("directpath_used", metric_labels) + self.assertEqual(metric_labels["directpath_used"], "test-directpath-used") + + ## Metric Resource Labels + self.assertIn("project_id", resource_labels) + self.assertEqual(resource_labels["project_id"], "test-project-id") + self.assertIn("instance_id", resource_labels) + self.assertEqual(resource_labels["instance_id"], "test-instance-id") + self.assertIn("instance_config", resource_labels) + self.assertEqual(resource_labels["instance_config"], "test-instance-config") + self.assertIn("location", resource_labels) + self.assertEqual(resource_labels["location"], "test-location") + self.assertIn("client_hash", resource_labels) + self.assertEqual(resource_labels["client_hash"], "test-client-hash") + + # Other attributes are ignored + self.assertNotIn("unknown", metric_labels) + self.assertNotIn("unknown", resource_labels) + ## including case sensitive keys + self.assertNotIn("Client_UID", metric_labels) + self.assertNotIn("Client_UID", resource_labels) + + def test_metric_timeseries_conversion(self): + """Test to verify conversion from OTEL Metrics to GCM Time Series.""" + # Add metrics + self.operation_count.add(1, attributes=self.metric_attributes) + self.operation_count.add(2, attributes=self.metric_attributes) + + # Export metrics + metrics = self.metric_reader.get_metrics_data() + self.assertTrue(metrics is not None) + + exporter = CloudMonitoringMetricsExporter( + PROJECT_ID, credentials=AnonymousCredentials() + ) + timeseries = 
exporter._resource_metrics_to_timeseries_pb(metrics) + + # Both counter values should be summed together + self.assertEqual(len(timeseries), 1) + self.assertEqual(timeseries[0].points.pop(0).value.int64_value, 3) + + def test_metric_timeseries_scope_filtering(self): + """Test to verify that metrics without the `gax-python` scope are filtered out.""" + # Create metric instruments + meter = self.provider.get_meter("WRONG_SCOPE") + counter = meter.create_counter( + name="operation_latencies", description="A test counter", unit="ms" + ) + + # Add metrics + counter.add(1, attributes=self.metric_attributes) + counter.add(2, attributes=self.metric_attributes) + + # Export metrics + metrics = self.metric_reader.get_metrics_data() + exporter = CloudMonitoringMetricsExporter( + PROJECT_ID, credentials=AnonymousCredentials() + ) + timeseries = exporter._resource_metrics_to_timeseries_pb(metrics) + + # Metrics with an incorrect scope should be filtered out + self.assertEqual(len(timeseries), 0) + + def test_batch_write(self): + """Verify that writes happen in batches of 200""" + from google.protobuf.timestamp_pb2 import Timestamp + from google.cloud.monitoring_v3 import MetricServiceClient + from google.api.monitored_resource_pb2 import MonitoredResource + from google.api.metric_pb2 import Metric as GMetric + import random + from google.cloud.monitoring_v3 import ( + TimeSeries, + Point, + TimeInterval, + TypedValue, + ) + + mockClient = MagicMock(spec=MetricServiceClient) + mockClient.create_service_time_series = Mock(return_value=None) + exporter = CloudMonitoringMetricsExporter(PROJECT_ID, mockClient) + + # Create timestamps for the time series + start_time = Timestamp() + start_time.FromSeconds(1234567890) + end_time = Timestamp() + end_time.FromSeconds(1234567900) + + # Create test time series + timeseries = [] + for i in range(400): + timeseries.append( + TimeSeries( + metric=GMetric( + type=f"custom.googleapis.com/spanner/test_metric_{i}", + labels={"client_uid": "test-client", "database": "test-db"}, + ), + resource=MonitoredResource( + type="spanner_instance", + labels={ + "project_id": PROJECT_ID, + "instance_id": INSTANCE_ID, + "location": "test-location", + }, + ), + metric_kind=MetricDescriptor.MetricKind.CUMULATIVE, + points=[ + Point( + interval=TimeInterval( + start_time=start_time, end_time=end_time + ), + value=TypedValue(int64_value=random.randint(1, 100)), + ) + ], + ), + ) + + # Define a side effect to extract time series data passed to the mocked CreateTimeSeriesRequest + tsr_timeseries = [] + + def create_tsr_side_effect(name, time_series): + nonlocal tsr_timeseries + tsr_timeseries = time_series + + patch_path = "google.cloud.spanner_v1.metrics.metrics_exporter.CreateTimeSeriesRequest" + with patch(patch_path, side_effect=create_tsr_side_effect): + exporter._batch_write(timeseries, 10000) + # Verify that the Create Time Series calls happen in batches of at most 200 elements + self.assertTrue(len(tsr_timeseries) > 0 and len(tsr_timeseries) <= 200) + + # Verify the mock was called with the correct arguments + self.assertEqual(len(mockClient.create_service_time_series.mock_calls), 2) + + @patch( + "google.cloud.spanner_v1.metrics.metrics_exporter.HAS_OPENTELEMETRY_INSTALLED", + False, + ) + def test_export_early_exit_if_extras_not_installed(self): + """Verify that export will exit early and return None if the OpenTelemetry and/or Google Cloud Monitoring extra modules are not installed.""" + # Suppress expected warning log + with self.assertLogs( + "google.cloud.spanner_v1.metrics.metrics_exporter", 
level="WARNING" + ) as log: + exporter = CloudMonitoringMetricsExporter( + PROJECT_ID, credentials=AnonymousCredentials() + ) + self.assertFalse(exporter.export([])) + self.assertIn( + "WARNING:google.cloud.spanner_v1.metrics.metrics_exporter:Metric exporter called without dependencies installed.", + log.output, + ) + + def test_export(self): + """Verify that the export call will convert and send the requests out.""" + # Create metric instruments + meter = self.provider.get_meter("gax-python") + counter = meter.create_counter( + name="attempt_count", description="A test counter", unit="count" + ) + latency = meter.create_counter( + name="attempt_latencies", description="test latencies", unit="ms" + ) + + # Add metrics + counter.add(10, attributes=self.metric_attributes) + counter.add(25, attributes=self.metric_attributes) + latency.add(30, attributes=self.metric_attributes) + latency.add(45, attributes=self.metric_attributes) + + # Export metrics + metrics = self.metric_reader.get_metrics_data() + mock_client = Mock() + exporter = CloudMonitoringMetricsExporter(PROJECT_ID, mock_client) + patch_path = "google.cloud.spanner_v1.metrics.metrics_exporter.CloudMonitoringMetricsExporter._batch_write" + with patch(patch_path) as mock_batch_write: + exporter.export(metrics) + + # Verify metrics passed to be sent to Google Cloud Monitoring + mock_batch_write.assert_called_once() + batch_args, _ = mock_batch_write.call_args + timeseries = batch_args[0] + self.assertEqual(len(timeseries), 2) + + def test_force_flush(self): + """Verify that the unimplemented force flush can be called.""" + exporter = CloudMonitoringMetricsExporter( + PROJECT_ID, credentials=AnonymousCredentials() + ) + self.assertTrue(exporter.force_flush()) + + def test_shutdown(self): + """Verify that the unimplemented shutdown can be called.""" + exporter = CloudMonitoringMetricsExporter( + project_id="test", credentials=AnonymousCredentials() + ) + try: + exporter.shutdown() + except Exception as e: + self.fail(f"Shutdown() raised an exception: {e}") + + def test_data_point_to_timeseries_early_exit(self): + """Early exit function if an unknown metric name is supplied.""" + metric = Mock(name="TestMetricName") + self.assertIsNone( + CloudMonitoringMetricsExporter._data_point_to_timeseries_pb( + None, metric, None, None + ) + ) + + @patch( + "google.cloud.spanner_v1.metrics.metrics_exporter.CloudMonitoringMetricsExporter._data_point_to_timeseries_pb" + ) + def test_metrics_to_time_series_empty_input( + self, mocked_data_point_to_timeseries_pb + ): + """Verify that metric entries with no timeseries data do not return a time series entry.""" + exporter = CloudMonitoringMetricsExporter( + project_id="test", credentials=AnonymousCredentials() + ) + data_point = Mock() + metric = Mock(data_points=[data_point]) + scope_metric = Mock( + metrics=[metric], scope=Mock(name="operation_latencies") + ) + resource_metric = Mock(scope_metrics=[scope_metric]) + metrics_data = Mock(resource_metrics=[resource_metric]) + + exporter._resource_metrics_to_timeseries_pb(metrics_data) + + def test_to_point(self): + """Verify conversion of datapoints.""" + exporter = CloudMonitoringMetricsExporter( + project_id="test", credentials=AnonymousCredentials() + ) + + number_point = NumberDataPoint( + attributes=[], start_time_unix_nano=0, time_unix_nano=0, value=9 + ) + + # Test that provided int number point values are set to the converted int data point + converted_num_point = exporter._to_point( + MetricDescriptor.MetricKind.CUMULATIVE, number_point + ) + + 
self.assertEqual(converted_num_point.value.int64_value, 9) + + # Test that provided float number point values are set to the converted double data point + float_number_point = NumberDataPoint( + attributes=[], start_time_unix_nano=0, time_unix_nano=0, value=12.20 + ) + converted_float_num_point = exporter._to_point( + MetricDescriptor.MetricKind.CUMULATIVE, float_number_point + ) + self.assertEqual(converted_float_num_point.value.double_value, 12.20) + + hist_point = HistogramDataPoint( + attributes=[], + start_time_unix_nano=123, + time_unix_nano=456, + count=1, + sum=2, + bucket_counts=[3], + explicit_bounds=[4], + min=5.0, + max=6.0, + ) + + # Test that provided histogram point values are set to the converted data point + converted_hist_point = exporter._to_point( + MetricDescriptor.MetricKind.CUMULATIVE, hist_point + ) + self.assertEqual(converted_hist_point.value.distribution_value.count, 1) + self.assertEqual(converted_hist_point.value.distribution_value.mean, 2) + + hist_point_missing_count = HistogramDataPoint( + attributes=[], + start_time_unix_nano=123, + time_unix_nano=456, + count=None, + sum=2, + bucket_counts=[3], + explicit_bounds=[4], + min=5.0, + max=6.0, + ) + + # Test that histogram points missing a count value have the mean defaulted to 0 + # and that non-cumulative / delta kinds default to a single-timepoint interval + converted_hist_point_no_count = exporter._to_point( + MetricDescriptor.MetricKind.METRIC_KIND_UNSPECIFIED, + hist_point_missing_count, + ) + self.assertEqual( + converted_hist_point_no_count.value.distribution_value.mean, 0 + ) + self.assertIsNone(converted_hist_point_no_count.interval.start_time) + self.assertIsNotNone(converted_hist_point_no_count.interval.end_time) diff --git a/tests/unit/test_metrics_interceptor.py b/tests/unit/test_metrics_interceptor.py new file mode 100644 index 0000000000..e32003537f --- /dev/null +++ b/tests/unit/test_metrics_interceptor.py @@ -0,0 +1,128 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
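+# Rough sketch of the behaviour exercised below (inferred from these tests): the interceptor reads the "google-cloud-resource-prefix" entry from the call metadata, e.g. +# +# path = "projects/my_project/instances/my_instance/databases/my_database" +# interceptor._parse_resource_path(path) +# # -> {"project": "my_project", "instance": "my_instance", "database": "my_database"} +# +# then stamps those values on SpannerMetricsTracerFactory.current_metrics_tracer and records attempt start/completion around the invoked gRPC method.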
+ +import pytest +from google.cloud.spanner_v1.metrics.metrics_interceptor import MetricsInterceptor +from google.cloud.spanner_v1.metrics.spanner_metrics_tracer_factory import ( + SpannerMetricsTracerFactory, +) +from unittest.mock import MagicMock + + +@pytest.fixture +def interceptor(): + SpannerMetricsTracerFactory(enabled=True) + return MetricsInterceptor() + + +def test_parse_resource_path_valid(interceptor): + path = "projects/my_project/instances/my_instance/databases/my_database" + expected = { + "project": "my_project", + "instance": "my_instance", + "database": "my_database", + } + assert interceptor._parse_resource_path(path) == expected + + +def test_parse_resource_path_invalid(interceptor): + path = "invalid/path" + expected = {} + assert interceptor._parse_resource_path(path) == expected + + +def test_extract_resource_from_path(interceptor): + metadata = [ + ( + "google-cloud-resource-prefix", + "projects/my_project/instances/my_instance/databases/my_database", + ) + ] + expected = { + "project": "my_project", + "instance": "my_instance", + "database": "my_database", + } + assert interceptor._extract_resource_from_path(metadata) == expected + + +def test_set_metrics_tracer_attributes(interceptor): + SpannerMetricsTracerFactory.current_metrics_tracer = MockMetricTracer() + resources = { + "project": "my_project", + "instance": "my_instance", + "database": "my_database", + } + + interceptor._set_metrics_tracer_attributes(resources) + assert SpannerMetricsTracerFactory.current_metrics_tracer.project == "my_project" + assert SpannerMetricsTracerFactory.current_metrics_tracer.instance == "my_instance" + assert SpannerMetricsTracerFactory.current_metrics_tracer.database == "my_database" + + +def test_intercept_with_tracer(interceptor): + SpannerMetricsTracerFactory.current_metrics_tracer = MockMetricTracer() + SpannerMetricsTracerFactory.current_metrics_tracer.record_attempt_start = ( + MagicMock() + ) + SpannerMetricsTracerFactory.current_metrics_tracer.record_attempt_completion = ( + MagicMock() + ) + SpannerMetricsTracerFactory.current_metrics_tracer.gfe_enabled = False + + invoked_response = MagicMock() + invoked_response.initial_metadata.return_value = {} + + mock_invoked_method = MagicMock(return_value=invoked_response) + call_details = MagicMock( + method="spanner.someMethod", + metadata=[ + ( + "google-cloud-resource-prefix", + "projects/my_project/instances/my_instance/databases/my_database", + ) + ], + ) + + response = interceptor.intercept(mock_invoked_method, "request", call_details) + assert response == invoked_response + SpannerMetricsTracerFactory.current_metrics_tracer.record_attempt_start.assert_called_once() + SpannerMetricsTracerFactory.current_metrics_tracer.record_attempt_completion.assert_called_once() + mock_invoked_method.assert_called_once_with("request", call_details) + + +class MockMetricTracer: + def __init__(self): + self.project = None + self.instance = None + self.database = None + self.method = None + + def set_project(self, project): + self.project = project + + def set_instance(self, instance): + self.instance = instance + + def set_database(self, database): + self.database = database + + def set_method(self, method): + self.method = method + + def record_attempt_start(self): + pass + + def record_attempt_completion(self): + pass diff --git a/tests/unit/test_metrics_tracer.py b/tests/unit/test_metrics_tracer.py new file mode 100644 index 0000000000..70491ef5b2 --- /dev/null +++ b/tests/unit/test_metrics_tracer.py @@ -0,0 +1,265 @@ +# Copyright 
2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import pytest + +from google.cloud.spanner_v1.metrics.metrics_tracer import MetricsTracer, MetricOpTracer +import mock +from opentelemetry.metrics import Counter, Histogram +from datetime import datetime + +pytest.importorskip("opentelemetry") + + +@pytest.fixture +def metrics_tracer(): + mock_attempt_counter = mock.create_autospec(Counter, instance=True) + mock_attempt_latency = mock.create_autospec(Histogram, instance=True) + mock_operation_counter = mock.create_autospec(Counter, instance=True) + mock_operation_latency = mock.create_autospec(Histogram, instance=True) + return MetricsTracer( + enabled=True, + instrument_attempt_latency=mock_attempt_latency, + instrument_attempt_counter=mock_attempt_counter, + instrument_operation_latency=mock_operation_latency, + instrument_operation_counter=mock_operation_counter, + client_attributes={"project_id": "test_project"}, + ) + + +def test_record_attempt_start(metrics_tracer): + metrics_tracer.record_attempt_start() + assert metrics_tracer.current_op.current_attempt is not None + assert metrics_tracer.current_op.current_attempt.start_time is not None + assert metrics_tracer.current_op.attempt_count == 1 + + +def test_record_operation_start(metrics_tracer): + metrics_tracer.record_operation_start() + assert metrics_tracer.current_op.start_time is not None + + +def test_record_attempt_completion(metrics_tracer): + metrics_tracer.record_attempt_start() + metrics_tracer.record_attempt_completion() + assert metrics_tracer.current_op.current_attempt.status == "OK" + + +def test_record_operation_completion(metrics_tracer): + metrics_tracer.record_operation_start() + metrics_tracer.record_attempt_start() + metrics_tracer.record_attempt_completion() + metrics_tracer.record_operation_completion() + assert metrics_tracer.instrument_attempt_counter.add.call_count == 1 + assert metrics_tracer.instrument_attempt_latency.record.call_count == 1 + assert metrics_tracer.instrument_operation_latency.record.call_count == 1 + assert metrics_tracer.instrument_operation_counter.add.call_count == 1 + + +def test_attempt_otel_attributes(metrics_tracer): + from google.cloud.spanner_v1.metrics.constants import ( + METRIC_LABEL_KEY_DIRECT_PATH_USED, + ) + + metrics_tracer.current_op._current_attempt = None + attributes = metrics_tracer._create_attempt_otel_attributes() + assert METRIC_LABEL_KEY_DIRECT_PATH_USED not in attributes + + +def test_disabled(metrics_tracer): + mock_operation = mock.create_autospec(MetricOpTracer, instance=True) + metrics_tracer.enabled = False + metrics_tracer._current_op = mock_operation + + # Attempt start should be skipped + metrics_tracer.record_attempt_start() + assert mock_operation.new_attempt.call_count == 0 + + # Attempt completion should also be skipped + metrics_tracer.record_attempt_completion() + assert metrics_tracer.instrument_attempt_latency.record.call_count == 0 + + # Operation start should be skipped + metrics_tracer.record_operation_start() + assert 
mock_operation.start.call_count == 0 + + # Operation completion should also skip all metric logic + metrics_tracer.record_operation_completion() + assert metrics_tracer.instrument_attempt_counter.add.call_count == 0 + assert metrics_tracer.instrument_operation_latency.record.call_count == 0 + assert metrics_tracer.instrument_operation_counter.add.call_count == 0 + assert not metrics_tracer._create_operation_otel_attributes() + assert not metrics_tracer._create_attempt_otel_attributes() + + +def test_get_ms_time_diff(): + # Create two datetime objects + start_time = datetime(2025, 1, 1, 12, 0, 0) + end_time = datetime(2025, 1, 1, 12, 0, 1) # 1 second later + + # Calculate expected milliseconds difference + expected_diff = 1000.0 # 1 second in milliseconds + + # Call the static method + actual_diff = MetricsTracer._get_ms_time_diff(start_time, end_time) + + # Assert the expected and actual values are equal + assert actual_diff == expected_diff + + +def test_get_ms_time_diff_negative(): + # Create two datetime objects where end is before start + start_time = datetime(2025, 1, 1, 12, 0, 1) + end_time = datetime(2025, 1, 1, 12, 0, 0) # 1 second earlier + + # Calculate expected milliseconds difference + expected_diff = -1000.0 # -1 second in milliseconds + + # Call the static method + actual_diff = MetricsTracer._get_ms_time_diff(start_time, end_time) + + # Assert the expected and actual values are equal + assert actual_diff == expected_diff + + +def test_set_project(metrics_tracer): + metrics_tracer.set_project("test_project") + assert metrics_tracer.client_attributes["project_id"] == "test_project" + + # Ensure it does not overwrite + metrics_tracer.set_project("new_project") + assert metrics_tracer.client_attributes["project_id"] == "test_project" + + +def test_set_instance(metrics_tracer): + metrics_tracer.set_instance("test_instance") + assert metrics_tracer.client_attributes["instance_id"] == "test_instance" + + # Ensure it does not overwrite + metrics_tracer.set_instance("new_instance") + assert metrics_tracer.client_attributes["instance_id"] == "test_instance" + + +def test_set_instance_config(metrics_tracer): + metrics_tracer.set_instance_config("test_config") + assert metrics_tracer.client_attributes["instance_config"] == "test_config" + + # Ensure it does not overwrite + metrics_tracer.set_instance_config("new_config") + assert metrics_tracer.client_attributes["instance_config"] == "test_config" + + +def test_set_location(metrics_tracer): + metrics_tracer.set_location("test_location") + assert metrics_tracer.client_attributes["location"] == "test_location" + + # Ensure it does not overwrite + metrics_tracer.set_location("new_location") + assert metrics_tracer.client_attributes["location"] == "test_location" + + +def test_set_client_hash(metrics_tracer): + metrics_tracer.set_client_hash("test_hash") + assert metrics_tracer.client_attributes["client_hash"] == "test_hash" + + # Ensure it does not overwrite + metrics_tracer.set_client_hash("new_hash") + assert metrics_tracer.client_attributes["client_hash"] == "test_hash" + + +def test_set_client_uid(metrics_tracer): + metrics_tracer.set_client_uid("test_uid") + assert metrics_tracer.client_attributes["client_uid"] == "test_uid" + + # Ensure it does not overwrite + metrics_tracer.set_client_uid("new_uid") + assert metrics_tracer.client_attributes["client_uid"] == "test_uid" + + +def test_set_client_name(metrics_tracer): + metrics_tracer.set_client_name("test_name") + assert metrics_tracer.client_attributes["client_name"] == "test_name" + 
+ # Ensure it does not overwrite + metrics_tracer.set_client_name("new_name") + assert metrics_tracer.client_attributes["client_name"] == "test_name" + + +def test_set_database(metrics_tracer): + metrics_tracer.set_database("test_db") + assert metrics_tracer.client_attributes["database"] == "test_db" + + # Ensure it does not overwrite + metrics_tracer.set_database("new_db") + assert metrics_tracer.client_attributes["database"] == "test_db" + + +def test_enable_direct_path(metrics_tracer): + metrics_tracer.enable_direct_path(True) + assert metrics_tracer.client_attributes["directpath_enabled"] == "True" + + # Ensure it does not overwrite + metrics_tracer.enable_direct_path(False) + assert metrics_tracer.client_attributes["directpath_enabled"] == "True" + + +def test_set_method(metrics_tracer): + metrics_tracer.set_method("test_method") + assert metrics_tracer.client_attributes["method"] == "test_method" + + # Ensure it does not overwrite + metrics_tracer.set_method("new_method") + assert metrics_tracer.client_attributes["method"] == "test_method" + + +def test_record_gfe_latency(metrics_tracer): + mock_gfe_latency = mock.create_autospec(Histogram, instance=True) + metrics_tracer._instrument_gfe_latency = mock_gfe_latency + metrics_tracer.gfe_enabled = True # Ensure GFE is enabled + + # Test when tracing is enabled + metrics_tracer.record_gfe_latency(100) + assert mock_gfe_latency.record.call_count == 1 + assert mock_gfe_latency.record.call_args[1]["amount"] == 100 + assert ( + mock_gfe_latency.record.call_args[1]["attributes"] + == metrics_tracer.client_attributes + ) + + # Test when tracing is disabled + metrics_tracer.enabled = False + metrics_tracer.record_gfe_latency(200) + assert mock_gfe_latency.record.call_count == 1 # Should not increment + metrics_tracer.enabled = True # Reset for next test + + +def test_record_gfe_missing_header_count(metrics_tracer): + mock_gfe_missing_header_count = mock.create_autospec(Counter, instance=True) + metrics_tracer._instrument_gfe_missing_header_count = mock_gfe_missing_header_count + metrics_tracer.gfe_enabled = True # Ensure GFE is enabled + + # Test when tracing is enabled + metrics_tracer.record_gfe_missing_header_count() + assert mock_gfe_missing_header_count.add.call_count == 1 + assert mock_gfe_missing_header_count.add.call_args[1]["amount"] == 1 + assert ( + mock_gfe_missing_header_count.add.call_args[1]["attributes"] + == metrics_tracer.client_attributes + ) + + # Test when tracing is disabled + metrics_tracer.enabled = False + metrics_tracer.record_gfe_missing_header_count() + assert mock_gfe_missing_header_count.add.call_count == 1 # Should not increment + metrics_tracer.enabled = True # Reset for next test diff --git a/tests/unit/test_metrics_tracer_factory.py b/tests/unit/test_metrics_tracer_factory.py new file mode 100644 index 0000000000..64fb4d83d1 --- /dev/null +++ b/tests/unit/test_metrics_tracer_factory.py @@ -0,0 +1,58 @@ +# Copyright 2025 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
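+# Rough sketch of the fluent factory API exercised below (inferred from these tests): each setter returns the factory, so client attributes can be chained before creating tracers, e.g. +# +# factory = MetricsTracerFactory(enabled=True, service_name="test_service") +# factory.set_project("test_project").set_instance("test_instance").set_database("test_db") +# tracer = factory.create_metrics_tracer() # a MetricsTracer carrying those attributes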
+ +import pytest + +from google.cloud.spanner_v1.metrics.metrics_tracer_factory import MetricsTracerFactory +from google.cloud.spanner_v1.metrics.metrics_tracer import MetricsTracer + +pytest.importorskip("opentelemetry") + + +@pytest.fixture +def metrics_tracer_factory(): + factory = MetricsTracerFactory( + enabled=True, + service_name="test_service", + ) + factory.set_project("test_project").set_instance( + "test_instance" + ).set_instance_config("test_config").set_location("test_location").set_client_hash( + "test_hash" + ).set_client_uid( + "test_uid" + ).set_client_name( + "test_name" + ).set_database( + "test_db" + ).enable_direct_path( + False + ) + return factory + + +def test_initialization(metrics_tracer_factory): + assert metrics_tracer_factory.enabled is True + assert metrics_tracer_factory.client_attributes["project_id"] == "test_project" + + +def test_create_metrics_tracer(metrics_tracer_factory): + tracer = metrics_tracer_factory.create_metrics_tracer() + assert isinstance(tracer, MetricsTracer) + + +def test_client_attributes(metrics_tracer_factory): + attributes = metrics_tracer_factory.client_attributes + assert attributes["project_id"] == "test_project" + assert attributes["instance_id"] == "test_instance" diff --git a/tests/unit/test_packaging.py b/tests/unit/test_packaging.py new file mode 100644 index 0000000000..998a02ac2d --- /dev/null +++ b/tests/unit/test_packaging.py @@ -0,0 +1,37 @@ +# Copyright 2022 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os +import subprocess +import sys + + +def test_namespace_package_compat(tmp_path): + # The ``google`` namespace package should not be masked + # by the presence of ``google-cloud-spanner``. + google = tmp_path / "google" + google.mkdir() + google.joinpath("othermod.py").write_text("") + env = dict(os.environ, PYTHONPATH=str(tmp_path)) + cmd = [sys.executable, "-m", "google.othermod"] + subprocess.check_call(cmd, env=env) + + # The ``google.cloud`` namespace package should not be masked + # by the presence of ``google-cloud-spanner``. 
+ google_cloud = tmp_path / "google" / "cloud" + google_cloud.mkdir() + google_cloud.joinpath("othermod.py").write_text("") + env = dict(os.environ, PYTHONPATH=str(tmp_path)) + cmd = [sys.executable, "-m", "google.cloud.othermod"] + subprocess.check_call(cmd, env=env) diff --git a/tests/unit/test_param_types.py b/tests/unit/test_param_types.py index cb1c548af9..1b0660614a 100644 --- a/tests/unit/test_param_types.py +++ b/tests/unit/test_param_types.py @@ -18,11 +18,10 @@ class Test_ArrayParamType(unittest.TestCase): def test_it(self): - from google.cloud.spanner_v1.proto import type_pb2 - from google.cloud.spanner_v1 import param_types + from google.cloud.spanner_v1 import Type, TypeCode, param_types - expected = type_pb2.Type( - code=type_pb2.ARRAY, array_element_type=type_pb2.Type(code=type_pb2.INT64) + expected = Type( + code=TypeCode.ARRAY, array_element_type=Type(code=TypeCode.INT64) ) found = param_types.Array(param_types.INT64) @@ -32,26 +31,93 @@ def test_it(self): class Test_Struct(unittest.TestCase): def test_it(self): - from google.cloud.spanner_v1.proto import type_pb2 - from google.cloud.spanner_v1 import param_types + from google.cloud.spanner_v1 import StructType, Type, TypeCode, param_types - struct_type = type_pb2.StructType( + struct_type = StructType( fields=[ - type_pb2.StructType.Field( - name="name", type=type_pb2.Type(code=type_pb2.STRING) - ), - type_pb2.StructType.Field( - name="count", type=type_pb2.Type(code=type_pb2.INT64) - ), + StructType.Field(name="name", type_=Type(code=TypeCode.STRING)), + StructType.Field(name="count", type_=Type(code=TypeCode.INT64)), + StructType.Field(name="float32", type_=Type(code=TypeCode.FLOAT32)), ] ) - expected = type_pb2.Type(code=type_pb2.STRUCT, struct_type=struct_type) + expected = Type(code=TypeCode.STRUCT, struct_type=struct_type) found = param_types.Struct( [ param_types.StructField("name", param_types.STRING), param_types.StructField("count", param_types.INT64), + param_types.StructField("float32", param_types.FLOAT32), ] ) self.assertEqual(found, expected) + + +class Test_JsonbParamType(unittest.TestCase): + def test_it(self): + from google.cloud.spanner_v1 import ( + Type, + TypeAnnotationCode, + TypeCode, + param_types, + ) + + expected = Type( + code=TypeCode.JSON, + type_annotation=TypeAnnotationCode(TypeAnnotationCode.PG_JSONB), + ) + + found = param_types.PG_JSONB + + self.assertEqual(found, expected) + + +class Test_OidParamType(unittest.TestCase): + def test_it(self): + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode + from google.cloud.spanner_v1 import TypeAnnotationCode + from google.cloud.spanner_v1 import param_types + + expected = Type( + code=TypeCode.INT64, + type_annotation=TypeAnnotationCode.PG_OID, + ) + + found = param_types.PG_OID + + self.assertEqual(found, expected) + + +class Test_ProtoMessageParamType(unittest.TestCase): + def test_it(self): + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode + from google.cloud.spanner_v1 import param_types + from .testdata import singer_pb2 + + singer_info = singer_pb2.SingerInfo() + expected = Type( + code=TypeCode.PROTO, proto_type_fqn=singer_info.DESCRIPTOR.full_name + ) + + found = param_types.ProtoMessage(singer_info) + + self.assertEqual(found, expected) + + +class Test_ProtoEnumParamType(unittest.TestCase): + def test_it(self): + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode + from google.cloud.spanner_v1 import param_types + from 
.testdata import singer_pb2 + + singer_genre = singer_pb2.Genre + expected = Type( + code=TypeCode.ENUM, proto_type_fqn=singer_genre.DESCRIPTOR.full_name + ) + + found = param_types.ProtoEnum(singer_genre) + + self.assertEqual(found, expected) diff --git a/tests/unit/test_pool.py b/tests/unit/test_pool.py index b6786a7f0e..409f4b043b 100644 --- a/tests/unit/test_pool.py +++ b/tests/unit/test_pool.py @@ -14,9 +14,26 @@ from functools import total_ordering +import time import unittest +from datetime import datetime, timedelta import mock +from google.cloud.spanner_v1._helpers import ( + _metadata_with_request_id, + AtomicCounter, +) +from google.cloud.spanner_v1.request_id_header import REQ_RAND_PROCESS_ID + +from google.cloud.spanner_v1._opentelemetry_tracing import trace_call +from tests._builders import build_database +from tests._helpers import ( + OpenTelemetryBase, + LIB_VERSION, + StatusCode, + enrich_with_otel_scope, + HAS_OPENTELEMETRY_INSTALLED, +) def _make_database(name="name"): @@ -44,12 +61,15 @@ def test_ctor_defaults(self): pool = self._make_one() self.assertIsNone(pool._database) self.assertEqual(pool.labels, {}) + self.assertIsNone(pool.database_role) def test_ctor_explicit(self): labels = {"foo": "bar"} - pool = self._make_one(labels=labels) + database_role = "dummy-role" + pool = self._make_one(labels=labels, database_role=database_role) self.assertIsNone(pool._database) self.assertEqual(pool.labels, labels) + self.assertEqual(pool.database_role, database_role) def test_bind_abstract(self): pool = self._make_one() @@ -75,26 +95,35 @@ def test_clear_abstract(self): def test__new_session_wo_labels(self): pool = self._make_one() - database = pool._database = _make_database("name") - session = _make_session() - database.session.return_value = session + database = pool._database = build_database() new_session = pool._new_session() - self.assertIs(new_session, session) - database.session.assert_called_once_with() + self.assertEqual(new_session._database, database) + self.assertEqual(new_session.labels, {}) + self.assertIsNone(new_session.database_role) def test__new_session_w_labels(self): labels = {"foo": "bar"} pool = self._make_one(labels=labels) - database = pool._database = _make_database("name") - session = _make_session() - database.session.return_value = session + database = pool._database = build_database() + + new_session = pool._new_session() + + self.assertEqual(new_session._database, database) + self.assertEqual(new_session.labels, labels) + self.assertIsNone(new_session.database_role) + + def test__new_session_w_database_role(self): + database_role = "dummy-role" + pool = self._make_one(database_role=database_role) + database = pool._database = build_database() new_session = pool._new_session() - self.assertIs(new_session, session) - database.session.assert_called_once_with(labels=labels) + self.assertEqual(new_session._database, database) + self.assertEqual(new_session.labels, {}) + self.assertEqual(new_session.database_role, database_role) def test_session_wo_kwargs(self): from google.cloud.spanner_v1.pool import SessionCheckout @@ -117,7 +146,18 @@ def test_session_w_kwargs(self): self.assertEqual(checkout._kwargs, {"foo": "bar"}) -class TestFixedSizePool(unittest.TestCase): +class TestFixedSizePool(OpenTelemetryBase): + BASE_ATTRIBUTES = { + "db.type": "spanner", + "db.url": "spanner.googleapis.com", + "db.instance": "name", + "net.host.name": "spanner.googleapis.com", + "gcp.client.service": "spanner", + "gcp.client.version": LIB_VERSION, + "gcp.client.repo": 
"googleapis/python-spanner", + } + enrich_with_otel_scope(BASE_ATTRIBUTES) + def _getTargetClass(self): from google.cloud.spanner_v1.pool import FixedSizePool @@ -133,26 +173,34 @@ def test_ctor_defaults(self): self.assertEqual(pool.default_timeout, 10) self.assertTrue(pool._sessions.empty()) self.assertEqual(pool.labels, {}) + self.assertIsNone(pool.database_role) def test_ctor_explicit(self): labels = {"foo": "bar"} - pool = self._make_one(size=4, default_timeout=30, labels=labels) + database_role = "dummy-role" + pool = self._make_one( + size=4, default_timeout=30, labels=labels, database_role=database_role + ) self.assertIsNone(pool._database) self.assertEqual(pool.size, 4) self.assertEqual(pool.default_timeout, 30) self.assertTrue(pool._sessions.empty()) self.assertEqual(pool.labels, labels) + self.assertEqual(pool.database_role, database_role) def test_bind(self): + database_role = "dummy-role" pool = self._make_one() database = _Database("name") SESSIONS = [_Session(database)] * 10 + database._database_role = database_role database._sessions.extend(SESSIONS) pool.bind(database) self.assertIs(pool._database, database) self.assertEqual(pool.size, 10) + self.assertEqual(pool.database_role, database_role) self.assertEqual(pool.default_timeout, 10) self.assertTrue(pool._sessions.full()) @@ -161,11 +209,28 @@ def test_bind(self): for session in SESSIONS: session.create.assert_not_called() - def test_get_non_expired(self): + def test_get_active(self): pool = self._make_one(size=4) database = _Database("name") SESSIONS = sorted([_Session(database) for i in range(0, 4)]) - database._sessions.extend(SESSIONS) + pool._new_session = mock.Mock(side_effect=SESSIONS) + pool.bind(database) + + # check if sessions returned in LIFO order + for i in (3, 2, 1, 0): + session = pool.get() + self.assertIs(session, SESSIONS[i]) + self.assertFalse(session._exists_checked) + self.assertFalse(pool._sessions.full()) + + def test_get_non_expired(self): + pool = self._make_one(size=4) + database = _Database("name") + last_use_time = datetime.utcnow() - timedelta(minutes=56) + SESSIONS = sorted( + [_Session(database, last_use_time=last_use_time) for i in range(0, 4)] + ) + pool._new_session = mock.Mock(side_effect=SESSIONS) pool.bind(database) # check if sessions returned in LIFO order @@ -175,12 +240,176 @@ def test_get_non_expired(self): self.assertTrue(session._exists_checked) self.assertFalse(pool._sessions.full()) + def test_spans_bind_get(self): + if not HAS_OPENTELEMETRY_INSTALLED: + return + + # This tests retrieving 1 out of 4 sessions from the session pool. + pool = self._make_one(size=4) + database = _Database("name") + SESSIONS = sorted([_Session(database) for i in range(0, 4)]) + database._sessions.extend(SESSIONS) + pool.bind(database) + + with trace_call("pool.Get", SESSIONS[0]): + pool.get() + + span_list = self.get_finished_spans() + got_span_names = [span.name for span in span_list] + want_span_names = ["CloudSpanner.FixedPool.BatchCreateSessions", "pool.Get"] + assert got_span_names == want_span_names + + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id - 1}.{database._channel_id}.{_Database.NTH_REQUEST.value}.1" + attrs = dict( + TestFixedSizePool.BASE_ATTRIBUTES.copy(), x_goog_spanner_request_id=req_id + ) + + # Check for the overall spans. 
+ self.assertSpanAttributes( + "CloudSpanner.FixedPool.BatchCreateSessions", + status=StatusCode.OK, + attributes=attrs, + span=span_list[0], + ) + + self.assertSpanAttributes( + "pool.Get", + status=StatusCode.OK, + attributes=TestFixedSizePool.BASE_ATTRIBUTES, + span=span_list[-1], + ) + wantEventNames = [ + "Acquiring session", + "Waiting for a session to become available", + "Acquired session", + ] + self.assertSpanEvents("pool.Get", wantEventNames, span_list[-1]) + + def test_spans_bind_get_empty_pool(self): + if not HAS_OPENTELEMETRY_INSTALLED: + return + + # Tests trying to invoke pool.get() from an empty pool. + pool = self._make_one(size=0, default_timeout=0.1) + database = _Database("name") + session1 = _Session(database) + with trace_call("pool.Get", session1): + try: + pool.bind(database) + database._sessions = database._sessions[:0] + pool.get() + except Exception: + pass + + wantEventNames = [ + "Invalid session pool size(0) <= 0", + "Acquiring session", + "Waiting for a session to become available", + "No sessions available in the pool", + ] + self.assertSpanEvents("pool.Get", wantEventNames) + + # Check for the overall spans too. + self.assertSpanNames(["pool.Get"]) + self.assertSpanAttributes( + "pool.Get", + attributes=TestFixedSizePool.BASE_ATTRIBUTES, + ) + + span_list = self.get_finished_spans() + got_all_events = [] + for span in span_list: + for event in span.events: + got_all_events.append((event.name, event.attributes)) + want_all_events = [ + ("Invalid session pool size(0) <= 0", {"kind": "FixedSizePool"}), + ("Acquiring session", {"kind": "FixedSizePool"}), + ("Waiting for a session to become available", {"kind": "FixedSizePool"}), + ("No sessions available in the pool", {"kind": "FixedSizePool"}), + ] + assert got_all_events == want_all_events + + def test_spans_pool_bind(self): + if not HAS_OPENTELEMETRY_INSTALLED: + return + + # Tests the exception generated from invoking pool.bind when + # you have an empty pool. + pool = self._make_one(size=1) + database = _Database("name") + pool._new_session = mock.Mock(side_effect=Exception("test")) + fauxSession = mock.Mock() + setattr(fauxSession, "_database", database) + try: + with trace_call("testBind", fauxSession): + pool.bind(database) + except Exception: + pass + + span_list = self.get_finished_spans() + got_span_names = [span.name for span in span_list] + want_span_names = ["testBind", "CloudSpanner.FixedPool.BatchCreateSessions"] + assert got_span_names == want_span_names + + wantEventNames = [ + "Requesting 1 sessions", + "exception", + ] + self.assertSpanEvents("testBind", wantEventNames, span_list[0]) + + self.assertSpanAttributes( + "testBind", + status=StatusCode.ERROR, + attributes=TestFixedSizePool.BASE_ATTRIBUTES, + span=span_list[0], + ) + + got_all_events = [] + + # Some event attributes are noisy/highly ephemeral + # and can't be directly compared against. 
+ imprecise_event_attributes = ["exception.stacktrace", "delay_seconds", "cause"] + for span in span_list: + for event in span.events: + evt_attributes = event.attributes.copy() + for attr_name in imprecise_event_attributes: + if attr_name in evt_attributes: + evt_attributes[attr_name] = "EPHEMERAL" + + got_all_events.append((event.name, evt_attributes)) + + want_all_events = [ + ("Requesting 1 sessions", {"kind": "FixedSizePool"}), + ( + "exception", + { + "exception.type": "Exception", + "exception.message": "test", + "exception.stacktrace": "EPHEMERAL", + "exception.escaped": "False", + }, + ), + ("Creating 1 sessions", {"kind": "FixedSizePool"}), + ("Created sessions", {"count": 1}), + ( + "exception", + { + "exception.type": "Exception", + "exception.message": "test", + "exception.stacktrace": "EPHEMERAL", + "exception.escaped": "False", + }, + ), + ] + assert got_all_events == want_all_events + def test_get_expired(self): pool = self._make_one(size=4) database = _Database("name") - SESSIONS = [_Session(database)] * 5 + last_use_time = datetime.utcnow() - timedelta(minutes=65) + SESSIONS = [_Session(database, last_use_time=last_use_time)] * 5 SESSIONS[0]._exists = False - database._sessions.extend(SESSIONS) + pool._new_session = mock.Mock(side_effect=SESSIONS) pool.bind(database) session = pool.get() @@ -191,37 +420,38 @@ def test_get_expired(self): self.assertFalse(pool._sessions.full()) def test_get_empty_default_timeout(self): - from six.moves.queue import Empty + import queue pool = self._make_one(size=1) - queue = pool._sessions = _Queue() + session_queue = pool._sessions = _Queue() - with self.assertRaises(Empty): + with self.assertRaises(queue.Empty): pool.get() - self.assertEqual(queue._got, {"block": True, "timeout": 10}) + self.assertEqual(session_queue._got, {"block": True, "timeout": 10}) def test_get_empty_explicit_timeout(self): - from six.moves.queue import Empty + import queue pool = self._make_one(size=1, default_timeout=0.1) - queue = pool._sessions = _Queue() + session_queue = pool._sessions = _Queue() - with self.assertRaises(Empty): + with self.assertRaises(queue.Empty): pool.get(timeout=1) - self.assertEqual(queue._got, {"block": True, "timeout": 1}) + self.assertEqual(session_queue._got, {"block": True, "timeout": 1}) def test_put_full(self): - from six.moves.queue import Full + import queue pool = self._make_one(size=4) database = _Database("name") SESSIONS = [_Session(database)] * 4 database._sessions.extend(SESSIONS) pool.bind(database) + self.reset() - with self.assertRaises(Full): + with self.assertRaises(queue.Full): pool.put(_Session(database)) self.assertTrue(pool._sessions.full()) @@ -242,7 +472,7 @@ def test_clear(self): pool = self._make_one() database = _Database("name") SESSIONS = [_Session(database)] * 10 - database._sessions.extend(SESSIONS) + pool._new_session = mock.Mock(side_effect=SESSIONS) pool.bind(database) self.assertTrue(pool._sessions.full()) @@ -257,7 +487,18 @@ def test_clear(self): self.assertTrue(session._deleted) -class TestBurstyPool(unittest.TestCase): +class TestBurstyPool(OpenTelemetryBase): + BASE_ATTRIBUTES = { + "db.type": "spanner", + "db.url": "spanner.googleapis.com", + "db.instance": "name", + "net.host.name": "spanner.googleapis.com", + "gcp.client.service": "spanner", + "gcp.client.version": LIB_VERSION, + "gcp.client.repo": "googleapis/python-spanner", + } + enrich_with_otel_scope(BASE_ATTRIBUTES) + def _getTargetClass(self): from google.cloud.spanner_v1.pool import BurstyPool @@ -272,19 +513,30 @@ def 
test_ctor_defaults(self): self.assertEqual(pool.target_size, 10) self.assertTrue(pool._sessions.empty()) self.assertEqual(pool.labels, {}) + self.assertIsNone(pool.database_role) def test_ctor_explicit(self): labels = {"foo": "bar"} - pool = self._make_one(target_size=4, labels=labels) + database_role = "dummy-role" + pool = self._make_one(target_size=4, labels=labels, database_role=database_role) self.assertIsNone(pool._database) self.assertEqual(pool.target_size, 4) self.assertTrue(pool._sessions.empty()) self.assertEqual(pool.labels, labels) + self.assertEqual(pool.database_role, database_role) + + def test_ctor_explicit_w_database_role_in_db(self): + database_role = "dummy-role" + pool = self._make_one() + database = pool._database = _Database("name") + database._database_role = database_role + pool.bind(database) + self.assertEqual(pool.database_role, database_role) def test_get_empty(self): pool = self._make_one() database = _Database("name") - database._sessions.append(_Session(database)) + pool._new_session = mock.Mock(return_value=_Session(database)) pool.bind(database) session = pool.get() @@ -294,6 +546,44 @@ def test_get_empty(self): session.create.assert_called() self.assertTrue(pool._sessions.empty()) + def test_spans_get_empty_pool(self): + if not HAS_OPENTELEMETRY_INSTALLED: + return + + # This scenario tests a pool that hasn't been filled up + # and pool.get() acquires from the pool, waiting for a session + # to become available. + pool = self._make_one() + database = _Database("name") + session1 = _Session(database) + pool._new_session = mock.Mock(return_value=session1) + pool.bind(database) + + with trace_call("pool.Get", session1): + session = pool.get() + self.assertIsInstance(session, _Session) + self.assertIs(session._database, database) + session.create.assert_called() + self.assertTrue(pool._sessions.empty()) + + span_list = self.get_finished_spans() + got_span_names = [span.name for span in span_list] + want_span_names = ["pool.Get"] + assert got_span_names == want_span_names + + create_span = span_list[-1] + self.assertSpanAttributes( + "pool.Get", + attributes=TestBurstyPool.BASE_ATTRIBUTES, + span=create_span, + ) + wantEventNames = [ + "Acquiring session", + "Waiting for a session to become available", + "No sessions available in pool. Creating session", + ] + self.assertSpanEvents("pool.Get", wantEventNames, span=create_span) + def test_get_non_empty_session_exists(self): pool = self._make_one() database = _Database("name") @@ -308,12 +598,36 @@ def test_get_non_empty_session_exists(self): self.assertTrue(session._exists_checked) self.assertTrue(pool._sessions.empty()) + def test_spans_get_non_empty_session_exists(self): + # Tests the spans produced when you invoke pool.bind + # and then insert a session into the pool. 
+ pool = self._make_one() + database = _Database("name") + previous = _Session(database) + pool.bind(database) + with trace_call("pool.Get", previous): + pool.put(previous) + session = pool.get() + self.assertIs(session, previous) + session.create.assert_not_called() + self.assertTrue(session._exists_checked) + self.assertTrue(pool._sessions.empty()) + + self.assertSpanAttributes( + "pool.Get", + attributes=TestBurstyPool.BASE_ATTRIBUTES, + ) + self.assertSpanEvents( + "pool.Get", + ["Acquiring session", "Waiting for a session to become available"], + ) + def test_get_non_empty_session_expired(self): pool = self._make_one() database = _Database("name") previous = _Session(database, exists=False) newborn = _Session(database) - database._sessions.append(newborn) + pool._new_session = mock.Mock(return_value=newborn) pool.bind(database) pool.put(previous) @@ -335,6 +649,22 @@ def test_put_empty(self): self.assertFalse(pool._sessions.empty()) + def test_spans_put_empty(self): + # Tests the spans produced when you put sessions into an empty pool. + pool = self._make_one() + database = _Database("name") + pool.bind(database) + session = _Session(database) + + with trace_call("pool.put", session): + pool.put(session) + self.assertFalse(pool._sessions.empty()) + + self.assertSpanAttributes( + "pool.put", + attributes=TestBurstyPool.BASE_ATTRIBUTES, + ) + def test_put_full(self): pool = self._make_one(target_size=1) database = _Database("name") @@ -349,6 +679,28 @@ def test_put_full(self): self.assertTrue(younger._deleted) self.assertIs(pool.get(), older) + def test_spans_put_full(self): + # This scenario tests the spans produced from putting an older + # session into a pool that is already full. + pool = self._make_one(target_size=1) + database = _Database("name") + pool.bind(database) + older = _Session(database) + with trace_call("pool.put", older): + pool.put(older) + self.assertFalse(pool._sessions.empty()) + + younger = _Session(database) + pool.put(younger) # discarded silently + + self.assertTrue(younger._deleted) + self.assertIs(pool.get(), older) + + self.assertSpanAttributes( + "pool.put", + attributes=TestBurstyPool.BASE_ATTRIBUTES, + ) + def test_put_full_expired(self): pool = self._make_one(target_size=1) database = _Database("name") @@ -373,9 +725,21 @@ def test_clear(self): pool.clear() self.assertTrue(previous._deleted) + self.assertNoSpans() -class TestPingingPool(unittest.TestCase): +class TestPingingPool(OpenTelemetryBase): + BASE_ATTRIBUTES = { + "db.type": "spanner", + "db.url": "spanner.googleapis.com", + "db.instance": "name", + "net.host.name": "spanner.googleapis.com", + "gcp.client.service": "spanner", + "gcp.client.version": LIB_VERSION, + "gcp.client.repo": "googleapis/python-spanner", + } + enrich_with_otel_scope(BASE_ATTRIBUTES) + def _getTargetClass(self): from google.cloud.spanner_v1.pool import PingingPool @@ -392,11 +756,17 @@ def test_ctor_defaults(self): self.assertEqual(pool._delta.seconds, 3000) self.assertTrue(pool._sessions.empty()) self.assertEqual(pool.labels, {}) + self.assertIsNone(pool.database_role) def test_ctor_explicit(self): labels = {"foo": "bar"} + database_role = "dummy-role" pool = self._make_one( - size=4, default_timeout=30, ping_interval=1800, labels=labels + size=4, + default_timeout=30, + ping_interval=1800, + labels=labels, + database_role=database_role, ) self.assertIsNone(pool._database) self.assertEqual(pool.size, 4) @@ -404,6 +774,17 @@ def test_ctor_explicit(self): self.assertEqual(pool._delta.seconds, 1800) 
self.assertTrue(pool._sessions.empty()) self.assertEqual(pool.labels, labels) + self.assertEqual(pool.database_role, database_role) + + def test_ctor_explicit_w_database_role_in_db(self): + database_role = "dummy-role" + pool = self._make_one() + database = pool._database = _Database("name") + SESSIONS = [_Session(database)] * 10 + database._sessions.extend(SESSIONS) + database._database_role = database_role + pool.bind(database) + self.assertEqual(pool.database_role, database_role) def test_bind(self): pool = self._make_one() @@ -427,14 +808,16 @@ def test_get_hit_no_ping(self): pool = self._make_one(size=4) database = _Database("name") SESSIONS = [_Session(database)] * 4 - database._sessions.extend(SESSIONS) + pool._new_session = mock.Mock(side_effect=SESSIONS) pool.bind(database) + self.reset() session = pool.get() self.assertIs(session, SESSIONS[0]) self.assertFalse(session._exists_checked) self.assertFalse(pool._sessions.full()) + self.assertNoSpans() def test_get_hit_w_ping(self): import datetime @@ -444,18 +827,21 @@ def test_get_hit_w_ping(self): pool = self._make_one(size=4) database = _Database("name") SESSIONS = [_Session(database)] * 4 - database._sessions.extend(SESSIONS) + pool._new_session = mock.Mock(side_effect=SESSIONS) sessions_created = datetime.datetime.utcnow() - datetime.timedelta(seconds=4000) with _Monkey(MUT, _NOW=lambda: sessions_created): pool.bind(database) + self.reset() + session = pool.get() self.assertIs(session, SESSIONS[0]) self.assertTrue(session._exists_checked) self.assertFalse(pool._sessions.full()) + self.assertNoSpans() def test_get_hit_w_ping_expired(self): import datetime @@ -466,12 +852,13 @@ def test_get_hit_w_ping_expired(self): database = _Database("name") SESSIONS = [_Session(database)] * 5 SESSIONS[0]._exists = False - database._sessions.extend(SESSIONS) + pool._new_session = mock.Mock(side_effect=SESSIONS) sessions_created = datetime.datetime.utcnow() - datetime.timedelta(seconds=4000) with _Monkey(MUT, _NOW=lambda: sessions_created): pool.bind(database) + self.reset() session = pool.get() @@ -479,31 +866,51 @@ def test_get_hit_w_ping_expired(self): session.create.assert_called() self.assertTrue(SESSIONS[0]._exists_checked) self.assertFalse(pool._sessions.full()) + self.assertNoSpans() def test_get_empty_default_timeout(self): - from six.moves.queue import Empty + import queue pool = self._make_one(size=1) - queue = pool._sessions = _Queue() + session_queue = pool._sessions = _Queue() - with self.assertRaises(Empty): + with self.assertRaises(queue.Empty): pool.get() - self.assertEqual(queue._got, {"block": True, "timeout": 10}) + self.assertEqual(session_queue._got, {"block": True, "timeout": 10}) + self.assertNoSpans() def test_get_empty_explicit_timeout(self): - from six.moves.queue import Empty + import queue pool = self._make_one(size=1, default_timeout=0.1) - queue = pool._sessions = _Queue() + session_queue = pool._sessions = _Queue() - with self.assertRaises(Empty): + with self.assertRaises(queue.Empty): pool.get(timeout=1) - self.assertEqual(queue._got, {"block": True, "timeout": 1}) + self.assertEqual(session_queue._got, {"block": True, "timeout": 1}) + self.assertNoSpans() def test_put_full(self): - from six.moves.queue import Full + import queue + + pool = self._make_one(size=4) + database = _Database("name") + SESSIONS = [_Session(database)] * 4 + database._sessions.extend(SESSIONS) + pool.bind(database) + + with self.assertRaises(queue.Full): + pool.put(_Session(database)) + + self.assertTrue(pool._sessions.full()) + + def 
test_spans_put_full(self): + if not HAS_OPENTELEMETRY_INSTALLED: + return + + import queue pool = self._make_one(size=4) database = _Database("name") @@ -511,18 +918,41 @@ def test_put_full(self): database._sessions.extend(SESSIONS) pool.bind(database) - with self.assertRaises(Full): + with self.assertRaises(queue.Full): pool.put(_Session(database)) self.assertTrue(pool._sessions.full()) + span_list = self.get_finished_spans() + got_span_names = [span.name for span in span_list] + want_span_names = ["CloudSpanner.PingingPool.BatchCreateSessions"] + assert got_span_names == want_span_names + + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id - 1}.{database._channel_id}.{_Database.NTH_REQUEST.value}.1" + attrs = dict( + TestPingingPool.BASE_ATTRIBUTES.copy(), x_goog_spanner_request_id=req_id + ) + self.assertSpanAttributes( + "CloudSpanner.PingingPool.BatchCreateSessions", + attributes=attrs, + span=span_list[-1], + ) + wantEventNames = [ + "Created 2 sessions", + "Created 2 sessions", + "Requested for 4 sessions, returned 4", + ] + self.assertSpanEvents( + "CloudSpanner.PingingPool.BatchCreateSessions", wantEventNames + ) + def test_put_non_full(self): import datetime from google.cloud._testing import _Monkey from google.cloud.spanner_v1 import pool as MUT pool = self._make_one(size=1) - queue = pool._sessions = _Queue() + session_queue = pool._sessions = _Queue() now = datetime.datetime.utcnow() database = _Database("name") @@ -531,17 +961,19 @@ def test_put_non_full(self): with _Monkey(MUT, _NOW=lambda: now): pool.put(session) - self.assertEqual(len(queue._items), 1) - ping_after, queued = queue._items[0] + self.assertEqual(len(session_queue._items), 1) + ping_after, queued = session_queue._items[0] self.assertEqual(ping_after, now + datetime.timedelta(seconds=3000)) self.assertIs(queued, session) + self.assertNoSpans() def test_clear(self): pool = self._make_one() database = _Database("name") SESSIONS = [_Session(database)] * 10 - database._sessions.extend(SESSIONS) + pool._new_session = mock.Mock(side_effect=SESSIONS) pool.bind(database) + self.reset() self.assertTrue(pool._sessions.full()) api = database.spanner_api @@ -553,10 +985,12 @@ def test_clear(self): for session in SESSIONS: self.assertTrue(session._deleted) + self.assertNoSpans() def test_ping_empty(self): pool = self._make_one(size=1) pool.ping() # Does not raise 'Empty' + self.assertNoSpans() def test_ping_oldest_fresh(self): pool = self._make_one(size=1) @@ -564,10 +998,12 @@ def test_ping_oldest_fresh(self): SESSIONS = [_Session(database)] * 1 database._sessions.extend(SESSIONS) pool.bind(database) + self.reset() pool.ping() - self.assertFalse(SESSIONS[0]._exists_checked) + self.assertFalse(SESSIONS[0]._pinged) + self.assertNoSpans() def test_ping_oldest_stale_but_exists(self): import datetime @@ -577,14 +1013,14 @@ def test_ping_oldest_stale_but_exists(self): pool = self._make_one(size=1) database = _Database("name") SESSIONS = [_Session(database)] * 1 - database._sessions.extend(SESSIONS) + pool._new_session = mock.Mock(side_effect=SESSIONS) pool.bind(database) later = datetime.datetime.utcnow() + datetime.timedelta(seconds=4000) with _Monkey(MUT, _NOW=lambda: later): pool.ping() - self.assertTrue(SESSIONS[0]._exists_checked) + self.assertTrue(SESSIONS[0]._pinged) def test_ping_oldest_stale_and_not_exists(self): import datetime @@ -595,185 +1031,55 @@ def test_ping_oldest_stale_and_not_exists(self): database = _Database("name") SESSIONS = [_Session(database)] * 2 SESSIONS[0]._exists = False - 
database._sessions.extend(SESSIONS) + pool._new_session = mock.Mock(side_effect=SESSIONS) pool.bind(database) + self.reset() later = datetime.datetime.utcnow() + datetime.timedelta(seconds=4000) with _Monkey(MUT, _NOW=lambda: later): pool.ping() - self.assertTrue(SESSIONS[0]._exists_checked) + self.assertTrue(SESSIONS[0]._pinged) SESSIONS[1].create.assert_called() + self.assertNoSpans() + def test_spans_get_and_leave_empty_pool(self): + if not HAS_OPENTELEMETRY_INSTALLED: + return -class TestTransactionPingingPool(unittest.TestCase): - def _getTargetClass(self): - from google.cloud.spanner_v1.pool import TransactionPingingPool - - return TransactionPingingPool - - def _make_one(self, *args, **kwargs): - return self._getTargetClass()(*args, **kwargs) - - def test_ctor_defaults(self): - pool = self._make_one() - self.assertIsNone(pool._database) - self.assertEqual(pool.size, 10) - self.assertEqual(pool.default_timeout, 10) - self.assertEqual(pool._delta.seconds, 3000) - self.assertTrue(pool._sessions.empty()) - self.assertTrue(pool._pending_sessions.empty()) - self.assertEqual(pool.labels, {}) - - def test_ctor_explicit(self): - labels = {"foo": "bar"} - pool = self._make_one( - size=4, default_timeout=30, ping_interval=1800, labels=labels - ) - self.assertIsNone(pool._database) - self.assertEqual(pool.size, 4) - self.assertEqual(pool.default_timeout, 30) - self.assertEqual(pool._delta.seconds, 1800) - self.assertTrue(pool._sessions.empty()) - self.assertTrue(pool._pending_sessions.empty()) - self.assertEqual(pool.labels, labels) - - def test_bind(self): + # This scenario tests the spans generated from pulling a session + # out of the pool and leaving it empty. pool = self._make_one() database = _Database("name") - SESSIONS = [_Session(database) for _ in range(10)] - database._sessions.extend(SESSIONS) - pool.bind(database) - - self.assertIs(pool._database, database) - self.assertEqual(pool.size, 10) - self.assertEqual(pool.default_timeout, 10) - self.assertEqual(pool._delta.seconds, 3000) - self.assertTrue(pool._sessions.full()) - - api = database.spanner_api - self.assertEqual(api.batch_create_sessions.call_count, 5) - for session in SESSIONS: - session.create.assert_not_called() - txn = session._transaction - txn.begin.assert_called_once_with() - - self.assertTrue(pool._pending_sessions.empty()) - - def test_bind_w_timestamp_race(self): - import datetime - from google.cloud._testing import _Monkey - from google.cloud.spanner_v1 import pool as MUT - - NOW = datetime.datetime.utcnow() - pool = self._make_one() - database = _Database("name") - SESSIONS = [_Session(database) for _ in range(10)] - database._sessions.extend(SESSIONS) - - with _Monkey(MUT, _NOW=lambda: NOW): + session1 = _Session(database) + pool._new_session = mock.Mock(side_effect=[session1, Exception]) + try: pool.bind(database) + except Exception: + pass - self.assertIs(pool._database, database) - self.assertEqual(pool.size, 10) - self.assertEqual(pool.default_timeout, 10) - self.assertEqual(pool._delta.seconds, 3000) - self.assertTrue(pool._sessions.full()) - - api = database.spanner_api - self.assertEqual(api.batch_create_sessions.call_count, 5) - for session in SESSIONS: - session.create.assert_not_called() - txn = session._transaction - txn.begin.assert_called_once_with() - - self.assertTrue(pool._pending_sessions.empty()) - - def test_put_full(self): - from six.moves.queue import Full - - pool = self._make_one(size=4) - database = _Database("name") - SESSIONS = [_Session(database) for _ in range(4)] - 
database._sessions.extend(SESSIONS) - pool.bind(database) - - with self.assertRaises(Full): - pool.put(_Session(database)) - - self.assertTrue(pool._sessions.full()) - - def test_put_non_full_w_active_txn(self): - pool = self._make_one(size=1) - queue = pool._sessions = _Queue() - pending = pool._pending_sessions = _Queue() - database = _Database("name") - session = _Session(database) - txn = session.transaction() - - pool.put(session) - - self.assertEqual(len(queue._items), 1) - _, queued = queue._items[0] - self.assertIs(queued, session) - - self.assertEqual(len(pending._items), 0) - txn.begin.assert_not_called() - - def test_put_non_full_w_committed_txn(self): - pool = self._make_one(size=1) - queue = pool._sessions = _Queue() - pending = pool._pending_sessions = _Queue() - database = _Database("name") - session = _Session(database) - committed = session.transaction() - committed.committed = True - - pool.put(session) - - self.assertEqual(len(queue._items), 0) - - self.assertEqual(len(pending._items), 1) - self.assertIs(pending._items[0], session) - self.assertIsNot(session._transaction, committed) - session._transaction.begin.assert_not_called() - - def test_put_non_full(self): - pool = self._make_one(size=1) - queue = pool._sessions = _Queue() - pending = pool._pending_sessions = _Queue() - database = _Database("name") - session = _Session(database) - - pool.put(session) - - self.assertEqual(len(queue._items), 0) - self.assertEqual(len(pending._items), 1) - self.assertIs(pending._items[0], session) - - self.assertFalse(pending.empty()) - - def test_begin_pending_transactions_empty(self): - pool = self._make_one(size=1) - pool.begin_pending_transactions() # no raise - - def test_begin_pending_transactions_non_empty(self): - pool = self._make_one(size=1) - pool._sessions = _Queue() - - database = _Database("name") - TRANSACTIONS = [_make_transaction(object())] - PENDING_SESSIONS = [_Session(database, transaction=txn) for txn in TRANSACTIONS] - - pending = pool._pending_sessions = _Queue(*PENDING_SESSIONS) - self.assertFalse(pending.empty()) - - pool.begin_pending_transactions() # no raise - - for txn in TRANSACTIONS: - txn.begin.assert_called_once_with() - - self.assertTrue(pending.empty()) + with trace_call("pool.Get", session1): + session = pool.get() + self.assertIsInstance(session, _Session) + self.assertIs(session._database, database) + # session.create.assert_called() + self.assertTrue(pool._sessions.empty()) + + span_list = self.get_finished_spans() + got_span_names = [span.name for span in span_list] + want_span_names = ["CloudSpanner.PingingPool.BatchCreateSessions", "pool.Get"] + assert got_span_names == want_span_names + + self.assertSpanAttributes( + "pool.Get", + attributes=TestPingingPool.BASE_ATTRIBUTES, + span=span_list[-1], + ) + wantEventNames = [ + "Waiting for a session to become available", + "Acquired session", + ] + self.assertSpanEvents("pool.Get", wantEventNames, span_list[-1]) class TestSessionCheckout(unittest.TestCase): @@ -794,10 +1100,12 @@ def test_ctor_wo_kwargs(self): def test_ctor_w_kwargs(self): pool = _Pool() - checkout = self._make_one(pool, foo="bar") + checkout = self._make_one(pool, foo="bar", database_role="dummy-role") self.assertIs(checkout._pool, pool) self.assertIsNone(checkout._session) - self.assertEqual(checkout._kwargs, {"foo": "bar"}) + self.assertEqual( + checkout._kwargs, {"foo": "bar", "database_role": "dummy-role"} + ) def test_context_manager_wo_kwargs(self): session = object() @@ -843,24 +1151,40 @@ def _make_transaction(*args, 
**kw): @total_ordering class _Session(object): - _transaction = None - def __init__(self, database, exists=True, transaction=None): + def __init__( + self, database, exists=True, transaction=None, last_use_time=datetime.utcnow() + ): self._database = database self._exists = exists self._exists_checked = False + self._pinged = False self.create = mock.Mock() self._deleted = False self._transaction = transaction + self._last_use_time = last_use_time + # Generate a faux id. + self._session_id = f"{time.time()}" def __lt__(self, other): return id(self) < id(other) + @property + def last_use_time(self): + return self._last_use_time + def exists(self): self._exists_checked = True return self._exists + def ping(self): + from google.cloud.exceptions import NotFound + + self._pinged = True + if not self._exists: + raise NotFound("expired session") + def delete(self): from google.cloud.exceptions import NotFound @@ -872,37 +1196,95 @@ def transaction(self): txn = self._transaction = _make_transaction(self) return txn + @property + def session_id(self): + return self._session_id + class _Database(object): + NTH_REQUEST = AtomicCounter() + NTH_CLIENT_ID = AtomicCounter() + def __init__(self, name): self.name = name self._sessions = [] - - def mock_batch_create_sessions(db, session_count=10, timeout=10, metadata=[]): - from google.cloud.spanner_v1.proto import spanner_pb2 - - response = spanner_pb2.BatchCreateSessionsResponse() - if session_count < 2: - response.session.add() + self._database_role = None + self.database_id = name + self._route_to_leader_enabled = True + + def mock_batch_create_sessions( + request=None, + timeout=10, + metadata=[], + labels={}, + ): + from google.cloud.spanner_v1 import BatchCreateSessionsResponse + from google.cloud.spanner_v1 import Session + + database_role = request.session_template.creator_role if request else None + if request.session_count < 2: + response = BatchCreateSessionsResponse( + session=[Session(creator_role=database_role, labels=labels)] + ) else: - response.session.add() - response.session.add() + response = BatchCreateSessionsResponse( + session=[ + Session(creator_role=database_role, labels=labels), + Session(creator_role=database_role, labels=labels), + ] + ) return response - from google.cloud.spanner_v1.gapic.spanner_client import SpannerClient + from google.cloud.spanner_v1 import SpannerClient self.spanner_api = mock.create_autospec(SpannerClient, instance=True) self.spanner_api.batch_create_sessions.side_effect = mock_batch_create_sessions - def session(self): + @property + def database_role(self): + """Database role used in sessions to connect to this database. + + :rtype: str + :returns: an str with the name of the database role. 
+ """ + return self._database_role + + def session(self, **kwargs): # always return first session in the list # to avoid reversing the order of putting # sessions into pool (important for order tests) return self._sessions.pop(0) + @property + def observability_options(self): + return dict(db_name=self.name) + + @property + def _next_nth_request(self): + return self.NTH_REQUEST.increment() + + @property + def _nth_client_id(self): + return self.NTH_CLIENT_ID.increment() + + def metadata_with_request_id( + self, nth_request, nth_attempt, prior_metadata=[], span=None + ): + return _metadata_with_request_id( + self._nth_client_id, + self._channel_id, + nth_request, + nth_attempt, + prior_metadata, + span, + ) -class _Queue(object): + @property + def _channel_id(self): + return 1 + +class _Queue(object): _size = 1 def __init__(self, *items): @@ -915,13 +1297,13 @@ def full(self): return len(self._items) >= self._size def get(self, **kwargs): - from six.moves.queue import Empty + import queue self._got = kwargs try: return self._items.pop() except IndexError: - raise Empty() + raise queue.Empty() def put(self, item, **kwargs): self._put = kwargs @@ -933,5 +1315,4 @@ def put_nowait(self, item, **kwargs): class _Pool(_Queue): - _database = None diff --git a/tests/unit/test_session.py b/tests/unit/test_session.py index e2bf18c723..3b08cc5c65 100644 --- a/tests/unit/test_session.py +++ b/tests/unit/test_session.py @@ -13,21 +13,107 @@ # limitations under the License. -import unittest import google.api_core.gapic_v1.method +from google.cloud.spanner_v1._opentelemetry_tracing import trace_call import mock - - -def _make_rpc_error(error_cls, trailing_metadata=None): - import grpc - +import datetime +from google.cloud.spanner_v1 import ( + Transaction as TransactionPB, + TransactionOptions, + CommitResponse, + CommitRequest, + RequestOptions, + SpannerClient, + CreateSessionRequest, + Session as SessionRequestProto, + ExecuteSqlRequest, + TypeCode, + BeginTransactionRequest, +) +from google.cloud._helpers import UTC, _datetime_to_pb_timestamp +from google.cloud.spanner_v1._helpers import _delay_until_retry +from google.cloud.spanner_v1.transaction import Transaction +from tests._builders import ( + build_spanner_api, + build_session, + build_transaction_pb, + build_commit_response_pb, +) +from tests._helpers import ( + OpenTelemetryBase, + LIB_VERSION, + StatusCode, + enrich_with_otel_scope, +) +import grpc +from google.cloud.spanner_v1.session import Session +from google.cloud.spanner_v1.snapshot import Snapshot +from google.cloud.spanner_v1.database import Database +from google.cloud.spanner_v1.keyset import KeySet +from google.protobuf.duration_pb2 import Duration +from google.rpc.error_details_pb2 import RetryInfo +from google.api_core.exceptions import Unknown, Aborted, NotFound, Cancelled +from google.protobuf.struct_pb2 import Struct, Value +from google.cloud.spanner_v1.batch import Batch +from google.cloud.spanner_v1 import DefaultTransactionOptions +from google.cloud.spanner_v1.request_id_header import REQ_RAND_PROCESS_ID +from google.cloud.spanner_v1._helpers import ( + AtomicCounter, + _metadata_with_request_id, +) + +TABLE_NAME = "citizens" +COLUMNS = ["email", "first_name", "last_name", "age"] +VALUES = [ + ["phred@exammple.com", "Phred", "Phlyntstone", 32], + ["bharney@example.com", "Bharney", "Rhubble", 31], +] +KEYS = ["bharney@example.com", "phred@example.com"] +KEYSET = KeySet(keys=KEYS) +TRANSACTION_ID = b"FACEDACE" + + +def _make_rpc_error(error_cls, trailing_metadata=[]): 
grpc_error = mock.create_autospec(grpc.Call, instance=True) grpc_error.trailing_metadata.return_value = trailing_metadata return error_cls("error", errors=(grpc_error,)) -class TestSession(unittest.TestCase): +NTH_CLIENT_ID = AtomicCounter() + + +def inject_into_mock_database(mockdb): + setattr(mockdb, "_nth_request", AtomicCounter()) + nth_client_id = NTH_CLIENT_ID.increment() + setattr(mockdb, "_nth_client_id", nth_client_id) + channel_id = 1 + setattr(mockdb, "_channel_id", channel_id) + + def metadata_with_request_id( + nth_request, nth_attempt, prior_metadata=[], span=None + ): + nth_req = nth_request.fget(mockdb) + return _metadata_with_request_id( + nth_client_id, + channel_id, + nth_req, + nth_attempt, + prior_metadata, + span, + ) + + setattr(mockdb, "metadata_with_request_id", metadata_with_request_id) + + @property + def _next_nth_request(self): + return self._nth_request.increment() + + setattr(mockdb, "_next_nth_request", _next_nth_request) + + return mockdb + +class TestSession(OpenTelemetryBase): PROJECT_ID = "project-id" INSTANCE_ID = "instance-id" INSTANCE_NAME = "projects/" + PROJECT_ID + "/instances/" + INSTANCE_ID @@ -35,32 +121,45 @@ class TestSession(unittest.TestCase): DATABASE_NAME = INSTANCE_NAME + "/databases/" + DATABASE_ID SESSION_ID = "session-id" SESSION_NAME = DATABASE_NAME + "/sessions/" + SESSION_ID + DATABASE_ROLE = "dummy-role" + BASE_ATTRIBUTES = { + "db.type": "spanner", + "db.url": "spanner.googleapis.com", + "db.instance": DATABASE_NAME, + "net.host.name": "spanner.googleapis.com", + "gcp.client.service": "spanner", + "gcp.client.version": LIB_VERSION, + "gcp.client.repo": "googleapis/python-spanner", + } + enrich_with_otel_scope(BASE_ATTRIBUTES) def _getTargetClass(self): - from google.cloud.spanner_v1.session import Session - return Session def _make_one(self, *args, **kwargs): return self._getTargetClass()(*args, **kwargs) @staticmethod - def _make_database(name=DATABASE_NAME): - from google.cloud.spanner_v1.database import Database - + def _make_database( + name=DATABASE_NAME, + database_role=None, + default_transaction_options=DefaultTransactionOptions(), + ): database = mock.create_autospec(Database, instance=True) database.name = name + database.log_commit_stats = False + database.database_role = database_role + database._route_to_leader_enabled = True + database.default_transaction_options = default_transaction_options + inject_into_mock_database(database) + return database @staticmethod - def _make_session_pb(name, labels=None): - from google.cloud.spanner_v1.proto.spanner_pb2 import Session - - return Session(name=name, labels=labels) + def _make_session_pb(name, labels=None, database_role=None): + return SessionRequestProto(name=name, labels=labels, creator_role=database_role) def _make_spanner_api(self): - from google.cloud.spanner_v1.gapic.spanner_client import SpannerClient - return mock.Mock(autospec=SpannerClient, instance=True) def test_constructor_wo_labels(self): @@ -70,6 +169,20 @@ def test_constructor_wo_labels(self): self.assertIs(session._database, database) self.assertEqual(session.labels, {}) + def test_constructor_w_database_role(self): + database = self._make_database(database_role=self.DATABASE_ROLE) + session = self._make_one(database, database_role=self.DATABASE_ROLE) + self.assertIs(session.session_id, None) + self.assertIs(session._database, database) + self.assertEqual(session.database_role, self.DATABASE_ROLE) + + def test_constructor_wo_database_role(self): + database = self._make_database() + session = 
self._make_one(database) + self.assertIs(session.session_id, None) + self.assertIs(session._database, database) + self.assertIs(session.database_role, None) + def test_constructor_w_labels(self): database = self._make_database() labels = {"foo": "bar"} @@ -107,6 +220,123 @@ def test_create_w_session_id(self): with self.assertRaises(ValueError): session.create() + self.assertNoSpans() + + def test_create_w_database_role(self): + session_pb = self._make_session_pb( + self.SESSION_NAME, database_role=self.DATABASE_ROLE + ) + gax_api = self._make_spanner_api() + gax_api.create_session.return_value = session_pb + database = self._make_database(database_role=self.DATABASE_ROLE) + database.spanner_api = gax_api + session = self._make_one(database, database_role=self.DATABASE_ROLE) + + session.create() + + self.assertEqual(session.session_id, self.SESSION_ID) + self.assertEqual(session.database_role, self.DATABASE_ROLE) + session_template = SessionRequestProto(creator_role=self.DATABASE_ROLE) + + request = CreateSessionRequest( + database=database.name, + session=session_template, + ) + + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + gax_api.create_session.assert_called_once_with( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + req_id, + ), + ], + ) + + self.assertSpanAttributes( + "CloudSpanner.CreateSession", + attributes=dict( + TestSession.BASE_ATTRIBUTES, x_goog_spanner_request_id=req_id + ), + ) + + def test_create_session_span_annotations(self): + session_pb = self._make_session_pb( + self.SESSION_NAME, database_role=self.DATABASE_ROLE + ) + + gax_api = self._make_spanner_api() + gax_api.create_session.return_value = session_pb + database = self._make_database(database_role=self.DATABASE_ROLE) + database.spanner_api = gax_api + session = self._make_one(database, database_role=self.DATABASE_ROLE) + + with trace_call("TestSessionSpan", session) as span: + session.create() + + self.assertEqual(session.session_id, self.SESSION_ID) + self.assertEqual(session.database_role, self.DATABASE_ROLE) + session_template = SessionRequestProto(creator_role=self.DATABASE_ROLE) + + request = CreateSessionRequest( + database=database.name, + session=session_template, + ) + + gax_api.create_session.assert_called_once_with( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + wantEventNames = ["Creating Session"] + self.assertSpanEvents("TestSessionSpan", wantEventNames, span) + + def test_create_wo_database_role(self): + session_pb = self._make_session_pb(self.SESSION_NAME) + gax_api = self._make_spanner_api() + gax_api.create_session.return_value = session_pb + database = self._make_database() + database.spanner_api = gax_api + session = self._make_one(database) + session.create() + + self.assertEqual(session.session_id, self.SESSION_ID) + self.assertIsNone(session.database_role) + + request = CreateSessionRequest( + database=database.name, + ) + + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + gax_api.create_session.assert_called_once_with( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + 
"x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + self.assertSpanAttributes( + "CloudSpanner.CreateSession", + attributes=dict( + TestSession.BASE_ATTRIBUTES, x_goog_spanner_request_id=req_id + ), + ) + def test_create_ok(self): session_pb = self._make_session_pb(self.SESSION_NAME) gax_api = self._make_spanner_api() @@ -119,8 +349,28 @@ def test_create_ok(self): self.assertEqual(session.session_id, self.SESSION_ID) + request = CreateSessionRequest( + database=database.name, + ) + + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" gax_api.create_session.assert_called_once_with( - database.name, metadata=[("google-cloud-resource-prefix", database.name)] + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + req_id, + ), + ], + ) + + self.assertSpanAttributes( + "CloudSpanner.CreateSession", + attributes=dict( + TestSession.BASE_ATTRIBUTES, x_goog_spanner_request_id=req_id + ), ) def test_create_w_labels(self): @@ -136,15 +386,32 @@ def test_create_w_labels(self): self.assertEqual(session.session_id, self.SESSION_ID) + request = CreateSessionRequest( + database=database.name, + session=SessionRequestProto(labels=labels), + ) + + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" gax_api.create_session.assert_called_once_with( - database.name, - session={"labels": labels}, - metadata=[("google-cloud-resource-prefix", database.name)], + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + req_id, + ), + ], ) - def test_create_error(self): - from google.api_core.exceptions import Unknown + self.assertSpanAttributes( + "CloudSpanner.CreateSession", + attributes=dict( + TestSession.BASE_ATTRIBUTES, foo="bar", x_goog_spanner_request_id=req_id + ), + ) + def test_create_error(self): gax_api = self._make_spanner_api() gax_api.create_session.side_effect = Unknown("error") database = self._make_database() @@ -154,11 +421,22 @@ def test_create_error(self): with self.assertRaises(Unknown): session.create() + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + self.assertSpanAttributes( + "CloudSpanner.CreateSession", + status=StatusCode.ERROR, + attributes=dict( + TestSession.BASE_ATTRIBUTES, x_goog_spanner_request_id=req_id + ), + ) + def test_exists_wo_session_id(self): database = self._make_database() session = self._make_one(database) self.assertFalse(session.exists()) + self.assertNoSpans() + def test_exists_hit(self): session_pb = self._make_session_pb(self.SESSION_NAME) gax_api = self._make_spanner_api() @@ -170,14 +448,94 @@ def test_exists_hit(self): self.assertTrue(session.exists()) + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + gax_api.get_session.assert_called_once_with( + name=self.SESSION_NAME, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + req_id, + ), + ], + ) + + self.assertSpanAttributes( + "CloudSpanner.GetSession", + attributes=dict( + TestSession.BASE_ATTRIBUTES, + session_found=True, + x_goog_spanner_request_id=req_id, + ), + ) + + @mock.patch( + "google.cloud.spanner_v1._opentelemetry_tracing.HAS_OPENTELEMETRY_INSTALLED", + 
False, + ) + def test_exists_hit_wo_span(self): + session_pb = self._make_session_pb(self.SESSION_NAME) + gax_api = self._make_spanner_api() + gax_api.get_session.return_value = session_pb + database = self._make_database() + database.spanner_api = gax_api + session = self._make_one(database) + session._session_id = self.SESSION_ID + + self.assertTrue(session.exists()) + gax_api.get_session.assert_called_once_with( - self.SESSION_NAME, - metadata=[("google-cloud-resource-prefix", database.name)], + name=self.SESSION_NAME, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) + self.assertNoSpans() + def test_exists_miss(self): - from google.api_core.exceptions import NotFound + gax_api = self._make_spanner_api() + gax_api.get_session.side_effect = NotFound("testing") + database = self._make_database() + database.spanner_api = gax_api + session = self._make_one(database) + session._session_id = self.SESSION_ID + + self.assertFalse(session.exists()) + + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + gax_api.get_session.assert_called_once_with( + name=self.SESSION_NAME, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + req_id, + ), + ], + ) + self.assertSpanAttributes( + "CloudSpanner.GetSession", + attributes=dict( + TestSession.BASE_ATTRIBUTES, + session_found=False, + x_goog_spanner_request_id=req_id, + ), + ) + + @mock.patch( + "google.cloud.spanner_v1._opentelemetry_tracing.HAS_OPENTELEMETRY_INSTALLED", + False, + ) + def test_exists_miss_wo_span(self): gax_api = self._make_spanner_api() gax_api.get_session.side_effect = NotFound("testing") database = self._make_database() @@ -188,13 +546,20 @@ def test_exists_miss(self): self.assertFalse(session.exists()) gax_api.get_session.assert_called_once_with( - self.SESSION_NAME, - metadata=[("google-cloud-resource-prefix", database.name)], + name=self.SESSION_NAME, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) - def test_exists_error(self): - from google.api_core.exceptions import Unknown + self.assertNoSpans() + def test_exists_error(self): gax_api = self._make_spanner_api() gax_api.get_session.side_effect = Unknown("testing") database = self._make_database() @@ -205,9 +570,111 @@ def test_exists_error(self): with self.assertRaises(Unknown): session.exists() + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" gax_api.get_session.assert_called_once_with( - self.SESSION_NAME, - metadata=[("google-cloud-resource-prefix", database.name)], + name=self.SESSION_NAME, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + req_id, + ), + ], + ) + + self.assertSpanAttributes( + "CloudSpanner.GetSession", + status=StatusCode.ERROR, + attributes=dict( + TestSession.BASE_ATTRIBUTES, x_goog_spanner_request_id=req_id + ), + ) + + def test_ping_wo_session_id(self): + database = self._make_database() + session = self._make_one(database) + with self.assertRaises(ValueError): + session.ping() + + def test_ping_hit(self): + 
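# Per this test and the two that follow, Session.ping() is a cheap
# liveness probe: it runs "SELECT 1" on the session and surfaces
# NotFound once the session has expired server-side. A minimal
# standalone sketch of that probe, using only the ExecuteSqlRequest
# type and `execute_sql` call already exercised in this file
# (`probe_session` is a hypothetical helper name):
from google.cloud.spanner_v1 import ExecuteSqlRequest

def probe_session(api, session_name):
    # Raises NotFound if the session no longer exists on the server.
    request = ExecuteSqlRequest(session=session_name, sql="SELECT 1")
    return api.execute_sql(request=request)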
gax_api = self._make_spanner_api() + gax_api.execute_sql.return_value = "1" + database = self._make_database() + database.spanner_api = gax_api + session = self._make_one(database) + session._session_id = self.SESSION_ID + + session.ping() + + request = ExecuteSqlRequest( + session=self.SESSION_NAME, + sql="SELECT 1", + ) + + gax_api.execute_sql.assert_called_once_with( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_ping_miss(self): + gax_api = self._make_spanner_api() + gax_api.execute_sql.side_effect = NotFound("testing") + database = self._make_database() + database.spanner_api = gax_api + session = self._make_one(database) + session._session_id = self.SESSION_ID + + with self.assertRaises(NotFound): + session.ping() + + request = ExecuteSqlRequest( + session=self.SESSION_NAME, + sql="SELECT 1", + ) + + gax_api.execute_sql.assert_called_once_with( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_ping_error(self): + gax_api = self._make_spanner_api() + gax_api.execute_sql.side_effect = Unknown("testing") + database = self._make_database() + database.spanner_api = gax_api + session = self._make_one(database) + session._session_id = self.SESSION_ID + + with self.assertRaises(Unknown): + session.ping() + + request = ExecuteSqlRequest( + session=self.SESSION_NAME, + sql="SELECT 1", + ) + + gax_api.execute_sql.assert_called_once_with( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) def test_delete_wo_session_id(self): @@ -217,6 +684,8 @@ def test_delete_wo_session_id(self): with self.assertRaises(ValueError): session.delete() + self.assertNoSpans() + def test_delete_hit(self): gax_api = self._make_spanner_api() gax_api.delete_session.return_value = None @@ -227,14 +696,26 @@ def test_delete_hit(self): session.delete() + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" gax_api.delete_session.assert_called_once_with( - self.SESSION_NAME, - metadata=[("google-cloud-resource-prefix", database.name)], + name=self.SESSION_NAME, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + req_id, + ), + ], ) - def test_delete_miss(self): - from google.cloud.exceptions import NotFound + attrs = {"session.id": session._session_id, "session.name": session.name} + attrs.update(TestSession.BASE_ATTRIBUTES) + self.assertSpanAttributes( + "CloudSpanner.DeleteSession", + attributes=dict(attrs, x_goog_spanner_request_id=req_id), + ) + def test_delete_miss(self): gax_api = self._make_spanner_api() gax_api.delete_session.side_effect = NotFound("testing") database = self._make_database() @@ -245,14 +726,32 @@ def test_delete_miss(self): with self.assertRaises(NotFound): session.delete() + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" gax_api.delete_session.assert_called_once_with( - self.SESSION_NAME, - metadata=[("google-cloud-resource-prefix", database.name)], + name=self.SESSION_NAME, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + 
"x-goog-spanner-request-id", + req_id, + ), + ], ) - def test_delete_error(self): - from google.api_core.exceptions import Unknown + attrs = { + "session.id": session._session_id, + "session.name": session.name, + "x_goog_spanner_request_id": req_id, + } + attrs.update(TestSession.BASE_ATTRIBUTES) + + self.assertSpanAttributes( + "CloudSpanner.DeleteSession", + status=StatusCode.ERROR, + attributes=attrs, + ) + def test_delete_error(self): gax_api = self._make_spanner_api() gax_api.delete_session.side_effect = Unknown("testing") database = self._make_database() @@ -263,9 +762,29 @@ def test_delete_error(self): with self.assertRaises(Unknown): session.delete() + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" gax_api.delete_session.assert_called_once_with( - self.SESSION_NAME, - metadata=[("google-cloud-resource-prefix", database.name)], + name=self.SESSION_NAME, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + req_id, + ), + ], + ) + + attrs = { + "session.id": session._session_id, + "session.name": session.name, + "x_goog_spanner_request_id": req_id, + } + attrs.update(TestSession.BASE_ATTRIBUTES) + + self.assertSpanAttributes( + "CloudSpanner.DeleteSession", + status=StatusCode.ERROR, + attributes=attrs, ) def test_snapshot_not_created(self): @@ -276,8 +795,6 @@ def test_snapshot_not_created(self): session.snapshot() def test_snapshot_created(self): - from google.cloud.spanner_v1.snapshot import Snapshot - database = self._make_database() session = self._make_one(database) session._session_id = "DEADBEEF" # emulate 'session.create()' @@ -290,8 +807,6 @@ def test_snapshot_created(self): self.assertFalse(snapshot._multi_use) def test_snapshot_created_w_multi_use(self): - from google.cloud.spanner_v1.snapshot import Snapshot - database = self._make_database() session = self._make_one(database) session._session_id = "DEADBEEF" # emulate 'session.create()' @@ -304,8 +819,6 @@ def test_snapshot_created_w_multi_use(self): self.assertTrue(snapshot._multi_use) def test_read_not_created(self): - from google.cloud.spanner_v1.keyset import KeySet - TABLE_NAME = "citizens" COLUMNS = ["email", "first_name", "last_name", "age"] KEYS = ["bharney@example.com", "phred@example.com"] @@ -317,8 +830,6 @@ def test_read_not_created(self): session.read(TABLE_NAME, COLUMNS, KEYSET) def test_read(self): - from google.cloud.spanner_v1.keyset import KeySet - TABLE_NAME = "citizens" COLUMNS = ["email", "first_name", "last_name", "age"] KEYS = ["bharney@example.com", "phred@example.com"] @@ -335,7 +846,12 @@ def test_read(self): self.assertIs(found, snapshot().read.return_value) snapshot().read.assert_called_once_with( - TABLE_NAME, COLUMNS, KEYSET, INDEX, LIMIT + TABLE_NAME, + COLUMNS, + KEYSET, + INDEX, + LIMIT, + column_info=None, ) def test_execute_sql_not_created(self): @@ -363,21 +879,20 @@ def test_execute_sql_defaults(self): None, None, query_options=None, + request_options=None, timeout=google.api_core.gapic_v1.method.DEFAULT, retry=google.api_core.gapic_v1.method.DEFAULT, + column_info=None, ) def test_execute_sql_non_default_retry(self): - from google.protobuf.struct_pb2 import Struct, Value - from google.cloud.spanner_v1.proto.type_pb2 import STRING - SQL = "SELECT first_name, age FROM citizens" database = self._make_database() session = self._make_one(database) session._session_id = "DEADBEEF" params = Struct(fields={"foo": Value(string_value="bar")}) - param_types = {"foo": STRING} + param_types = {"foo": 
TypeCode.STRING} with mock.patch("google.cloud.spanner_v1.session.Snapshot") as snapshot: found = session.execute_sql( @@ -392,21 +907,20 @@ def test_execute_sql_non_default_retry(self): param_types, "PLAN", query_options=None, + request_options=None, timeout=None, retry=None, + column_info=None, ) def test_execute_sql_explicit(self): - from google.protobuf.struct_pb2 import Struct, Value - from google.cloud.spanner_v1.proto.type_pb2 import STRING - SQL = "SELECT first_name, age FROM citizens" database = self._make_database() session = self._make_one(database) session._session_id = "DEADBEEF" params = Struct(fields={"foo": Value(string_value="bar")}) - param_types = {"foo": STRING} + param_types = {"foo": TypeCode.STRING} with mock.patch("google.cloud.spanner_v1.session.Snapshot") as snapshot: found = session.execute_sql(SQL, params, param_types, "PLAN") @@ -419,8 +933,10 @@ def test_execute_sql_explicit(self): param_types, "PLAN", query_options=None, + request_options=None, timeout=google.api_core.gapic_v1.method.DEFAULT, retry=google.api_core.gapic_v1.method.DEFAULT, + column_info=None, ) def test_batch_not_created(self): @@ -431,8 +947,6 @@ def test_batch_not_created(self): session.batch() def test_batch_created(self): - from google.cloud.spanner_v1.batch import Batch - database = self._make_database() session = self._make_one(database) session._session_id = "DEADBEEF" @@ -450,8 +964,6 @@ def test_transaction_not_created(self): session.transaction() def test_transaction_created(self): - from google.cloud.spanner_v1.transaction import Transaction - database = self._make_database() session = self._make_one(database) session._session_id = "DEADBEEF" @@ -460,26 +972,8 @@ def test_transaction_created(self): self.assertIsInstance(transaction, Transaction) self.assertIs(transaction._session, session) - self.assertIs(session._transaction, transaction) - - def test_transaction_w_existing_txn(self): - database = self._make_database() - session = self._make_one(database) - session._session_id = "DEADBEEF" - - existing = session.transaction() - another = session.transaction() # invalidates existing txn - - self.assertIs(session._transaction, another) - self.assertTrue(existing.rolled_back) def test_run_in_transaction_callback_raises_non_gax_error(self): - from google.cloud.spanner_v1.proto.transaction_pb2 import ( - Transaction as TransactionPB, - TransactionOptions, - ) - from google.cloud.spanner_v1.transaction import Transaction - TABLE_NAME = "citizens" COLUMNS = ["email", "first_name", "last_name", "age"] VALUES = [ @@ -509,7 +1003,6 @@ def unit_of_work(txn, *args, **kw): with self.assertRaises(Testing): session.run_in_transaction(unit_of_work) - self.assertIsNone(session._transaction) self.assertEqual(len(called_with), 1) txn, args, kw = called_with[0] self.assertIsInstance(txn, Transaction) @@ -517,27 +1010,13 @@ def unit_of_work(txn, *args, **kw): self.assertTrue(txn.rolled_back) self.assertEqual(args, ()) self.assertEqual(kw, {}) - - expected_options = TransactionOptions(read_write=TransactionOptions.ReadWrite()) - gax_api.begin_transaction.assert_called_once_with( - self.SESSION_NAME, - expected_options, - metadata=[("google-cloud-resource-prefix", database.name)], - ) - gax_api.rollback.assert_called_once_with( - self.SESSION_NAME, - TRANSACTION_ID, - metadata=[("google-cloud-resource-prefix", database.name)], - ) + # Transaction only has mutation operations. + # Exception was raised before commit, hence transaction did not begin. 
+ # Therefore rollback and begin transaction were not called. + gax_api.rollback.assert_not_called() + gax_api.begin_transaction.assert_not_called() def test_run_in_transaction_callback_raises_non_abort_rpc_error(self): - from google.api_core.exceptions import Cancelled - from google.cloud.spanner_v1.proto.transaction_pb2 import ( - Transaction as TransactionPB, - TransactionOptions, - ) - from google.cloud.spanner_v1.transaction import Transaction - TABLE_NAME = "citizens" COLUMNS = ["email", "first_name", "last_name", "age"] VALUES = [ @@ -564,7 +1043,6 @@ def unit_of_work(txn, *args, **kw): with self.assertRaises(Cancelled): session.run_in_transaction(unit_of_work) - self.assertIsNone(session._transaction) self.assertEqual(len(called_with), 1) txn, args, kw = called_with[0] self.assertIsInstance(txn, Transaction) @@ -573,27 +1051,134 @@ def unit_of_work(txn, *args, **kw): self.assertEqual(args, ()) self.assertEqual(kw, {}) - expected_options = TransactionOptions(read_write=TransactionOptions.ReadWrite()) - gax_api.begin_transaction.assert_called_once_with( - self.SESSION_NAME, - expected_options, - metadata=[("google-cloud-resource-prefix", database.name)], - ) gax_api.rollback.assert_not_called() - def test_run_in_transaction_w_args_w_kwargs_wo_abort(self): - import datetime - from google.cloud.spanner_v1.proto.spanner_pb2 import CommitResponse - from google.cloud.spanner_v1.proto.transaction_pb2 import ( - Transaction as TransactionPB, - TransactionOptions, + def test_run_in_transaction_retry_callback_raises_abort(self): + session = build_session() + database = session._database + + # Build API responses. + api = database.spanner_api + begin_transaction = api.begin_transaction + streaming_read = api.streaming_read + streaming_read.side_effect = [_make_rpc_error(Aborted), []] + + # Run in transaction. + def unit_of_work(transaction): + transaction.begin() + list(transaction.read(TABLE_NAME, COLUMNS, KEYSET)) + + session.create() + session.run_in_transaction(unit_of_work) + + self.assertEqual(begin_transaction.call_count, 2) + + begin_transaction.assert_called_with( + request=BeginTransactionRequest( + session=session.name, + options=TransactionOptions(read_write=TransactionOptions.ReadWrite()), + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.4.1", + ), + ], ) - from google.cloud._helpers import UTC - from google.cloud._helpers import _datetime_to_pb_timestamp - from google.cloud.spanner_v1.transaction import Transaction - TABLE_NAME = "citizens" - COLUMNS = ["email", "first_name", "last_name", "age"] + def test_run_in_transaction_retry_callback_raises_abort_multiplexed(self): + session = build_session(is_multiplexed=True) + database = session._database + api = database.spanner_api + + # Build API responses + previous_transaction_id = b"transaction-id" + begin_transaction = api.begin_transaction + begin_transaction.return_value = build_transaction_pb( + id=previous_transaction_id + ) + + streaming_read = api.streaming_read + streaming_read.side_effect = [_make_rpc_error(Aborted), []] + + # Run in transaction. + def unit_of_work(transaction): + transaction.begin() + list(transaction.read(TABLE_NAME, COLUMNS, KEYSET)) + + session.create() + session.run_in_transaction(unit_of_work) + + # Verify retried BeginTransaction API call. 
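# The x-goog-spanner-request-id values asserted below, like every other
# one in this file, follow the six-part dotted shape built by the
# f-strings:
#   <const>.<REQ_RAND_PROCESS_ID>.<nth_client_id>.<channel_id>.<nth_request>.<attempt>
# A small illustrative parser for that shape (hypothetical helper,
# assuming the random process component itself contains no dots):
def parse_request_id(value):
    const, process_rand, client, channel, request, attempt = value.split(".")
    return {
        "const": const,
        "process_rand": process_rand,
        "client": int(client),
        "channel": int(channel),
        "request": int(request),
        "attempt": int(attempt),
    }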
+ self.assertEqual(begin_transaction.call_count, 2) + + begin_transaction.assert_called_with( + request=BeginTransactionRequest( + session=session.name, + options=TransactionOptions( + read_write=TransactionOptions.ReadWrite( + multiplexed_session_previous_transaction_id=previous_transaction_id + ) + ), + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.4.1", + ), + ], + ) + + def test_run_in_transaction_retry_commit_raises_abort_multiplexed(self): + session = build_session(is_multiplexed=True) + database = session._database + + # Build API responses + api = database.spanner_api + previous_transaction_id = b"transaction-id" + begin_transaction = api.begin_transaction + begin_transaction.return_value = build_transaction_pb( + id=previous_transaction_id + ) + + commit = api.commit + commit.side_effect = [_make_rpc_error(Aborted), build_commit_response_pb()] + + # Run in transaction. + def unit_of_work(transaction): + transaction.begin() + list(transaction.read(TABLE_NAME, COLUMNS, KEYSET)) + + session.create() + session.run_in_transaction(unit_of_work) + + # Verify retried BeginTransaction API call. + self.assertEqual(begin_transaction.call_count, 2) + + begin_transaction.assert_called_with( + request=BeginTransactionRequest( + session=session.name, + options=TransactionOptions( + read_write=TransactionOptions.ReadWrite( + multiplexed_session_previous_transaction_id=previous_transaction_id + ) + ), + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.5.1", + ), + ], + ) + + def test_run_in_transaction_w_args_w_kwargs_wo_abort(self): VALUES = [ ["phred@exammple.com", "Phred", "Phlyntstone", 32], ["bharney@example.com", "Bharney", "Rhubble", 31], @@ -620,7 +1205,6 @@ def unit_of_work(txn, *args, **kw): return_value = session.run_in_transaction(unit_of_work, "abc", some_arg="def") - self.assertIsNone(session._transaction) self.assertEqual(len(called_with), 1) txn, args, kw = called_with[0] self.assertIsInstance(txn, Transaction) @@ -630,38 +1214,53 @@ def unit_of_work(txn, *args, **kw): expected_options = TransactionOptions(read_write=TransactionOptions.ReadWrite()) gax_api.begin_transaction.assert_called_once_with( - self.SESSION_NAME, - expected_options, - metadata=[("google-cloud-resource-prefix", database.name)], + request=BeginTransactionRequest( + session=self.SESSION_NAME, options=expected_options + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) - gax_api.commit.assert_called_once_with( - self.SESSION_NAME, + request = CommitRequest( + session=self.SESSION_NAME, mutations=txn._mutations, transaction_id=TRANSACTION_ID, - metadata=[("google-cloud-resource-prefix", database.name)], + request_options=RequestOptions(), + ) + gax_api.commit.assert_called_once_with( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], ) def 
test_run_in_transaction_w_commit_error(self): - from google.api_core.exceptions import Unknown - from google.cloud.spanner_v1.transaction import Transaction - TABLE_NAME = "citizens" COLUMNS = ["email", "first_name", "last_name", "age"] VALUES = [ ["phred@exammple.com", "Phred", "Phlyntstone", 32], ["bharney@example.com", "Bharney", "Rhubble", 31], ] - TRANSACTION_ID = b"FACEDACE" - gax_api = self._make_spanner_api() - gax_api.commit.side_effect = Unknown("error") database = self._make_database() - database.spanner_api = gax_api + + api = database.spanner_api = build_spanner_api() + begin_transaction = api.begin_transaction + commit = api.commit + + commit.side_effect = Unknown("error") + session = self._make_one(database) session._session_id = self.SESSION_ID - begun_txn = session._transaction = Transaction(session) - begun_txn._transaction_id = TRANSACTION_ID - - assert session._transaction._transaction_id called_with = [] @@ -672,41 +1271,45 @@ def unit_of_work(txn, *args, **kw): with self.assertRaises(Unknown): session.run_in_transaction(unit_of_work) - self.assertIsNone(session._transaction) self.assertEqual(len(called_with), 1) txn, args, kw = called_with[0] - self.assertIs(txn, begun_txn) self.assertEqual(txn.committed, None) self.assertEqual(args, ()) self.assertEqual(kw, {}) - gax_api.begin_transaction.assert_not_called() - gax_api.commit.assert_called_once_with( - self.SESSION_NAME, - mutations=txn._mutations, - transaction_id=TRANSACTION_ID, - metadata=[("google-cloud-resource-prefix", database.name)], + begin_transaction.assert_called_once_with( + request=BeginTransactionRequest( + session=session.name, + options=TransactionOptions(read_write=TransactionOptions.ReadWrite()), + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) - def test_run_in_transaction_w_abort_no_retry_metadata(self): - import datetime - from google.api_core.exceptions import Aborted - from google.cloud.spanner_v1.proto.spanner_pb2 import CommitResponse - from google.cloud.spanner_v1.proto.transaction_pb2 import ( - Transaction as TransactionPB, - TransactionOptions, + api.commit.assert_called_once_with( + request=CommitRequest( + session=session.name, + mutations=txn._mutations, + transaction_id=begin_transaction.return_value.id, + request_options=RequestOptions(), + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], ) - from google.cloud._helpers import UTC - from google.cloud._helpers import _datetime_to_pb_timestamp - from google.cloud.spanner_v1.transaction import Transaction - TABLE_NAME = "citizens" - COLUMNS = ["email", "first_name", "last_name", "age"] - VALUES = [ - ["phred@exammple.com", "Phred", "Phlyntstone", 32], - ["bharney@example.com", "Bharney", "Rhubble", 31], - ] - TRANSACTION_ID = b"FACEDACE" + def test_run_in_transaction_w_abort_no_retry_metadata(self): transaction_pb = TransactionPB(id=TRANSACTION_ID) now = datetime.datetime.utcnow().replace(tzinfo=UTC) now_pb = _datetime_to_pb_timestamp(now) @@ -727,7 +1330,9 @@ def unit_of_work(txn, *args, **kw): txn.insert(TABLE_NAME, COLUMNS, VALUES) return "answer" - return_value = session.run_in_transaction(unit_of_work, "abc", some_arg="def") + return_value = 
session.run_in_transaction( + unit_of_work, "abc", some_arg="def", default_retry_delay=0 + ) self.assertEqual(len(called_with), 2) for index, (txn, args, kw) in enumerate(called_with): @@ -736,52 +1341,78 @@ def unit_of_work(txn, *args, **kw): self.assertEqual(args, ("abc",)) self.assertEqual(kw, {"some_arg": "def"}) - expected_options = TransactionOptions(read_write=TransactionOptions.ReadWrite()) self.assertEqual( gax_api.begin_transaction.call_args_list, [ mock.call( - self.SESSION_NAME, - expected_options, - metadata=[("google-cloud-resource-prefix", database.name)], - ) - ] - * 2, + request=BeginTransactionRequest( + session=session.name, + options=TransactionOptions( + read_write=TransactionOptions.ReadWrite() + ), + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ), + mock.call( + request=BeginTransactionRequest( + session=session.name, + options=TransactionOptions( + read_write=TransactionOptions.ReadWrite() + ), + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.3.1", + ), + ], + ), + ], + ) + request = CommitRequest( + session=self.SESSION_NAME, + mutations=txn._mutations, + transaction_id=TRANSACTION_ID, + request_options=RequestOptions(), ) self.assertEqual( gax_api.commit.call_args_list, [ mock.call( - self.SESSION_NAME, - mutations=txn._mutations, - transaction_id=TRANSACTION_ID, - metadata=[("google-cloud-resource-prefix", database.name)], - ) - ] - * 2, + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], + ), + mock.call( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.4.1", + ), + ], + ), + ], ) def test_run_in_transaction_w_abort_w_retry_metadata(self): - import datetime - from google.api_core.exceptions import Aborted - from google.protobuf.duration_pb2 import Duration - from google.rpc.error_details_pb2 import RetryInfo - from google.cloud.spanner_v1.proto.spanner_pb2 import CommitResponse - from google.cloud.spanner_v1.proto.transaction_pb2 import ( - Transaction as TransactionPB, - TransactionOptions, - ) - from google.cloud._helpers import UTC - from google.cloud._helpers import _datetime_to_pb_timestamp - from google.cloud.spanner_v1.transaction import Transaction - - TABLE_NAME = "citizens" - COLUMNS = ["email", "first_name", "last_name", "age"] - VALUES = [ - ["phred@exammple.com", "Phred", "Phlyntstone", 32], - ["bharney@example.com", "Bharney", "Rhubble", 31], - ] - TRANSACTION_ID = b"FACEDACE" RETRY_SECONDS = 12 RETRY_NANOS = 3456 retry_info = RetryInfo( @@ -824,52 +1455,78 @@ def unit_of_work(txn, *args, **kw): self.assertEqual(args, ("abc",)) self.assertEqual(kw, {"some_arg": "def"}) - expected_options = TransactionOptions(read_write=TransactionOptions.ReadWrite()) self.assertEqual( gax_api.begin_transaction.call_args_list, [ mock.call( - self.SESSION_NAME, - expected_options, - 
metadata=[("google-cloud-resource-prefix", database.name)], - ) - ] - * 2, + request=BeginTransactionRequest( + session=session.name, + options=TransactionOptions( + read_write=TransactionOptions.ReadWrite() + ), + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ), + mock.call( + request=BeginTransactionRequest( + session=session.name, + options=TransactionOptions( + read_write=TransactionOptions.ReadWrite() + ), + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.3.1", + ), + ], + ), + ], + ) + request = CommitRequest( + session=self.SESSION_NAME, + mutations=txn._mutations, + transaction_id=TRANSACTION_ID, + request_options=RequestOptions(), ) self.assertEqual( gax_api.commit.call_args_list, [ mock.call( - self.SESSION_NAME, - mutations=txn._mutations, - transaction_id=TRANSACTION_ID, - metadata=[("google-cloud-resource-prefix", database.name)], - ) - ] - * 2, + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], + ), + mock.call( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.4.1", + ), + ], + ), + ], ) def test_run_in_transaction_w_callback_raises_abort_wo_metadata(self): - import datetime - from google.api_core.exceptions import Aborted - from google.protobuf.duration_pb2 import Duration - from google.rpc.error_details_pb2 import RetryInfo - from google.cloud.spanner_v1.proto.spanner_pb2 import CommitResponse - from google.cloud.spanner_v1.proto.transaction_pb2 import ( - Transaction as TransactionPB, - TransactionOptions, - ) - from google.cloud._helpers import UTC - from google.cloud._helpers import _datetime_to_pb_timestamp - from google.cloud.spanner_v1.transaction import Transaction - - TABLE_NAME = "citizens" - COLUMNS = ["email", "first_name", "last_name", "age"] - VALUES = [ - ["phred@exammple.com", "Phred", "Phlyntstone", 32], - ["bharney@example.com", "Bharney", "Rhubble", 31], - ] - TRANSACTION_ID = b"FACEDACE" RETRY_SECONDS = 1 RETRY_NANOS = 3456 transaction_pb = TransactionPB(id=TRANSACTION_ID) @@ -913,45 +1570,40 @@ def unit_of_work(txn, *args, **kw): self.assertEqual(kw, {}) expected_options = TransactionOptions(read_write=TransactionOptions.ReadWrite()) - self.assertEqual( - gax_api.begin_transaction.call_args_list, - [ - mock.call( - self.SESSION_NAME, - expected_options, - metadata=[("google-cloud-resource-prefix", database.name)], - ) - ] - * 2, + + # First call was aborted before commit operation, therefore no begin rpc was made during first attempt. 
+ gax_api.begin_transaction.assert_called_once_with( + request=BeginTransactionRequest( + session=self.SESSION_NAME, options=expected_options + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) - gax_api.commit.assert_called_once_with( - self.SESSION_NAME, + request = CommitRequest( + session=self.SESSION_NAME, mutations=txn._mutations, transaction_id=TRANSACTION_ID, - metadata=[("google-cloud-resource-prefix", database.name)], + request_options=RequestOptions(), + ) + gax_api.commit.assert_called_once_with( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], ) def test_run_in_transaction_w_abort_w_retry_metadata_deadline(self): - import datetime - from google.api_core.exceptions import Aborted - from google.protobuf.duration_pb2 import Duration - from google.rpc.error_details_pb2 import RetryInfo - from google.cloud.spanner_v1.proto.spanner_pb2 import CommitResponse - from google.cloud.spanner_v1.proto.transaction_pb2 import ( - Transaction as TransactionPB, - TransactionOptions, - ) - from google.cloud.spanner_v1.transaction import Transaction - from google.cloud._helpers import UTC - from google.cloud._helpers import _datetime_to_pb_timestamp - - TABLE_NAME = "citizens" - COLUMNS = ["email", "first_name", "last_name", "age"] - VALUES = [ - ["phred@exammple.com", "Phred", "Phlyntstone", 32], - ["bharney@example.com", "Bharney", "Rhubble", 31], - ] - TRANSACTION_ID = b"FACEDACE" RETRY_SECONDS = 1 RETRY_NANOS = 3456 transaction_pb = TransactionPB(id=TRANSACTION_ID) @@ -999,32 +1651,37 @@ def _time(_results=[1, 1.5]): expected_options = TransactionOptions(read_write=TransactionOptions.ReadWrite()) gax_api.begin_transaction.assert_called_once_with( - self.SESSION_NAME, - expected_options, - metadata=[("google-cloud-resource-prefix", database.name)], + request=BeginTransactionRequest( + session=self.SESSION_NAME, options=expected_options + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], ) - gax_api.commit.assert_called_once_with( - self.SESSION_NAME, + request = CommitRequest( + session=self.SESSION_NAME, mutations=txn._mutations, transaction_id=TRANSACTION_ID, - metadata=[("google-cloud-resource-prefix", database.name)], + request_options=RequestOptions(), ) - - def test_run_in_transaction_w_timeout(self): - from google.api_core.exceptions import Aborted - from google.cloud.spanner_v1.proto.transaction_pb2 import ( - Transaction as TransactionPB, - TransactionOptions, + gax_api.commit.assert_called_once_with( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], ) - from google.cloud.spanner_v1.transaction import Transaction - TABLE_NAME = "citizens" - COLUMNS = ["email", "first_name", "last_name", "age"] - VALUES = [ - ["phred@exammple.com", "Phred", "Phlyntstone", 32], - ["bharney@example.com", "Bharney", 
"Rhubble", 31], - ] - TRANSACTION_ID = b"FACEDACE" + def test_run_in_transaction_w_timeout(self): transaction_pb = TransactionPB(id=TRANSACTION_ID) aborted = _make_rpc_error(Aborted, trailing_metadata=[]) gax_api = self._make_spanner_api() @@ -1063,34 +1720,834 @@ def _time(_results=[1, 2, 4, 8]): self.assertEqual(args, ()) self.assertEqual(kw, {}) + self.assertEqual( + gax_api.begin_transaction.call_args_list, + [ + mock.call( + request=BeginTransactionRequest( + session=session.name, + options=TransactionOptions( + read_write=TransactionOptions.ReadWrite() + ), + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ), + mock.call( + request=BeginTransactionRequest( + session=session.name, + options=TransactionOptions( + read_write=TransactionOptions.ReadWrite() + ), + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.3.1", + ), + ], + ), + mock.call( + request=BeginTransactionRequest( + session=session.name, + options=TransactionOptions( + read_write=TransactionOptions.ReadWrite() + ), + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.5.1", + ), + ], + ), + ], + ) + request = CommitRequest( + session=self.SESSION_NAME, + mutations=txn._mutations, + transaction_id=TRANSACTION_ID, + request_options=RequestOptions(), + ) + self.assertEqual( + gax_api.commit.call_args_list, + [ + mock.call( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], + ), + mock.call( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.4.1", + ), + ], + ), + mock.call( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.6.1", + ), + ], + ), + ], + ) + + def test_run_in_transaction_w_commit_stats_success(self): + transaction_pb = TransactionPB(id=TRANSACTION_ID) + now = datetime.datetime.utcnow().replace(tzinfo=UTC) + now_pb = _datetime_to_pb_timestamp(now) + commit_stats = CommitResponse.CommitStats(mutation_count=4) + response = CommitResponse(commit_timestamp=now_pb, commit_stats=commit_stats) + gax_api = self._make_spanner_api() + gax_api.begin_transaction.return_value = transaction_pb + gax_api.commit.return_value = response + database = self._make_database() + database.log_commit_stats = True + database.spanner_api = gax_api + session = self._make_one(database) + session._session_id = self.SESSION_ID + + called_with = [] + + def unit_of_work(txn, *args, **kw): + called_with.append((txn, args, kw)) + txn.insert(TABLE_NAME, COLUMNS, VALUES) + return 42 + + return_value = session.run_in_transaction(unit_of_work, 
"abc", some_arg="def") + + self.assertEqual(len(called_with), 1) + txn, args, kw = called_with[0] + self.assertIsInstance(txn, Transaction) + self.assertEqual(return_value, 42) + self.assertEqual(args, ("abc",)) + self.assertEqual(kw, {"some_arg": "def"}) + + expected_options = TransactionOptions(read_write=TransactionOptions.ReadWrite()) + gax_api.begin_transaction.assert_called_once_with( + request=BeginTransactionRequest( + session=self.SESSION_NAME, options=expected_options + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + request = CommitRequest( + session=self.SESSION_NAME, + mutations=txn._mutations, + transaction_id=TRANSACTION_ID, + return_commit_stats=True, + request_options=RequestOptions(), + ) + gax_api.commit.assert_called_once_with( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], + ) + database.logger.info.assert_called_once_with( + "CommitStats: mutation_count: 4\n", extra={"commit_stats": commit_stats} + ) + + def test_run_in_transaction_w_commit_stats_error(self): + transaction_pb = TransactionPB(id=TRANSACTION_ID) + gax_api = self._make_spanner_api() + gax_api.begin_transaction.return_value = transaction_pb + gax_api.commit.side_effect = Unknown("testing") + database = self._make_database() + database.log_commit_stats = True + database.spanner_api = gax_api + session = self._make_one(database) + session._session_id = self.SESSION_ID + + called_with = [] + + def unit_of_work(txn, *args, **kw): + called_with.append((txn, args, kw)) + txn.insert(TABLE_NAME, COLUMNS, VALUES) + return 42 + + with self.assertRaises(Unknown): + session.run_in_transaction(unit_of_work, "abc", some_arg="def") + + self.assertEqual(len(called_with), 1) + txn, args, kw = called_with[0] + self.assertIsInstance(txn, Transaction) + self.assertEqual(args, ("abc",)) + self.assertEqual(kw, {"some_arg": "def"}) + expected_options = TransactionOptions(read_write=TransactionOptions.ReadWrite()) + gax_api.begin_transaction.assert_called_once_with( + request=BeginTransactionRequest( + session=self.SESSION_NAME, options=expected_options + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + request = CommitRequest( + session=self.SESSION_NAME, + mutations=txn._mutations, + transaction_id=TRANSACTION_ID, + return_commit_stats=True, + request_options=RequestOptions(), + ) + gax_api.commit.assert_called_once_with( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], + ) + database.logger.info.assert_not_called() + + def test_run_in_transaction_w_transaction_tag(self): + transaction_pb = TransactionPB(id=TRANSACTION_ID) + now = datetime.datetime.utcnow().replace(tzinfo=UTC) + now_pb = _datetime_to_pb_timestamp(now) + commit_stats = CommitResponse.CommitStats(mutation_count=4) + response = 
CommitResponse(commit_timestamp=now_pb, commit_stats=commit_stats) + gax_api = self._make_spanner_api() + gax_api.begin_transaction.return_value = transaction_pb + gax_api.commit.return_value = response + database = self._make_database() + database.spanner_api = gax_api + session = self._make_one(database) + session._session_id = self.SESSION_ID + + called_with = [] + + def unit_of_work(txn, *args, **kw): + called_with.append((txn, args, kw)) + txn.insert(TABLE_NAME, COLUMNS, VALUES) + return 42 + + transaction_tag = "transaction_tag" + return_value = session.run_in_transaction( + unit_of_work, "abc", some_arg="def", transaction_tag=transaction_tag + ) + + self.assertEqual(len(called_with), 1) + txn, args, kw = called_with[0] + self.assertIsInstance(txn, Transaction) + self.assertEqual(return_value, 42) + self.assertEqual(args, ("abc",)) + self.assertEqual(kw, {"some_arg": "def"}) + + expected_options = TransactionOptions(read_write=TransactionOptions.ReadWrite()) + gax_api.begin_transaction.assert_called_once_with( + request=BeginTransactionRequest( + session=self.SESSION_NAME, options=expected_options + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + request = CommitRequest( + session=self.SESSION_NAME, + mutations=txn._mutations, + transaction_id=TRANSACTION_ID, + request_options=RequestOptions(transaction_tag=transaction_tag), + ) + gax_api.commit.assert_called_once_with( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], + ) + + def test_run_in_transaction_w_exclude_txn_from_change_streams(self): + transaction_pb = TransactionPB(id=TRANSACTION_ID) + now = datetime.datetime.utcnow().replace(tzinfo=UTC) + now_pb = _datetime_to_pb_timestamp(now) + commit_stats = CommitResponse.CommitStats(mutation_count=4) + response = CommitResponse(commit_timestamp=now_pb, commit_stats=commit_stats) + gax_api = self._make_spanner_api() + gax_api.begin_transaction.return_value = transaction_pb + gax_api.commit.return_value = response + database = self._make_database() + database.spanner_api = gax_api + session = self._make_one(database) + session._session_id = self.SESSION_ID + + called_with = [] + + def unit_of_work(txn, *args, **kw): + called_with.append((txn, args, kw)) + txn.insert(TABLE_NAME, COLUMNS, VALUES) + return 42 + + return_value = session.run_in_transaction( + unit_of_work, "abc", exclude_txn_from_change_streams=True + ) + + self.assertEqual(len(called_with), 1) + txn, args, kw = called_with[0] + self.assertIsInstance(txn, Transaction) + self.assertEqual(return_value, 42) + self.assertEqual(args, ("abc",)) + + expected_options = TransactionOptions( + read_write=TransactionOptions.ReadWrite(), + exclude_txn_from_change_streams=True, + ) + gax_api.begin_transaction.assert_called_once_with( + request=BeginTransactionRequest( + session=self.SESSION_NAME, options=expected_options + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + request = CommitRequest( + session=self.SESSION_NAME, + 
mutations=txn._mutations, + transaction_id=TRANSACTION_ID, + request_options=RequestOptions(), + ) + gax_api.commit.assert_called_once_with( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], + ) + + def test_run_in_transaction_w_abort_w_retry_metadata_w_exclude_txn_from_change_streams( + self, + ): + RETRY_SECONDS = 12 + RETRY_NANOS = 3456 + retry_info = RetryInfo( + retry_delay=Duration(seconds=RETRY_SECONDS, nanos=RETRY_NANOS) + ) + trailing_metadata = [ + ("google.rpc.retryinfo-bin", retry_info.SerializeToString()) + ] + aborted = _make_rpc_error(Aborted, trailing_metadata=trailing_metadata) + transaction_pb = TransactionPB(id=TRANSACTION_ID) + now = datetime.datetime.utcnow().replace(tzinfo=UTC) + now_pb = _datetime_to_pb_timestamp(now) + response = CommitResponse(commit_timestamp=now_pb) + gax_api = self._make_spanner_api() + gax_api.begin_transaction.return_value = transaction_pb + gax_api.commit.side_effect = [aborted, response] + database = self._make_database() + database.spanner_api = gax_api + session = self._make_one(database) + session._session_id = self.SESSION_ID + + called_with = [] + + def unit_of_work(txn, *args, **kw): + called_with.append((txn, args, kw)) + txn.insert(TABLE_NAME, COLUMNS, VALUES) + + with mock.patch("time.sleep") as sleep_mock: + session.run_in_transaction( + unit_of_work, + "abc", + some_arg="def", + exclude_txn_from_change_streams=True, + ) + + sleep_mock.assert_called_once_with(RETRY_SECONDS + RETRY_NANOS / 1.0e9) + self.assertEqual(len(called_with), 2) + + for index, (txn, args, kw) in enumerate(called_with): + self.assertIsInstance(txn, Transaction) + if index == 1: + self.assertEqual(txn.committed, now) + else: + self.assertIsNone(txn.committed) + self.assertEqual(args, ("abc",)) + self.assertEqual(kw, {"some_arg": "def"}) + self.assertEqual( gax_api.begin_transaction.call_args_list, [ mock.call( - self.SESSION_NAME, - expected_options, - metadata=[("google-cloud-resource-prefix", database.name)], - ) - ] - * 3, + request=BeginTransactionRequest( + session=session.name, + options=TransactionOptions( + read_write=TransactionOptions.ReadWrite(), + exclude_txn_from_change_streams=True, + ), + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ), + mock.call( + request=BeginTransactionRequest( + session=session.name, + options=TransactionOptions( + read_write=TransactionOptions.ReadWrite(), + exclude_txn_from_change_streams=True, + ), + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.3.1", + ), + ], + ), + ], + ) + request = CommitRequest( + session=self.SESSION_NAME, + mutations=txn._mutations, + transaction_id=TRANSACTION_ID, + request_options=RequestOptions(), ) self.assertEqual( gax_api.commit.call_args_list, [ mock.call( - self.SESSION_NAME, - mutations=txn._mutations, - transaction_id=TRANSACTION_ID, - metadata=[("google-cloud-resource-prefix", database.name)], - ) - ] - * 3, + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + 
("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], + ), + mock.call( + request=request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.4.1", + ), + ], + ), + ], ) - def test_delay_helper_w_no_delay(self): - from google.cloud.spanner_v1.session import _delay_until_retry + def test_run_in_transaction_w_isolation_level_at_request(self): + database = self._make_database() + api = database.spanner_api = build_spanner_api() + session = self._make_one(database) + session._session_id = self.SESSION_ID + + def unit_of_work(txn, *args, **kw): + txn.insert("test", [], []) + return 42 + + return_value = session.run_in_transaction( + unit_of_work, "abc", isolation_level="SERIALIZABLE" + ) + + self.assertEqual(return_value, 42) + + expected_options = TransactionOptions( + read_write=TransactionOptions.ReadWrite(), + isolation_level=TransactionOptions.IsolationLevel.SERIALIZABLE, + ) + api.begin_transaction.assert_called_once_with( + request=BeginTransactionRequest( + session=self.SESSION_NAME, options=expected_options + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_run_in_transaction_w_isolation_level_at_client(self): + database = self._make_database( + default_transaction_options=DefaultTransactionOptions( + isolation_level="SERIALIZABLE" + ) + ) + api = database.spanner_api = build_spanner_api() + session = self._make_one(database) + session._session_id = self.SESSION_ID + + def unit_of_work(txn, *args, **kw): + txn.insert("test", [], []) + return 42 + + return_value = session.run_in_transaction(unit_of_work, "abc") + + self.assertEqual(return_value, 42) + + expected_options = TransactionOptions( + read_write=TransactionOptions.ReadWrite(), + isolation_level=TransactionOptions.IsolationLevel.SERIALIZABLE, + ) + api.begin_transaction.assert_called_once_with( + request=BeginTransactionRequest( + session=self.SESSION_NAME, options=expected_options + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_run_in_transaction_w_isolation_level_at_request_overrides_client(self): + database = self._make_database( + default_transaction_options=DefaultTransactionOptions( + isolation_level="SERIALIZABLE" + ) + ) + api = database.spanner_api = build_spanner_api() + session = self._make_one(database) + session._session_id = self.SESSION_ID + + def unit_of_work(txn, *args, **kw): + txn.insert("test", [], []) + return 42 + + return_value = session.run_in_transaction( + unit_of_work, + "abc", + isolation_level=TransactionOptions.IsolationLevel.REPEATABLE_READ, + ) + + self.assertEqual(return_value, 42) + + expected_options = TransactionOptions( + read_write=TransactionOptions.ReadWrite(), + isolation_level=TransactionOptions.IsolationLevel.REPEATABLE_READ, + ) + api.begin_transaction.assert_called_once_with( + request=BeginTransactionRequest( + session=self.SESSION_NAME, options=expected_options + ), + metadata=[ + 
("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_run_in_transaction_w_read_lock_mode_at_request(self): + database = self._make_database() + api = database.spanner_api = build_spanner_api() + session = self._make_one(database) + session._session_id = self.SESSION_ID + + def unit_of_work(txn, *args, **kw): + txn.insert("test", [], []) + return 42 + + return_value = session.run_in_transaction( + unit_of_work, "abc", read_lock_mode="OPTIMISTIC" + ) + + self.assertEqual(return_value, 42) + + expected_options = TransactionOptions( + read_write=TransactionOptions.ReadWrite( + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.OPTIMISTIC, + ), + ) + api.begin_transaction.assert_called_once_with( + request=BeginTransactionRequest( + session=self.SESSION_NAME, options=expected_options + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_run_in_transaction_w_read_lock_mode_at_client(self): + database = self._make_database( + default_transaction_options=DefaultTransactionOptions( + read_lock_mode="OPTIMISTIC" + ) + ) + api = database.spanner_api = build_spanner_api() + session = self._make_one(database) + session._session_id = self.SESSION_ID + + def unit_of_work(txn, *args, **kw): + txn.insert("test", [], []) + return 42 + + return_value = session.run_in_transaction(unit_of_work, "abc") + + self.assertEqual(return_value, 42) + + expected_options = TransactionOptions( + read_write=TransactionOptions.ReadWrite( + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.OPTIMISTIC, + ), + ) + api.begin_transaction.assert_called_once_with( + request=BeginTransactionRequest( + session=self.SESSION_NAME, options=expected_options + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_run_in_transaction_w_read_lock_mode_at_request_overrides_client(self): + database = self._make_database( + default_transaction_options=DefaultTransactionOptions( + read_lock_mode="PESSIMISTIC" + ) + ) + api = database.spanner_api = build_spanner_api() + session = self._make_one(database) + session._session_id = self.SESSION_ID + + def unit_of_work(txn, *args, **kw): + txn.insert("test", [], []) + return 42 + + return_value = session.run_in_transaction( + unit_of_work, + "abc", + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.OPTIMISTIC, + ) + + self.assertEqual(return_value, 42) + expected_options = TransactionOptions( + read_write=TransactionOptions.ReadWrite( + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.OPTIMISTIC, + ), + ) + api.begin_transaction.assert_called_once_with( + request=BeginTransactionRequest( + session=self.SESSION_NAME, options=expected_options + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_run_in_transaction_w_isolation_level_and_read_lock_mode_at_request(self): + database = 
self._make_database() + api = database.spanner_api = build_spanner_api() + session = self._make_one(database) + session._session_id = self.SESSION_ID + + def unit_of_work(txn, *args, **kw): + txn.insert("test", [], []) + return 42 + + return_value = session.run_in_transaction( + unit_of_work, + "abc", + read_lock_mode="PESSIMISTIC", + isolation_level="REPEATABLE_READ", + ) + + self.assertEqual(return_value, 42) + + expected_options = TransactionOptions( + read_write=TransactionOptions.ReadWrite( + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC, + ), + isolation_level=TransactionOptions.IsolationLevel.REPEATABLE_READ, + ) + api.begin_transaction.assert_called_once_with( + request=BeginTransactionRequest( + session=self.SESSION_NAME, options=expected_options + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_run_in_transaction_w_isolation_level_and_read_lock_mode_at_client(self): + database = self._make_database( + default_transaction_options=DefaultTransactionOptions( + read_lock_mode="PESSIMISTIC", + isolation_level="REPEATABLE_READ", + ) + ) + api = database.spanner_api = build_spanner_api() + session = self._make_one(database) + session._session_id = self.SESSION_ID + + def unit_of_work(txn, *args, **kw): + txn.insert("test", [], []) + return 42 + + return_value = session.run_in_transaction(unit_of_work, "abc") + + self.assertEqual(return_value, 42) + + expected_options = TransactionOptions( + read_write=TransactionOptions.ReadWrite( + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC, + ), + isolation_level=TransactionOptions.IsolationLevel.REPEATABLE_READ, + ) + api.begin_transaction.assert_called_once_with( + request=BeginTransactionRequest( + session=self.SESSION_NAME, options=expected_options + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_run_in_transaction_w_isolation_level_and_read_lock_mode_at_request_overrides_client( + self, + ): + database = self._make_database( + default_transaction_options=DefaultTransactionOptions( + read_lock_mode="PESSIMISTIC", + isolation_level="REPEATABLE_READ", + ) + ) + api = database.spanner_api = build_spanner_api() + session = self._make_one(database) + session._session_id = self.SESSION_ID + + def unit_of_work(txn, *args, **kw): + txn.insert("test", [], []) + return 42 + + return_value = session.run_in_transaction( + unit_of_work, + "abc", + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.OPTIMISTIC, + isolation_level=TransactionOptions.IsolationLevel.SERIALIZABLE, + ) + + self.assertEqual(return_value, 42) + + expected_options = TransactionOptions( + read_write=TransactionOptions.ReadWrite( + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.OPTIMISTIC, + ), + isolation_level=TransactionOptions.IsolationLevel.SERIALIZABLE, + ) + api.begin_transaction.assert_called_once_with( + request=BeginTransactionRequest( + session=self.SESSION_NAME, options=expected_options + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + 
f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_delay_helper_w_no_delay(self): metadata_mock = mock.Mock() metadata_mock.trailing_metadata.return_value = {} @@ -1102,11 +2559,11 @@ def _time_func(): # check if current time > deadline with mock.patch("time.time", _time_func): with self.assertRaises(Exception): - _delay_until_retry(exc_mock, 2, 1) + _delay_until_retry(exc_mock, 2, 1, default_retry_delay=0) with mock.patch("time.time", _time_func): with mock.patch( - "google.cloud.spanner_v1.session._get_retry_delay" + "google.cloud.spanner_v1._helpers._get_retry_delay" ) as get_retry_delay_mock: with mock.patch("time.sleep") as sleep_mock: get_retry_delay_mock.return_value = None diff --git a/tests/unit/test_snapshot.py b/tests/unit/test_snapshot.py index 40ba1c6c5a..5e60d71bd6 100644 --- a/tests/unit/test_snapshot.py +++ b/tests/unit/test_snapshot.py @@ -11,12 +11,47 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. +from datetime import timedelta, datetime +from threading import Lock +from typing import Mapping - -import unittest -import google.api_core.gapic_v1.method +from google.api_core import gapic_v1 import mock - +from google.api_core.exceptions import InternalServerError, Aborted + +from google.cloud.spanner_admin_database_v1 import Database +from google.cloud.spanner_v1 import ( + RequestOptions, + DirectedReadOptions, + BeginTransactionRequest, + TransactionOptions, + TransactionSelector, +) +from google.cloud.spanner_v1.snapshot import _SnapshotBase +from tests._builders import ( + build_precommit_token_pb, + build_spanner_api, + build_session, + build_transaction_pb, + build_snapshot, +) +from tests._helpers import ( + OpenTelemetryBase, + LIB_VERSION, + StatusCode, + HAS_OPENTELEMETRY_INSTALLED, + enrich_with_otel_scope, +) +from google.cloud.spanner_v1._helpers import ( + _metadata_with_request_id, + AtomicCounter, +) +from google.cloud.spanner_v1.param_types import INT64 +from google.cloud.spanner_v1.request_id_header import ( + REQ_RAND_PROCESS_ID, + build_request_id, +) +from google.api_core.retry import Retry TABLE_NAME = "citizens" COLUMNS = ["email", "first_name", "last_name", "age"] @@ -25,7 +60,7 @@ SQL_QUERY_WITH_PARAM = """ SELECT first_name, last_name, email FROM citizens WHERE age <= @max_age""" PARAMS = {"max_age": 30} -PARAM_TYPES = {"max_age": "INT64"} +PARAM_TYPES = {"max_age": INT64} SQL_QUERY_WITH_BYTES_PARAM = """\ SELECT image_name FROM images WHERE @bytes IN image_data""" PARAMS_WITH_BYTES = {"bytes": b"FACEDACE"} @@ -33,34 +68,144 @@ TXN_ID = b"DEAFBEAD" SECONDS = 3 MICROS = 123456 - - -class Test_restart_on_unavailable(unittest.TestCase): - def _call_fut(self, restart): +DURATION = timedelta(seconds=SECONDS, microseconds=MICROS) +TIMESTAMP = datetime.now() + +BASE_ATTRIBUTES = { + "db.type": "spanner", + "db.url": "spanner.googleapis.com", + "db.instance": "testing", + "net.host.name": "spanner.googleapis.com", + "gcp.client.service": "spanner", + "gcp.client.version": LIB_VERSION, + "gcp.client.repo": "googleapis/python-spanner", +} +enrich_with_otel_scope(BASE_ATTRIBUTES) + +DIRECTED_READ_OPTIONS = { + "include_replicas": { + "replica_selections": [ + { + "location": "us-west1", + "type_": DirectedReadOptions.ReplicaSelection.Type.READ_ONLY, + }, + ], + "auto_failover_disabled": True, + }, +} +DIRECTED_READ_OPTIONS_FOR_CLIENT = { + "include_replicas": { + 
"replica_selections": [ + { + "location": "us-east1", + }, + ], + }, +} + +PRECOMMIT_TOKEN_1 = build_precommit_token_pb(precommit_token=b"1", seq_num=1) +PRECOMMIT_TOKEN_2 = build_precommit_token_pb(precommit_token=b"2", seq_num=2) + +# Common errors for testing. +INTERNAL_SERVER_ERROR_UNEXPECTED_EOS = InternalServerError( + "Received unexpected EOS on DATA frame from server" +) + + +class _Derived(_SnapshotBase): + """A minimally-implemented _SnapshotBase-derived class for testing""" + + # Use a simplified implementation of _build_transaction_options_pb + # that always returns the same transaction options. + TRANSACTION_OPTIONS = TransactionOptions() + + def _build_transaction_options_pb(self) -> TransactionOptions: + return self.TRANSACTION_OPTIONS + + +class Test_restart_on_unavailable(OpenTelemetryBase): + def build_spanner_api(self): + from google.cloud.spanner_v1 import SpannerClient + + return mock.create_autospec(SpannerClient, instance=True) + + def _call_fut( + self, + derived, + restart, + request, + span_name=None, + session=None, + attributes=None, + metadata=None, + ): from google.cloud.spanner_v1.snapshot import _restart_on_unavailable - return _restart_on_unavailable(restart) + return _restart_on_unavailable( + restart, + request, + metadata, + span_name, + session, + attributes, + transaction=derived, + request_id_manager=None if not session else session._database, + ) - def _make_item(self, value, resume_token=b""): + def _make_item(self, value, resume_token=b"", metadata=None): return mock.Mock( - value=value, resume_token=resume_token, spec=["value", "resume_token"] + value=value, + resume_token=resume_token, + metadata=metadata, + precommit_token=None, + _pb=None, + spec=["value", "resume_token", "metadata", "precommit_token"], ) def test_iteration_w_empty_raw(self): raw = _MockIterator() + request = mock.Mock(test="test", spec=["test", "resume_token"]) restart = mock.Mock(spec=[], return_value=raw) - resumable = self._call_fut(restart) + database = _Database() + database.spanner_api = build_spanner_api() + session = _Session(database) + derived = _build_snapshot_derived(session) + resumable = self._call_fut(derived, restart, request, session=session) self.assertEqual(list(resumable), []) + restart.assert_called_once_with( + request=request, + metadata=[ + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ) + ], + ) + self.assertNoSpans() def test_iteration_w_non_empty_raw(self): ITEMS = (self._make_item(0), self._make_item(1)) raw = _MockIterator(*ITEMS) + request = mock.Mock(test="test", spec=["test", "resume_token"]) restart = mock.Mock(spec=[], return_value=raw) - resumable = self._call_fut(restart) + database = _Database() + database.spanner_api = build_spanner_api() + session = _Session(database) + derived = _build_snapshot_derived(session) + resumable = self._call_fut(derived, restart, request, session=session) self.assertEqual(list(resumable), list(ITEMS)) - restart.assert_called_once_with() + restart.assert_called_once_with( + request=request, + metadata=[ + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ) + ], + ) + self.assertNoSpans() - def test_iteration_w_raw_w_resume_tken(self): + def test_iteration_w_raw_w_resume_token(self): ITEMS = ( self._make_item(0), self._make_item(1, resume_token=RESUME_TOKEN), @@ -68,258 +213,913 @@ def test_iteration_w_raw_w_resume_tken(self): self._make_item(3), ) raw = 
_MockIterator(*ITEMS) + request = mock.Mock(test="test", spec=["test", "resume_token"]) restart = mock.Mock(spec=[], return_value=raw) - resumable = self._call_fut(restart) + database = _Database() + database.spanner_api = build_spanner_api() + session = _Session(database) + derived = _build_snapshot_derived(session) + resumable = self._call_fut(derived, restart, request, session=session) self.assertEqual(list(resumable), list(ITEMS)) - restart.assert_called_once_with() + restart.assert_called_once_with( + request=request, + metadata=[ + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ) + ], + ) + self.assertNoSpans() def test_iteration_w_raw_raising_unavailable_no_token(self): + from google.api_core.exceptions import ServiceUnavailable + ITEMS = ( self._make_item(0), self._make_item(1, resume_token=RESUME_TOKEN), self._make_item(2), ) - before = _MockIterator(fail_after=True) + before = _MockIterator(fail_after=True, error=ServiceUnavailable("testing")) after = _MockIterator(*ITEMS) + request = mock.Mock(test="test", spec=["test", "resume_token"]) restart = mock.Mock(spec=[], side_effect=[before, after]) - resumable = self._call_fut(restart) + database = _Database() + database.spanner_api = build_spanner_api() + session = _Session(database) + derived = _build_snapshot_derived(session) + resumable = self._call_fut(derived, restart, request, session=session) self.assertEqual(list(resumable), list(ITEMS)) - self.assertEqual(restart.mock_calls, [mock.call(), mock.call(resume_token=b"")]) + self.assertEqual(len(restart.mock_calls), 2) + self.assertEqual(request.resume_token, b"") + self.assertNoSpans() + + def test_iteration_w_raw_raising_retryable_internal_error_no_token(self): + ITEMS = ( + self._make_item(0), + self._make_item(1, resume_token=RESUME_TOKEN), + self._make_item(2), + ) + before = _MockIterator( + fail_after=True, + error=INTERNAL_SERVER_ERROR_UNEXPECTED_EOS, + ) + after = _MockIterator(*ITEMS) + request = mock.Mock(test="test", spec=["test", "resume_token"]) + restart = mock.Mock(spec=[], side_effect=[before, after]) + database = _Database() + database.spanner_api = build_spanner_api() + session = _Session(database) + derived = _build_snapshot_derived(session) + resumable = self._call_fut(derived, restart, request, session=session) + self.assertEqual(list(resumable), list(ITEMS)) + self.assertEqual(len(restart.mock_calls), 2) + self.assertEqual(request.resume_token, b"") + self.assertNoSpans() + + def test_iteration_w_raw_raising_non_retryable_internal_error_no_token(self): + from google.api_core.exceptions import InternalServerError + + ITEMS = ( + self._make_item(0), + self._make_item(1, resume_token=RESUME_TOKEN), + self._make_item(2), + ) + before = _MockIterator(fail_after=True, error=InternalServerError("testing")) + after = _MockIterator(*ITEMS) + request = mock.Mock(spec=["resume_token"]) + restart = mock.Mock(spec=[], side_effect=[before, after]) + database = _Database() + database.spanner_api = build_spanner_api() + session = _Session(database) + derived = _build_snapshot_derived(session) + resumable = self._call_fut(derived, restart, request, session=session) + with self.assertRaises(InternalServerError): + list(resumable) + restart.assert_called_once_with( + request=request, + metadata=[ + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ) + ], + ) + self.assertNoSpans() def test_iteration_w_raw_raising_unavailable(self): + from 
google.api_core.exceptions import ServiceUnavailable + FIRST = (self._make_item(0), self._make_item(1, resume_token=RESUME_TOKEN)) SECOND = (self._make_item(2),) # discarded after 503 LAST = (self._make_item(3),) - before = _MockIterator(*(FIRST + SECOND), fail_after=True) + before = _MockIterator( + *(FIRST + SECOND), fail_after=True, error=ServiceUnavailable("testing") + ) after = _MockIterator(*LAST) + request = mock.Mock(test="test", spec=["test", "resume_token"]) restart = mock.Mock(spec=[], side_effect=[before, after]) - resumable = self._call_fut(restart) + database = _Database() + database.spanner_api = build_spanner_api() + session = _Session(database) + derived = _build_snapshot_derived(session) + resumable = self._call_fut(derived, restart, request, session=session) self.assertEqual(list(resumable), list(FIRST + LAST)) - self.assertEqual( - restart.mock_calls, [mock.call(), mock.call(resume_token=RESUME_TOKEN)] + self.assertEqual(len(restart.mock_calls), 2) + self.assertEqual(request.resume_token, RESUME_TOKEN) + self.assertNoSpans() + + def test_iteration_w_raw_raising_retryable_internal_error(self): + FIRST = (self._make_item(0), self._make_item(1, resume_token=RESUME_TOKEN)) + SECOND = (self._make_item(2),) # discarded after 503 + LAST = (self._make_item(3),) + before = _MockIterator( + *(FIRST + SECOND), + fail_after=True, + error=INTERNAL_SERVER_ERROR_UNEXPECTED_EOS, + ) + after = _MockIterator(*LAST) + request = mock.Mock(test="test", spec=["test", "resume_token"]) + restart = mock.Mock(spec=[], side_effect=[before, after]) + database = _Database() + database.spanner_api = build_spanner_api() + session = _Session(database) + derived = _build_snapshot_derived(session) + resumable = self._call_fut(derived, restart, request, session=session) + self.assertEqual(list(resumable), list(FIRST + LAST)) + self.assertEqual(len(restart.mock_calls), 2) + self.assertEqual(request.resume_token, RESUME_TOKEN) + self.assertNoSpans() + + def test_iteration_w_raw_raising_non_retryable_internal_error(self): + from google.api_core.exceptions import InternalServerError + + FIRST = (self._make_item(0), self._make_item(1, resume_token=RESUME_TOKEN)) + SECOND = (self._make_item(2),) # discarded after 503 + LAST = (self._make_item(3),) + before = _MockIterator( + *(FIRST + SECOND), fail_after=True, error=InternalServerError("testing") + ) + after = _MockIterator(*LAST) + request = mock.Mock(test="test", spec=["test", "resume_token"]) + restart = mock.Mock(spec=[], side_effect=[before, after]) + database = _Database() + database.spanner_api = build_spanner_api() + session = _Session(database) + derived = _build_snapshot_derived(session) + resumable = self._call_fut(derived, restart, request, session=session) + with self.assertRaises(InternalServerError): + list(resumable) + restart.assert_called_once_with( + request=request, + metadata=[ + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ) + ], ) + self.assertNoSpans() def test_iteration_w_raw_raising_unavailable_after_token(self): + from google.api_core.exceptions import ServiceUnavailable + FIRST = (self._make_item(0), self._make_item(1, resume_token=RESUME_TOKEN)) SECOND = (self._make_item(2), self._make_item(3)) - before = _MockIterator(*FIRST, fail_after=True) + before = _MockIterator( + *FIRST, fail_after=True, error=ServiceUnavailable("testing") + ) after = _MockIterator(*SECOND) + request = mock.Mock(test="test", spec=["test", "resume_token"]) restart = mock.Mock(spec=[], 
side_effect=[before, after]) - resumable = self._call_fut(restart) + database = _Database() + database.spanner_api = build_spanner_api() + session = _Session(database) + derived = _build_snapshot_derived(session) + resumable = self._call_fut(derived, restart, request, session=session) self.assertEqual(list(resumable), list(FIRST + SECOND)) - self.assertEqual( - restart.mock_calls, [mock.call(), mock.call(resume_token=RESUME_TOKEN)] + self.assertEqual(len(restart.mock_calls), 2) + self.assertEqual(request.resume_token, RESUME_TOKEN) + self.assertNoSpans() + + def test_iteration_w_raw_w_multiuse(self): + from google.cloud.spanner_v1 import ( + ReadRequest, ) + FIRST = ( + self._make_item(0), + self._make_item(1), + ) + before = _MockIterator(*FIRST) + request = ReadRequest(transaction=None) + restart = mock.Mock(spec=[], return_value=before) + database = _Database() + database.spanner_api = build_spanner_api() + session = _Session(database) + derived = _build_snapshot_derived(session) + derived._multi_use = True + resumable = self._call_fut(derived, restart, request, session=session) + self.assertEqual(list(resumable), list(FIRST)) + self.assertEqual(len(restart.mock_calls), 1) + begin_count = sum( + [1 for args in restart.call_args_list if "begin" in args.kwargs.__str__()] + ) + self.assertEqual(begin_count, 1) + self.assertNoSpans() -class Test_SnapshotBase(unittest.TestCase): + def test_iteration_w_raw_raising_unavailable_w_multiuse(self): + from google.api_core.exceptions import ServiceUnavailable + from google.cloud.spanner_v1 import ( + ReadRequest, + ) - PROJECT_ID = "project-id" - INSTANCE_ID = "instance-id" - INSTANCE_NAME = "projects/" + PROJECT_ID + "/instances/" + INSTANCE_ID - DATABASE_ID = "database-id" - DATABASE_NAME = INSTANCE_NAME + "/databases/" + DATABASE_ID - SESSION_ID = "session-id" - SESSION_NAME = DATABASE_NAME + "/sessions/" + SESSION_ID + FIRST = ( + self._make_item(0), + self._make_item(1), + ) + SECOND = (self._make_item(2), self._make_item(3)) + before = _MockIterator( + *FIRST, fail_after=True, error=ServiceUnavailable("testing") + ) + after = _MockIterator(*SECOND) + request = ReadRequest(transaction=None) + restart = mock.Mock(spec=[], side_effect=[before, after]) + database = _Database() + database.spanner_api = build_spanner_api() + session = _Session(database) + derived = _build_snapshot_derived(session) + derived._multi_use = True + resumable = self._call_fut(derived, restart, request, session=session) + self.assertEqual(list(resumable), list(SECOND)) + self.assertEqual(len(restart.mock_calls), 2) + begin_count = sum( + [1 for args in restart.call_args_list if "begin" in args.kwargs.__str__()] + ) - def _getTargetClass(self): - from google.cloud.spanner_v1.snapshot import _SnapshotBase + # Since the transaction id was not set before the Unavailable error, the statement will be retried with inline begin. 
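+ # Both attempts therefore carry a begin selector in their requests, which is why the expected begin count is two.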
+ self.assertEqual(begin_count, 2) + self.assertNoSpans() - return _SnapshotBase + def test_iteration_w_raw_raising_unavailable_after_token_w_multiuse(self): + from google.api_core.exceptions import ServiceUnavailable - def _make_one(self, session): - return self._getTargetClass()(session) + from google.cloud.spanner_v1 import ResultSetMetadata + from google.cloud.spanner_v1 import ( + Transaction as TransactionPB, + ReadRequest, + ) - def _makeDerived(self, session): - class _Derived(self._getTargetClass()): + transaction_pb = TransactionPB(id=TXN_ID) + metadata_pb = ResultSetMetadata(transaction=transaction_pb) + FIRST = ( + self._make_item(0), + self._make_item(1, resume_token=RESUME_TOKEN, metadata=metadata_pb), + ) + SECOND = (self._make_item(2), self._make_item(3)) + before = _MockIterator( + *FIRST, fail_after=True, error=ServiceUnavailable("testing") + ) + after = _MockIterator(*SECOND) + request = ReadRequest(transaction=None) + restart = mock.Mock(spec=[], side_effect=[before, after]) + database = _Database() + database.spanner_api = build_spanner_api() + session = _Session(database) + derived = _build_snapshot_derived(session) + derived._multi_use = True + + resumable = self._call_fut(derived, restart, request, session=session) - _transaction_id = None - _multi_use = False + self.assertEqual(list(resumable), list(FIRST + SECOND)) + self.assertEqual(len(restart.mock_calls), 2) + self.assertEqual(request.resume_token, RESUME_TOKEN) + transaction_id_selector_count = sum( + [ + 1 + for args in restart.call_args_list + if 'id: "DEAFBEAD"' in args.kwargs.__str__() + ] + ) - def _make_txn_selector(self): - from google.cloud.spanner_v1.proto.transaction_pb2 import ( - TransactionOptions, - TransactionSelector, + # Statement will be retried with Transaction id. 
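+ # The transaction id arrived in the result-set metadata of the first stream, so both recorded requests select the transaction by id.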
+ self.assertEqual(transaction_id_selector_count, 2) + self.assertNoSpans() + + def test_iteration_w_raw_raising_retryable_internal_error_after_token(self): + FIRST = (self._make_item(0), self._make_item(1, resume_token=RESUME_TOKEN)) + SECOND = (self._make_item(2), self._make_item(3)) + before = _MockIterator( + *FIRST, + fail_after=True, + error=INTERNAL_SERVER_ERROR_UNEXPECTED_EOS, + ) + after = _MockIterator(*SECOND) + request = mock.Mock(test="test", spec=["test", "resume_token"]) + restart = mock.Mock(spec=[], side_effect=[before, after]) + database = _Database() + database.spanner_api = build_spanner_api() + session = _Session(database) + derived = _build_snapshot_derived(session) + resumable = self._call_fut(derived, restart, request, session=session) + self.assertEqual(list(resumable), list(FIRST + SECOND)) + self.assertEqual(len(restart.mock_calls), 2) + self.assertEqual(request.resume_token, RESUME_TOKEN) + self.assertNoSpans() + + def test_iteration_w_raw_raising_non_retryable_internal_error_after_token(self): + from google.api_core.exceptions import InternalServerError + + FIRST = (self._make_item(0), self._make_item(1, resume_token=RESUME_TOKEN)) + SECOND = (self._make_item(2), self._make_item(3)) + before = _MockIterator( + *FIRST, fail_after=True, error=InternalServerError("testing") + ) + after = _MockIterator(*SECOND) + request = mock.Mock(test="test", spec=["test", "resume_token"]) + restart = mock.Mock(spec=[], side_effect=[before, after]) + database = _Database() + database.spanner_api = build_spanner_api() + session = _Session(database) + derived = _build_snapshot_derived(session) + resumable = self._call_fut(derived, restart, request, session=session) + with self.assertRaises(InternalServerError): + list(resumable) + restart.assert_called_once_with( + request=request, + metadata=[ + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", ) + ], + ) + self.assertNoSpans() + + def test_iteration_w_span_creation(self): + name = "TestSpan" + extra_atts = {"test_att": 1} + raw = _MockIterator() + request = mock.Mock(test="test", spec=["test", "resume_token"]) + restart = mock.Mock(spec=[], return_value=raw) + database = _Database() + database.spanner_api = build_spanner_api() + session = _Session(database) + derived = _build_snapshot_derived(session) + resumable = self._call_fut( + derived, restart, request, name, _Session(_Database()), extra_atts + ) + self.assertEqual(list(resumable), []) + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + self.assertSpanAttributes( + name, + attributes=dict( + BASE_ATTRIBUTES, test_att=1, x_goog_spanner_request_id=req_id + ), + ) + + def test_iteration_w_multiple_span_creation(self): + from google.api_core.exceptions import ServiceUnavailable - if self._transaction_id: - return TransactionSelector(id=self._transaction_id) - options = TransactionOptions( - read_only=TransactionOptions.ReadOnly(strong=True) + if HAS_OPENTELEMETRY_INSTALLED: + FIRST = (self._make_item(0), self._make_item(1, resume_token=RESUME_TOKEN)) + SECOND = (self._make_item(2),) # discarded after 503 + LAST = (self._make_item(3),) + before = _MockIterator( + *(FIRST + SECOND), fail_after=True, error=ServiceUnavailable("testing") + ) + after = _MockIterator(*LAST) + request = mock.Mock(test="test", spec=["test", "resume_token"]) + restart = mock.Mock(spec=[], side_effect=[before, after]) + name = "TestSpan" + database = _Database() + database.spanner_api = 
build_spanner_api() + session = _Session(database) + derived = _build_snapshot_derived(session) + resumable = self._call_fut( + derived, restart, request, name, _Session(_Database()) + ) + self.assertEqual(list(resumable), list(FIRST + LAST)) + self.assertEqual(len(restart.mock_calls), 2) + self.assertEqual(request.resume_token, RESUME_TOKEN) + + span_list = self.ot_exporter.get_finished_spans() + self.assertEqual(len(span_list), 2) + for i, span in enumerate(span_list): + self.assertEqual(span.name, name) + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.{i + 1}" + self.assertEqual( + dict(span.attributes), + dict( + enrich_with_otel_scope(BASE_ATTRIBUTES), + x_goog_spanner_request_id=req_id, + ), ) - if self._multi_use: - return TransactionSelector(begin=options) - return TransactionSelector(single_use=options) - return _Derived(session) - def _make_spanner_api(self): - import google.cloud.spanner_v1.gapic.spanner_client +class Test_SnapshotBase(OpenTelemetryBase): + def test_ctor(self): + session = build_session() + derived = _build_snapshot_derived(session=session) + + # Attributes from _SessionWrapper. + self.assertIs(derived._session, session) + + # Attributes from _SnapshotBase. + self.assertTrue(derived._read_only) + self.assertFalse(derived._multi_use) + self.assertEqual(derived._execute_sql_request_count, 0) + self.assertEqual(derived._read_request_count, 0) + self.assertIsNone(derived._transaction_id) + self.assertIsNone(derived._precommit_token) + self.assertIsInstance(derived._lock, type(Lock())) + + self.assertNoSpans() + + def test__build_transaction_selector_pb_single_use(self): + derived = _build_snapshot_derived(multi_use=False) + + actual_selector = derived._build_transaction_selector_pb() + + expected_selector = TransactionSelector(single_use=_Derived.TRANSACTION_OPTIONS) + self.assertEqual(actual_selector, expected_selector) + + def test__build_transaction_selector_pb_multi_use(self): + derived = _build_snapshot_derived(multi_use=True) + + # Select new transaction. + expected_options = _Derived.TRANSACTION_OPTIONS + expected_selector = TransactionSelector(begin=expected_options) + self.assertEqual(expected_selector, derived._build_transaction_selector_pb()) + + # Select existing transaction. 
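+ # Once begin() has run, the selector must reference the existing transaction by id rather than beginning a new one.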
+ transaction_id = b"transaction-id" + begin_transaction = derived._session._database.spanner_api.begin_transaction + begin_transaction.return_value = build_transaction_pb(id=transaction_id) + + derived.begin() + + expected_selector = TransactionSelector(id=transaction_id) + self.assertEqual(expected_selector, derived._build_transaction_selector_pb()) + + def test_begin_error_not_multi_use(self): + derived = _build_snapshot_derived(multi_use=False) + + with self.assertRaises(ValueError): + derived.begin() + + self.assertNoSpans() - return mock.create_autospec( - google.cloud.spanner_v1.gapic.spanner_client.SpannerClient, instance=True + def test_begin_error_already_begun(self): + derived = _build_snapshot_derived(multi_use=True) + derived.begin() + + self.reset() + with self.assertRaises(ValueError): + derived.begin() + + self.assertNoSpans() + + def test_begin_error_other(self): + derived = _build_snapshot_derived(multi_use=True) + + database = derived._session._database + begin_transaction = database.spanner_api.begin_transaction + begin_transaction.side_effect = RuntimeError() + + with self.assertRaises(RuntimeError): + derived.begin() + + if not HAS_OPENTELEMETRY_INSTALLED: + return + + self.assertSpanAttributes( + name="CloudSpanner._Derived.begin", + status=StatusCode.ERROR, + attributes=_build_span_attributes(database), ) - def test_ctor(self): - session = _Session() - base = self._make_one(session) - self.assertIs(base._session, session) - self.assertEqual(base._execute_sql_count, 0) + def test_begin_read_write(self): + derived = _build_snapshot_derived(multi_use=True, read_only=False) + + begin_transaction = derived._session._database.spanner_api.begin_transaction + begin_transaction.return_value = build_transaction_pb() + + self._execute_begin(derived) + + def test_begin_read_only(self): + derived = _build_snapshot_derived(multi_use=True, read_only=True) + + begin_transaction = derived._session._database.spanner_api.begin_transaction + begin_transaction.return_value = build_transaction_pb() + + self._execute_begin(derived) + + def test_begin_precommit_token(self): + derived = _build_snapshot_derived(multi_use=True) + + begin_transaction = derived._session._database.spanner_api.begin_transaction + begin_transaction.return_value = build_transaction_pb( + precommit_token=PRECOMMIT_TOKEN_1 + ) + + self._execute_begin(derived) + + def test_begin_retry_for_internal_server_error(self): + derived = _build_snapshot_derived(multi_use=True) + + begin_transaction = derived._session._database.spanner_api.begin_transaction + begin_transaction.side_effect = [ + INTERNAL_SERVER_ERROR_UNEXPECTED_EOS, + build_transaction_pb(), + ] + + self._execute_begin(derived, attempts=2) + + expected_statuses = [ + ( + "Transaction Begin Attempt Failed. Retrying", + {"attempt": 1, "sleep_seconds": 4}, + ) + ] + actual_statuses = self.finished_spans_events_statuses() + self.assertEqual(expected_statuses, actual_statuses) + + def test_begin_retry_for_aborted(self): + derived = _build_snapshot_derived(multi_use=True) + + begin_transaction = derived._session._database.spanner_api.begin_transaction + begin_transaction.side_effect = [ + Aborted("test"), + build_transaction_pb(), + ] + + self._execute_begin(derived, attempts=2) + + expected_statuses = [ + ( + "Transaction Begin Attempt Failed. 
Retrying", + {"attempt": 1, "sleep_seconds": 4}, + ) + ] + actual_statuses = self.finished_spans_events_statuses() + self.assertEqual(expected_statuses, actual_statuses) + + def _execute_begin(self, derived: _Derived, attempts: int = 1): + """Helper for testing _SnapshotBase.begin(). Executes method and verifies + transaction state, begin transaction API call, and span attributes and events. + """ + + session = derived._session + database = session._database - def test__make_txn_selector_virtual(self): - session = _Session() - base = self._make_one(session) - with self.assertRaises(NotImplementedError): - base._make_txn_selector() + transaction_id = derived.begin() + + # Verify transaction state. + begin_transaction = database.spanner_api.begin_transaction + expected_transaction_id = begin_transaction.return_value.id or None + expected_precommit_token = ( + begin_transaction.return_value.precommit_token or None + ) + + self.assertEqual(transaction_id, expected_transaction_id) + self.assertEqual(derived._transaction_id, expected_transaction_id) + self.assertEqual(derived._precommit_token, expected_precommit_token) + + # Verify begin transaction API call. + self.assertEqual(begin_transaction.call_count, attempts) + + expected_metadata = [ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-request-id", _build_request_id(database, attempts)), + ] + if not derived._read_only and database._route_to_leader_enabled: + expected_metadata.insert(-1, ("x-goog-spanner-route-to-leader", "true")) + + database.spanner_api.begin_transaction.assert_called_with( + request=BeginTransactionRequest( + session=session.name, options=_Derived.TRANSACTION_OPTIONS + ), + metadata=expected_metadata, + ) + + if not HAS_OPENTELEMETRY_INSTALLED: + return + + # Verify span attributes. + expected_span_name = "CloudSpanner._Derived.begin" + self.assertSpanAttributes( + name=expected_span_name, + attributes=_build_span_attributes(database, attempt=attempts), + ) def test_read_other_error(self): from google.cloud.spanner_v1.keyset import KeySet keyset = KeySet(all_=True) database = _Database() - database.spanner_api = self._make_spanner_api() + database.spanner_api = build_spanner_api() database.spanner_api.streaming_read.side_effect = RuntimeError() session = _Session(database) - derived = self._makeDerived(session) + derived = _build_snapshot_derived(session) with self.assertRaises(RuntimeError): list(derived.read(TABLE_NAME, COLUMNS, keyset)) - def _read_helper(self, multi_use, first=True, count=0, partition=None): + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + self.assertSpanAttributes( + "CloudSpanner._Derived.read", + status=StatusCode.ERROR, + attributes=dict( + BASE_ATTRIBUTES, + table_id=TABLE_NAME, + columns=tuple(COLUMNS), + x_goog_spanner_request_id=req_id, + ), + ) + + def _execute_read( + self, + multi_use, + first=True, + count=0, + partition=None, + timeout=gapic_v1.method.DEFAULT, + retry=gapic_v1.method.DEFAULT, + request_options=None, + directed_read_options=None, + directed_read_options_at_client_level=None, + use_multiplexed=False, + ): + """Helper for testing _SnapshotBase.read(). Executes method and verifies + transaction state, begin transaction API call, and span attributes and events. 
+ """ + from google.protobuf.struct_pb2 import Struct - from google.cloud.spanner_v1.proto.result_set_pb2 import ( + from google.cloud.spanner_v1 import ( PartialResultSet, ResultSetMetadata, ResultSetStats, ) - from google.cloud.spanner_v1.proto.transaction_pb2 import ( - TransactionSelector, - TransactionOptions, - ) - from google.cloud.spanner_v1.proto.type_pb2 import Type, StructType - from google.cloud.spanner_v1.proto.type_pb2 import STRING, INT64 + from google.cloud.spanner_v1 import ReadRequest + from google.cloud.spanner_v1 import Type, StructType + from google.cloud.spanner_v1 import TypeCode from google.cloud.spanner_v1.keyset import KeySet from google.cloud.spanner_v1._helpers import _make_value_pb - VALUES = [[u"bharney", 31], [u"phred", 32]] + VALUES = [["bharney", 31], ["phred", 32]] VALUE_PBS = [[_make_value_pb(item) for item in row] for row in VALUES] struct_type_pb = StructType( fields=[ - StructType.Field(name="name", type=Type(code=STRING)), - StructType.Field(name="age", type=Type(code=INT64)), + StructType.Field(name="name", type_=Type(code=TypeCode.STRING)), + StructType.Field(name="age", type_=Type(code=TypeCode.INT64)), ] ) - metadata_pb = ResultSetMetadata(row_type=struct_type_pb) + + # If the transaction had not already begun, the first result + # set will include metadata with information about the transaction. + transaction_pb = build_transaction_pb(id=TXN_ID) if first else None + metadata_pb = ResultSetMetadata( + row_type=struct_type_pb, + transaction=transaction_pb, + ) + stats_pb = ResultSetStats( query_stats=Struct(fields={"rows_returned": _make_value_pb(2)}) ) - result_sets = [ - PartialResultSet(values=VALUE_PBS[0], metadata=metadata_pb), - PartialResultSet(values=VALUE_PBS[1], stats=stats_pb), - ] + + # Precommit tokens will be included in the result sets if the transaction is on + # a multiplexed session. Precommit tokens may be returned out of order. 
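The out-of-order tokens constructed next pin down a single rule stated in the comment above: on a multiplexed session the transaction must end up holding whichever precommit token carries the highest sequence number, regardless of arrival order. A minimal sketch of that rule, assuming tokens expose a ``seq_num`` field; the function is illustrative, not the library's implementation:

.. code-block:: python

    def track_precommit_token(current, incoming):
        # Keep whichever token has the higher sequence number.
        # ``incoming`` may be None when a partial result set
        # carries no precommit token at all.
        if incoming is None:
            return current
        if current is None or incoming.seq_num > current.seq_num:
            return incoming
        return current

Under this rule the tests can return PRECOMMIT_TOKEN_2 before PRECOMMIT_TOKEN_1 and still expect the transaction to settle on PRECOMMIT_TOKEN_2.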
+ partial_result_set_1_args = {"metadata": metadata_pb} + if use_multiplexed: + partial_result_set_1_args["precommit_token"] = PRECOMMIT_TOKEN_2 + partial_result_set_1 = PartialResultSet(**partial_result_set_1_args) + + partial_result_set_2_args = {"stats": stats_pb} + if use_multiplexed: + partial_result_set_2_args["precommit_token"] = PRECOMMIT_TOKEN_1 + partial_result_set_2 = PartialResultSet(**partial_result_set_2_args) + + result_sets = [partial_result_set_1, partial_result_set_2] + + for i in range(len(result_sets)): + result_sets[i].values.extend(VALUE_PBS[i]) KEYS = [["bharney@example.com"], ["phred@example.com"]] keyset = KeySet(keys=KEYS) INDEX = "email-address-index" LIMIT = 20 - database = _Database() - api = database.spanner_api = self._make_spanner_api() + database = _Database( + directed_read_options=directed_read_options_at_client_level + ) + + api = database.spanner_api = build_spanner_api() api.streaming_read.return_value = _MockIterator(*result_sets) session = _Session(database) - derived = self._makeDerived(session) + derived = _build_snapshot_derived(session) derived._multi_use = multi_use derived._read_request_count = count + if not first: derived._transaction_id = TXN_ID + if request_options is None: + request_options = RequestOptions() + elif type(request_options) is dict: + request_options = RequestOptions(request_options) + + transaction_selector_pb = derived._build_transaction_selector_pb() + if partition is not None: # 'limit' and 'partition' incompatible result_set = derived.read( - TABLE_NAME, COLUMNS, keyset, index=INDEX, partition=partition + TABLE_NAME, + COLUMNS, + keyset, + index=INDEX, + partition=partition, + retry=retry, + timeout=timeout, + request_options=request_options, + directed_read_options=directed_read_options, ) else: result_set = derived.read( - TABLE_NAME, COLUMNS, keyset, index=INDEX, limit=LIMIT + TABLE_NAME, + COLUMNS, + keyset, + index=INDEX, + limit=LIMIT, + retry=retry, + timeout=timeout, + request_options=request_options, + directed_read_options=directed_read_options, ) self.assertEqual(derived._read_request_count, count + 1) - if multi_use: - self.assertIs(result_set._source, derived) - else: - self.assertIsNone(result_set._source) - self.assertEqual(list(result_set), VALUES) self.assertEqual(result_set.metadata, metadata_pb) self.assertEqual(result_set.stats, stats_pb) - txn_options = TransactionOptions( - read_only=TransactionOptions.ReadOnly(strong=True) - ) - - if multi_use: - if first: - expected_transaction = TransactionSelector(begin=txn_options) - else: - expected_transaction = TransactionSelector(id=TXN_ID) - else: - expected_transaction = TransactionSelector(single_use=txn_options) - if partition is not None: expected_limit = 0 else: expected_limit = LIMIT - api.streaming_read.assert_called_once_with( - self.SESSION_NAME, - TABLE_NAME, - COLUMNS, - keyset._to_pb(), - transaction=expected_transaction, + # Transaction tag is ignored for read request. 
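The inline selector logic removed above is now centralized in ``_build_transaction_selector_pb()``, which these assertions exercise indirectly. A hedged restatement of the rules the old code encoded (a paraphrase, not the method's actual body): an already-begun transaction is referenced by ID, the first request of a multi-use snapshot asks the server to begin one, and everything else runs single-use.

.. code-block:: python

    from google.cloud.spanner_v1 import TransactionOptions, TransactionSelector

    def build_selector(transaction_id, multi_use, options: TransactionOptions):
        # Reference an existing transaction by its ID.
        if transaction_id is not None:
            return TransactionSelector(id=transaction_id)
        # Begin a new transaction on the first multi-use request.
        if multi_use:
            return TransactionSelector(begin=options)
        # Otherwise execute as a single-use transaction.
        return TransactionSelector(single_use=options)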
+ expected_request_options = request_options + expected_request_options.transaction_tag = None + + expected_directed_read_options = ( + directed_read_options + if directed_read_options is not None + else directed_read_options_at_client_level + ) + + expected_request = ReadRequest( + session=session.name, + table=TABLE_NAME, + columns=COLUMNS, + key_set=keyset._to_pb(), + transaction=transaction_selector_pb, index=INDEX, limit=expected_limit, partition_token=partition, - metadata=[("google-cloud-resource-prefix", database.name)], + request_options=expected_request_options, + directed_read_options=expected_directed_read_options, ) + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + api.streaming_read.assert_called_once_with( + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + req_id, + ), + ], + retry=retry, + timeout=timeout, + ) + + self.assertSpanAttributes( + "CloudSpanner._Derived.read", + attributes=dict( + BASE_ATTRIBUTES, + table_id=TABLE_NAME, + columns=tuple(COLUMNS), + x_goog_spanner_request_id=req_id, + ), + ) + + if first: + self.assertEqual(derived._transaction_id, TXN_ID) + + if use_multiplexed: + self.assertEqual(derived._precommit_token, PRECOMMIT_TOKEN_2) def test_read_wo_multi_use(self): - self._read_helper(multi_use=False) + self._execute_read(multi_use=False) + + def test_read_w_request_tag_success(self): + request_options = RequestOptions( + request_tag="tag-1", + ) + self._execute_read(multi_use=False, request_options=request_options) + + def test_read_w_transaction_tag_success(self): + request_options = RequestOptions( + transaction_tag="tag-1-1", + ) + self._execute_read(multi_use=False, request_options=request_options) + + def test_read_w_request_and_transaction_tag_success(self): + request_options = RequestOptions( + request_tag="tag-1", + transaction_tag="tag-1-1", + ) + self._execute_read(multi_use=False, request_options=request_options) + + def test_read_w_request_and_transaction_tag_dictionary_success(self): + request_options = {"request_tag": "tag-1", "transaction_tag": "tag-1-1"} + self._execute_read(multi_use=False, request_options=request_options) + + def test_read_w_incorrect_tag_dictionary_error(self): + request_options = {"incorrect_tag": "tag-1-1"} + with self.assertRaises(ValueError): + self._execute_read(multi_use=False, request_options=request_options) def test_read_wo_multi_use_w_read_request_count_gt_0(self): with self.assertRaises(ValueError): - self._read_helper(multi_use=False, count=1) + self._execute_read(multi_use=False, count=1) + + def test_read_w_multi_use_w_first(self): + self._execute_read(multi_use=True, first=True) def test_read_w_multi_use_wo_first(self): - self._read_helper(multi_use=True, first=False) + self._execute_read(multi_use=True, first=False) def test_read_w_multi_use_wo_first_w_count_gt_0(self): - self._read_helper(multi_use=True, first=False, count=1) + self._execute_read(multi_use=True, first=False, count=1) def test_read_w_multi_use_w_first_w_partition(self): PARTITION = b"FADEABED" - self._read_helper(multi_use=True, first=True, partition=PARTITION) + self._execute_read(multi_use=True, first=True, partition=PARTITION) def test_read_w_multi_use_w_first_w_count_gt_0(self): with self.assertRaises(ValueError): - self._read_helper(multi_use=True, first=True, count=1) + self._execute_read(multi_use=True, first=True, count=1) + + def test_read_w_timeout_param(self): + self._execute_read(multi_use=True, 
first=False, timeout=2.0) + + def test_read_w_retry_param(self): + self._execute_read(multi_use=True, first=False, retry=Retry(deadline=60)) + + def test_read_w_timeout_and_retry_params(self): + self._execute_read( + multi_use=True, first=False, retry=Retry(deadline=60), timeout=2.0 + ) + + def test_read_w_directed_read_options(self): + self._execute_read(multi_use=False, directed_read_options=DIRECTED_READ_OPTIONS) + + def test_read_w_directed_read_options_at_client_level(self): + self._execute_read( + multi_use=False, + directed_read_options_at_client_level=DIRECTED_READ_OPTIONS_FOR_CLIENT, + ) + + def test_read_w_directed_read_options_override(self): + self._execute_read( + multi_use=False, + directed_read_options=DIRECTED_READ_OPTIONS, + directed_read_options_at_client_level=DIRECTED_READ_OPTIONS_FOR_CLIENT, + ) + + def test_read_w_precommit_tokens(self): + self._execute_read(multi_use=True, use_multiplexed=True) def test_execute_sql_other_error(self): database = _Database() - database.spanner_api = self._make_spanner_api() + database.spanner_api = build_spanner_api() database.spanner_api.execute_streaming_sql.side_effect = RuntimeError() session = _Session(database) - derived = self._makeDerived(session) + derived = _build_snapshot_derived(session) with self.assertRaises(RuntimeError): list(derived.execute_sql(SQL_QUERY)) - self.assertEqual(derived._execute_sql_count, 1) - - def test_execute_sql_w_params_wo_param_types(self): - database = _Database() - session = _Session(database) - derived = self._makeDerived(session) + self.assertEqual(derived._execute_sql_request_count, 1) - with self.assertRaises(ValueError): - derived.execute_sql(SQL_QUERY_WITH_PARAM, PARAMS) + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + self.assertSpanAttributes( + "CloudSpanner._Derived.execute_sql", + status=StatusCode.ERROR, + attributes=dict( + BASE_ATTRIBUTES, + **{"db.statement": SQL_QUERY, "x_goog_spanner_request_id": req_id}, + ), + ) def _execute_sql_helper( self, @@ -329,90 +1129,110 @@ def _execute_sql_helper( partition=None, sql_count=0, query_options=None, - timeout=google.api_core.gapic_v1.method.DEFAULT, - retry=google.api_core.gapic_v1.method.DEFAULT, + request_options=None, + timeout=gapic_v1.method.DEFAULT, + retry=gapic_v1.method.DEFAULT, + directed_read_options=None, + directed_read_options_at_client_level=None, + use_multiplexed=False, ): + """Helper for testing _SnapshotBase.execute_sql(). Executes method and verifies + transaction state, begin transaction API call, and span attributes and events. 
+ """ + from google.protobuf.struct_pb2 import Struct - from google.cloud.spanner_v1.proto.result_set_pb2 import ( + from google.cloud.spanner_v1 import ( PartialResultSet, ResultSetMetadata, ResultSetStats, ) - from google.cloud.spanner_v1.proto.transaction_pb2 import ( - TransactionSelector, - TransactionOptions, - ) - from google.cloud.spanner_v1.proto.type_pb2 import Type, StructType - from google.cloud.spanner_v1.proto.type_pb2 import STRING, INT64 + from google.cloud.spanner_v1 import ExecuteSqlRequest + from google.cloud.spanner_v1 import Type, StructType + from google.cloud.spanner_v1 import TypeCode from google.cloud.spanner_v1._helpers import ( _make_value_pb, _merge_query_options, ) - VALUES = [[u"bharney", u"rhubbyl", 31], [u"phred", u"phlyntstone", 32]] + VALUES = [["bharney", "rhubbyl", 31], ["phred", "phlyntstone", 32]] VALUE_PBS = [[_make_value_pb(item) for item in row] for row in VALUES] MODE = 2 # PROFILE struct_type_pb = StructType( fields=[ - StructType.Field(name="first_name", type=Type(code=STRING)), - StructType.Field(name="last_name", type=Type(code=STRING)), - StructType.Field(name="age", type=Type(code=INT64)), + StructType.Field(name="first_name", type_=Type(code=TypeCode.STRING)), + StructType.Field(name="last_name", type_=Type(code=TypeCode.STRING)), + StructType.Field(name="age", type_=Type(code=TypeCode.INT64)), ] ) - metadata_pb = ResultSetMetadata(row_type=struct_type_pb) + + # If the transaction has not already begun, the first result set will + # include metadata with information about the newly-begun transaction. + transaction_pb = build_transaction_pb(id=TXN_ID) if first else None + metadata_pb = ResultSetMetadata( + row_type=struct_type_pb, + transaction=transaction_pb, + ) + stats_pb = ResultSetStats( query_stats=Struct(fields={"rows_returned": _make_value_pb(2)}) ) - result_sets = [ - PartialResultSet(values=VALUE_PBS[0], metadata=metadata_pb), - PartialResultSet(values=VALUE_PBS[1], stats=stats_pb), - ] + + # Precommit tokens will be included in the result sets if the transaction is on + # a multiplexed session. Return the precommit tokens out of order to verify that + # the transaction tracks the one with the highest sequence number. 
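Alongside the precommit tokens, the first result set built below carries the metadata for the newly-begun transaction (see the comment above), and the helper later asserts that ``derived._transaction_id`` was captured from it. A minimal sketch of that capture step, with an illustrative function name rather than the library's API:

.. code-block:: python

    def capture_transaction(snapshot, partial_result_set):
        # On a begin-with-first-request flow, the transaction ID
        # arrives on the first PartialResultSet's metadata; later
        # requests can then reference the transaction by ID.
        transaction = partial_result_set.metadata.transaction
        if transaction.id:
            snapshot._transaction_id = transaction.id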
+ partial_result_set_1_args = {"metadata": metadata_pb} + if use_multiplexed: + partial_result_set_1_args["precommit_token"] = PRECOMMIT_TOKEN_2 + partial_result_set_1 = PartialResultSet(**partial_result_set_1_args) + + partial_result_set_2_args = {"stats": stats_pb} + if use_multiplexed: + partial_result_set_2_args["precommit_token"] = PRECOMMIT_TOKEN_1 + partial_result_set_2 = PartialResultSet(**partial_result_set_2_args) + + result_sets = [partial_result_set_1, partial_result_set_2] + + for i in range(len(result_sets)): + result_sets[i].values.extend(VALUE_PBS[i]) iterator = _MockIterator(*result_sets) - database = _Database() - api = database.spanner_api = self._make_spanner_api() + database = _Database( + directed_read_options=directed_read_options_at_client_level + ) + api = database.spanner_api = build_spanner_api() api.execute_streaming_sql.return_value = iterator session = _Session(database) - derived = self._makeDerived(session) - derived._multi_use = multi_use + derived = _build_snapshot_derived(session, multi_use=multi_use) derived._read_request_count = count - derived._execute_sql_count = sql_count + derived._execute_sql_request_count = sql_count if not first: derived._transaction_id = TXN_ID + if request_options is None: + request_options = RequestOptions() + elif type(request_options) is dict: + request_options = RequestOptions(request_options) + + transaction_selector_pb = derived._build_transaction_selector_pb() + result_set = derived.execute_sql( SQL_QUERY_WITH_PARAM, PARAMS, PARAM_TYPES, query_mode=MODE, query_options=query_options, + request_options=request_options, partition=partition, retry=retry, timeout=timeout, + directed_read_options=directed_read_options, ) self.assertEqual(derived._read_request_count, count + 1) - if multi_use: - self.assertIs(result_set._source, derived) - else: - self.assertIsNone(result_set._source) - self.assertEqual(list(result_set), VALUES) self.assertEqual(result_set.metadata, metadata_pb) self.assertEqual(result_set.stats, stats_pb) - txn_options = TransactionOptions( - read_only=TransactionOptions.ReadOnly(strong=True) - ) - - if multi_use: - if first: - expected_transaction = TransactionSelector(begin=txn_options) - else: - expected_transaction = TransactionSelector(id=TXN_ID) - else: - expected_transaction = TransactionSelector(single_use=txn_options) - expected_params = Struct( fields={key: _make_value_pb(value) for (key, value) in PARAMS.items()} ) @@ -423,22 +1243,63 @@ def _execute_sql_helper( expected_query_options, query_options ) - api.execute_streaming_sql.assert_called_once_with( - self.SESSION_NAME, - SQL_QUERY_WITH_PARAM, - transaction=expected_transaction, + if derived._read_only: + # Transaction tag is ignored for read only requests. 
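Both read and execute-SQL helpers normalize request options the same way before computing expectations: callers may pass either a dict or a ``RequestOptions`` message, and for read-only requests any transaction tag is discarded before the request is built. A standalone sketch mirroring the helper code, not the library's internals:

.. code-block:: python

    from google.cloud.spanner_v1 import RequestOptions

    def normalize_request_options(request_options, read_only):
        # Accept None, a dict, or a RequestOptions message.
        if request_options is None:
            request_options = RequestOptions()
        elif isinstance(request_options, dict):
            request_options = RequestOptions(request_options)
        # Transaction tags only apply to read-write requests.
        if read_only:
            request_options.transaction_tag = None
        return request_options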
+ expected_request_options = request_options + expected_request_options.transaction_tag = None + + expected_directed_read_options = ( + directed_read_options + if directed_read_options is not None + else directed_read_options_at_client_level + ) + + expected_request = ExecuteSqlRequest( + session=session.name, + sql=SQL_QUERY_WITH_PARAM, + transaction=transaction_selector_pb, params=expected_params, param_types=PARAM_TYPES, query_mode=MODE, query_options=expected_query_options, + request_options=expected_request_options, partition_token=partition, seqno=sql_count, - metadata=[("google-cloud-resource-prefix", database.name)], + directed_read_options=expected_directed_read_options, + ) + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + api.execute_streaming_sql.assert_called_once_with( + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + req_id, + ), + ], timeout=timeout, retry=retry, ) - self.assertEqual(derived._execute_sql_count, sql_count + 1) + self.assertEqual(derived._execute_sql_request_count, sql_count + 1) + + self.assertSpanAttributes( + "CloudSpanner._Derived.execute_sql", + status=StatusCode.OK, + attributes=dict( + BASE_ATTRIBUTES, + **{ + "db.statement": SQL_QUERY_WITH_PARAM, + "x_goog_spanner_request_id": req_id, + }, + ), + ) + + if first: + self.assertEqual(derived._transaction_id, TXN_ID) + + if use_multiplexed: + self.assertEqual(derived._precommit_token, PRECOMMIT_TOKEN_2) def test_execute_sql_wo_multi_use(self): self._execute_sql_helper(multi_use=False) @@ -467,22 +1328,86 @@ def test_execute_sql_w_timeout(self): self._execute_sql_helper(multi_use=False, timeout=None) def test_execute_sql_w_query_options(self): - from google.cloud.spanner_v1.proto.spanner_pb2 import ExecuteSqlRequest + from google.cloud.spanner_v1 import ExecuteSqlRequest self._execute_sql_helper( multi_use=False, query_options=ExecuteSqlRequest.QueryOptions(optimizer_version="3"), ) + def test_execute_sql_w_request_options(self): + self._execute_sql_helper( + multi_use=False, + request_options=RequestOptions( + priority=RequestOptions.Priority.PRIORITY_MEDIUM + ), + ) + + def test_execute_sql_w_request_tag_success(self): + request_options = RequestOptions( + request_tag="tag-1", + ) + self._execute_sql_helper(multi_use=False, request_options=request_options) + + def test_execute_sql_w_transaction_tag_success(self): + request_options = RequestOptions( + transaction_tag="tag-1-1", + ) + self._execute_sql_helper(multi_use=False, request_options=request_options) + + def test_execute_sql_w_request_and_transaction_tag_success(self): + request_options = RequestOptions( + request_tag="tag-1", + transaction_tag="tag-1-1", + ) + self._execute_sql_helper(multi_use=False, request_options=request_options) + + def test_execute_sql_w_request_and_transaction_tag_dictionary_success(self): + request_options = {"request_tag": "tag-1", "transaction_tag": "tag-1-1"} + self._execute_sql_helper(multi_use=False, request_options=request_options) + + def test_execute_sql_w_incorrect_tag_dictionary_error(self): + request_options = {"incorrect_tag": "tag-1-1"} + with self.assertRaises(ValueError): + self._execute_sql_helper(multi_use=False, request_options=request_options) + + def test_execute_sql_w_directed_read_options(self): + self._execute_sql_helper( + multi_use=False, directed_read_options=DIRECTED_READ_OPTIONS + ) + + def test_execute_sql_w_directed_read_options_at_client_level(self): + 
self._execute_sql_helper( + multi_use=False, + directed_read_options_at_client_level=DIRECTED_READ_OPTIONS_FOR_CLIENT, + ) + + def test_execute_sql_w_directed_read_options_override(self): + self._execute_sql_helper( + multi_use=False, + directed_read_options=DIRECTED_READ_OPTIONS, + directed_read_options_at_client_level=DIRECTED_READ_OPTIONS_FOR_CLIENT, + ) + + def test_execute_sql_w_precommit_tokens(self): + self._execute_sql_helper(multi_use=True, use_multiplexed=True) + def _partition_read_helper( - self, multi_use, w_txn, size=None, max_partitions=None, index=None + self, + multi_use, + w_txn, + size=None, + max_partitions=None, + index=None, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, ): from google.cloud.spanner_v1.keyset import KeySet - from google.cloud.spanner_v1.types import Partition - from google.cloud.spanner_v1.types import PartitionOptions - from google.cloud.spanner_v1.types import PartitionResponse - from google.cloud.spanner_v1.types import Transaction - from google.cloud.spanner_v1.proto.transaction_pb2 import TransactionSelector + from google.cloud.spanner_v1 import Partition + from google.cloud.spanner_v1 import PartitionOptions + from google.cloud.spanner_v1 import PartitionReadRequest + from google.cloud.spanner_v1 import PartitionResponse + from google.cloud.spanner_v1 import Transaction keyset = KeySet(all_=True) new_txn_id = b"ABECAB91" @@ -496,14 +1421,17 @@ def _partition_read_helper( transaction=Transaction(id=new_txn_id), ) database = _Database() - api = database.spanner_api = self._make_spanner_api() + api = database.spanner_api = build_spanner_api() api.partition_read.return_value = response session = _Session(database) - derived = self._makeDerived(session) + derived = _build_snapshot_derived(session) derived._multi_use = multi_use + if w_txn: derived._transaction_id = TXN_ID + transaction_selector_pb = derived._build_transaction_selector_pb() + tokens = list( derived.partition_read( TABLE_NAME, @@ -512,26 +1440,53 @@ def _partition_read_helper( index=index, partition_size_bytes=size, max_partitions=max_partitions, + retry=retry, + timeout=timeout, ) ) self.assertEqual(tokens, [token_1, token_2]) - expected_txn_selector = TransactionSelector(id=TXN_ID) - expected_partition_options = PartitionOptions( partition_size_bytes=size, max_partitions=max_partitions ) - api.partition_read.assert_called_once_with( - session=self.SESSION_NAME, + expected_request = PartitionReadRequest( + session=session.name, table=TABLE_NAME, columns=COLUMNS, key_set=keyset._to_pb(), - transaction=expected_txn_selector, + transaction=transaction_selector_pb, index=index, partition_options=expected_partition_options, - metadata=[("google-cloud-resource-prefix", database.name)], + ) + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + api.partition_read.assert_called_once_with( + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + req_id, + ), + ], + retry=retry, + timeout=timeout, + ) + + want_span_attributes = dict( + BASE_ATTRIBUTES, + table_id=TABLE_NAME, + columns=tuple(COLUMNS), + x_goog_spanner_request_id=req_id, + ) + if index: + want_span_attributes["index"] = index + self.assertSpanAttributes( + "CloudSpanner._Derived.partition_read", + status=StatusCode.OK, + attributes=want_span_attributes, ) def test_partition_read_single_use_raises(self): @@ -547,16 +1502,60 @@ def 
test_partition_read_other_error(self): keyset = KeySet(all_=True) database = _Database() - database.spanner_api = self._make_spanner_api() + database.spanner_api = build_spanner_api() database.spanner_api.partition_read.side_effect = RuntimeError() session = _Session(database) - derived = self._makeDerived(session) - derived._multi_use = True + derived = _build_snapshot_derived(session, multi_use=True) derived._transaction_id = TXN_ID with self.assertRaises(RuntimeError): list(derived.partition_read(TABLE_NAME, COLUMNS, keyset)) + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + self.assertSpanAttributes( + "CloudSpanner._Derived.partition_read", + status=StatusCode.ERROR, + attributes=dict( + BASE_ATTRIBUTES, + table_id=TABLE_NAME, + columns=tuple(COLUMNS), + x_goog_spanner_request_id=req_id, + ), + ) + + def test_partition_read_w_retry(self): + from google.cloud.spanner_v1.keyset import KeySet + from google.cloud.spanner_v1 import Partition + from google.cloud.spanner_v1 import PartitionResponse + from google.cloud.spanner_v1 import Transaction + + keyset = KeySet(all_=True) + database = _Database() + api = database.spanner_api = build_spanner_api() + new_txn_id = b"ABECAB91" + token_1 = b"FACE0FFF" + token_2 = b"BADE8CAF" + response = PartitionResponse( + partitions=[ + Partition(partition_token=token_1), + Partition(partition_token=token_2), + ], + transaction=Transaction(id=new_txn_id), + ) + database.spanner_api.partition_read.side_effect = [ + INTERNAL_SERVER_ERROR_UNEXPECTED_EOS, + response, + ] + + session = _Session(database) + derived = _build_snapshot_derived(session) + derived._multi_use = True + derived._transaction_id = TXN_ID + + list(derived.partition_read(TABLE_NAME, COLUMNS, keyset)) + + self.assertEqual(api.partition_read.call_count, 2) + def test_partition_read_ok_w_index_no_options(self): self._partition_read_helper(multi_use=True, w_txn=True, index="index") @@ -566,13 +1565,38 @@ def test_partition_read_ok_w_size(self): def test_partition_read_ok_w_max_partitions(self): self._partition_read_helper(multi_use=True, w_txn=True, max_partitions=4) - def _partition_query_helper(self, multi_use, w_txn, size=None, max_partitions=None): + def test_partition_read_ok_w_timeout_param(self): + self._partition_read_helper(multi_use=True, w_txn=True, timeout=2.0) + + def test_partition_read_ok_w_retry_param(self): + self._partition_read_helper( + multi_use=True, w_txn=True, retry=Retry(deadline=60) + ) + + def test_partition_read_ok_w_timeout_and_retry_params(self): + self._partition_read_helper( + multi_use=True, w_txn=True, retry=Retry(deadline=60), timeout=2.0 + ) + + def _partition_query_helper( + self, + multi_use, + w_txn, + size=None, + max_partitions=None, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + ): + """Helper for testing _SnapshotBase.partition_query(). Executes method and verifies + transaction state, begin transaction API call, and span attributes and events. 
+ """ + from google.protobuf.struct_pb2 import Struct - from google.cloud.spanner_v1.types import Partition - from google.cloud.spanner_v1.types import PartitionOptions - from google.cloud.spanner_v1.types import PartitionResponse - from google.cloud.spanner_v1.types import Transaction - from google.cloud.spanner_v1.proto.transaction_pb2 import TransactionSelector + from google.cloud.spanner_v1 import Partition + from google.cloud.spanner_v1 import PartitionOptions + from google.cloud.spanner_v1 import PartitionQueryRequest + from google.cloud.spanner_v1 import PartitionResponse + from google.cloud.spanner_v1 import Transaction from google.cloud.spanner_v1._helpers import _make_value_pb new_txn_id = b"ABECAB91" @@ -586,14 +1610,15 @@ def _partition_query_helper(self, multi_use, w_txn, size=None, max_partitions=No transaction=Transaction(id=new_txn_id), ) database = _Database() - api = database.spanner_api = self._make_spanner_api() + api = database.spanner_api = build_spanner_api() api.partition_query.return_value = response session = _Session(database) - derived = self._makeDerived(session) - derived._multi_use = multi_use + derived = _build_snapshot_derived(session, multi_use=multi_use) if w_txn: derived._transaction_id = TXN_ID + transaction_selector_pb = derived._build_transaction_selector_pb() + tokens = list( derived.partition_query( SQL_QUERY_WITH_PARAM, @@ -601,6 +1626,8 @@ def _partition_query_helper(self, multi_use, w_txn, size=None, max_partitions=No PARAM_TYPES, partition_size_bytes=size, max_partitions=max_partitions, + retry=retry, + timeout=timeout, ) ) @@ -610,43 +1637,65 @@ def _partition_query_helper(self, multi_use, w_txn, size=None, max_partitions=No fields={key: _make_value_pb(value) for (key, value) in PARAMS.items()} ) - expected_txn_selector = TransactionSelector(id=TXN_ID) - expected_partition_options = PartitionOptions( partition_size_bytes=size, max_partitions=max_partitions ) - api.partition_query.assert_called_once_with( - session=self.SESSION_NAME, + expected_request = PartitionQueryRequest( + session=session.name, sql=SQL_QUERY_WITH_PARAM, - transaction=expected_txn_selector, + transaction=transaction_selector_pb, params=expected_params, param_types=PARAM_TYPES, partition_options=expected_partition_options, - metadata=[("google-cloud-resource-prefix", database.name)], + ) + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + api.partition_query.assert_called_once_with( + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + req_id, + ), + ], + retry=retry, + timeout=timeout, + ) + + self.assertSpanAttributes( + "CloudSpanner._Derived.partition_query", + status=StatusCode.OK, + attributes=dict( + BASE_ATTRIBUTES, + **{ + "db.statement": SQL_QUERY_WITH_PARAM, + "x_goog_spanner_request_id": req_id, + }, + ), ) def test_partition_query_other_error(self): database = _Database() - database.spanner_api = self._make_spanner_api() + database.spanner_api = build_spanner_api() database.spanner_api.partition_query.side_effect = RuntimeError() session = _Session(database) - derived = self._makeDerived(session) - derived._multi_use = True + derived = _build_snapshot_derived(session, multi_use=True) derived._transaction_id = TXN_ID with self.assertRaises(RuntimeError): list(derived.partition_query(SQL_QUERY)) - def test_partition_query_w_params_wo_param_types(self): - database = _Database() - session = 
_Session(database) - derived = self._makeDerived(session) - derived._multi_use = True - derived._transaction_id = TXN_ID - - with self.assertRaises(ValueError): - list(derived.partition_query(SQL_QUERY_WITH_PARAM, PARAMS)) + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + self.assertSpanAttributes( + "CloudSpanner._Derived.partition_query", + status=StatusCode.ERROR, + attributes=dict( + BASE_ATTRIBUTES, + **{"db.statement": SQL_QUERY, "x_goog_spanner_request_id": req_id}, + ), + ) def test_partition_query_single_use_raises(self): with self.assertRaises(ValueError): @@ -665,9 +1714,21 @@ def test_partition_query_ok_w_size(self): def test_partition_query_ok_w_max_partitions(self): self._partition_query_helper(multi_use=True, w_txn=True, max_partitions=4) + def test_partition_query_ok_w_timeout_param(self): + self._partition_query_helper(multi_use=True, w_txn=True, timeout=2.0) + + def test_partition_query_ok_w_retry_param(self): + self._partition_query_helper( + multi_use=True, w_txn=True, retry=Retry(deadline=30) + ) + + def test_partition_query_ok_w_timeout_and_retry_params(self): + self._partition_query_helper( + multi_use=True, w_txn=True, retry=Retry(deadline=60), timeout=2.0 + ) -class TestSnapshot(unittest.TestCase): +class TestSnapshot(OpenTelemetryBase): PROJECT_ID = "project-id" INSTANCE_ID = "instance-id" INSTANCE_NAME = "projects/" + PROJECT_ID + "/instances/" + INSTANCE_ID @@ -684,320 +1745,155 @@ def _getTargetClass(self): def _make_one(self, *args, **kwargs): return self._getTargetClass()(*args, **kwargs) - def _make_spanner_api(self): - import google.cloud.spanner_v1.gapic.spanner_client - - return mock.create_autospec( - google.cloud.spanner_v1.gapic.spanner_client.SpannerClient, instance=True - ) - - def _makeTimestamp(self): - import datetime - from google.cloud._helpers import UTC - - return datetime.datetime.utcnow().replace(tzinfo=UTC) - def _makeDuration(self, seconds=1, microseconds=0): import datetime return datetime.timedelta(seconds=seconds, microseconds=microseconds) def test_ctor_defaults(self): - session = _Session() - snapshot = self._make_one(session) + session = build_session() + snapshot = build_snapshot(session=session) + + # Attributes from _SessionWrapper. self.assertIs(snapshot._session, session) + + # Attributes from _SnapshotBase. + self.assertTrue(snapshot._read_only) + self.assertFalse(snapshot._multi_use) + self.assertEqual(snapshot._execute_sql_request_count, 0) + self.assertEqual(snapshot._read_request_count, 0) + self.assertIsNone(snapshot._transaction_id) + self.assertIsNone(snapshot._precommit_token) + self.assertIsInstance(snapshot._lock, type(Lock())) + + # Attributes from Snapshot. 
self.assertTrue(snapshot._strong) self.assertIsNone(snapshot._read_timestamp) self.assertIsNone(snapshot._min_read_timestamp) self.assertIsNone(snapshot._max_staleness) self.assertIsNone(snapshot._exact_staleness) - self.assertFalse(snapshot._multi_use) def test_ctor_w_multiple_options(self): - timestamp = self._makeTimestamp() - duration = self._makeDuration() - session = _Session() - with self.assertRaises(ValueError): - self._make_one(session, read_timestamp=timestamp, max_staleness=duration) + build_snapshot(read_timestamp=datetime.min, max_staleness=timedelta()) def test_ctor_w_read_timestamp(self): - timestamp = self._makeTimestamp() - session = _Session() - snapshot = self._make_one(session, read_timestamp=timestamp) - self.assertIs(snapshot._session, session) - self.assertFalse(snapshot._strong) - self.assertEqual(snapshot._read_timestamp, timestamp) - self.assertIsNone(snapshot._min_read_timestamp) - self.assertIsNone(snapshot._max_staleness) - self.assertIsNone(snapshot._exact_staleness) - self.assertFalse(snapshot._multi_use) + snapshot = build_snapshot(read_timestamp=TIMESTAMP) + self.assertEqual(snapshot._read_timestamp, TIMESTAMP) def test_ctor_w_min_read_timestamp(self): - timestamp = self._makeTimestamp() - session = _Session() - snapshot = self._make_one(session, min_read_timestamp=timestamp) - self.assertIs(snapshot._session, session) - self.assertFalse(snapshot._strong) - self.assertIsNone(snapshot._read_timestamp) - self.assertEqual(snapshot._min_read_timestamp, timestamp) - self.assertIsNone(snapshot._max_staleness) - self.assertIsNone(snapshot._exact_staleness) - self.assertFalse(snapshot._multi_use) + snapshot = build_snapshot(min_read_timestamp=TIMESTAMP) + self.assertEqual(snapshot._min_read_timestamp, TIMESTAMP) def test_ctor_w_max_staleness(self): - duration = self._makeDuration() - session = _Session() - snapshot = self._make_one(session, max_staleness=duration) - self.assertIs(snapshot._session, session) - self.assertFalse(snapshot._strong) - self.assertIsNone(snapshot._read_timestamp) - self.assertIsNone(snapshot._min_read_timestamp) - self.assertEqual(snapshot._max_staleness, duration) - self.assertIsNone(snapshot._exact_staleness) - self.assertFalse(snapshot._multi_use) + snapshot = build_snapshot(max_staleness=DURATION) + self.assertEqual(snapshot._max_staleness, DURATION) def test_ctor_w_exact_staleness(self): - duration = self._makeDuration() - session = _Session() - snapshot = self._make_one(session, exact_staleness=duration) - self.assertIs(snapshot._session, session) - self.assertFalse(snapshot._strong) - self.assertIsNone(snapshot._read_timestamp) - self.assertIsNone(snapshot._min_read_timestamp) - self.assertIsNone(snapshot._max_staleness) - self.assertEqual(snapshot._exact_staleness, duration) - self.assertFalse(snapshot._multi_use) + snapshot = build_snapshot(exact_staleness=DURATION) + self.assertEqual(snapshot._exact_staleness, DURATION) def test_ctor_w_multi_use(self): - session = _Session() - snapshot = self._make_one(session, multi_use=True) - self.assertTrue(snapshot._session is session) - self.assertTrue(snapshot._strong) - self.assertIsNone(snapshot._read_timestamp) - self.assertIsNone(snapshot._min_read_timestamp) - self.assertIsNone(snapshot._max_staleness) - self.assertIsNone(snapshot._exact_staleness) + snapshot = build_snapshot(multi_use=True) self.assertTrue(snapshot._multi_use) def test_ctor_w_multi_use_and_read_timestamp(self): - timestamp = self._makeTimestamp() - session = _Session() - snapshot = self._make_one(session, 
read_timestamp=timestamp, multi_use=True) - self.assertTrue(snapshot._session is session) - self.assertFalse(snapshot._strong) - self.assertEqual(snapshot._read_timestamp, timestamp) - self.assertIsNone(snapshot._min_read_timestamp) - self.assertIsNone(snapshot._max_staleness) - self.assertIsNone(snapshot._exact_staleness) + snapshot = build_snapshot(multi_use=True, read_timestamp=TIMESTAMP) self.assertTrue(snapshot._multi_use) + self.assertEqual(snapshot._read_timestamp, TIMESTAMP) def test_ctor_w_multi_use_and_min_read_timestamp(self): - timestamp = self._makeTimestamp() - session = _Session() - with self.assertRaises(ValueError): - self._make_one(session, min_read_timestamp=timestamp, multi_use=True) + build_snapshot(multi_use=True, min_read_timestamp=TIMESTAMP) def test_ctor_w_multi_use_and_max_staleness(self): - duration = self._makeDuration() - session = _Session() - with self.assertRaises(ValueError): - self._make_one(session, max_staleness=duration, multi_use=True) + build_snapshot(multi_use=True, max_staleness=DURATION) def test_ctor_w_multi_use_and_exact_staleness(self): - duration = self._makeDuration() - session = _Session() - snapshot = self._make_one(session, exact_staleness=duration, multi_use=True) - self.assertTrue(snapshot._session is session) - self.assertFalse(snapshot._strong) - self.assertIsNone(snapshot._read_timestamp) - self.assertIsNone(snapshot._min_read_timestamp) - self.assertIsNone(snapshot._max_staleness) - self.assertEqual(snapshot._exact_staleness, duration) + snapshot = build_snapshot(multi_use=True, exact_staleness=DURATION) self.assertTrue(snapshot._multi_use) + self.assertEqual(snapshot._exact_staleness, DURATION) - def test__make_txn_selector_w_transaction_id(self): - session = _Session() - snapshot = self._make_one(session) - snapshot._transaction_id = TXN_ID - selector = snapshot._make_txn_selector() - self.assertEqual(selector.id, TXN_ID) - - def test__make_txn_selector_strong(self): - session = _Session() - snapshot = self._make_one(session) - selector = snapshot._make_txn_selector() - options = selector.single_use - self.assertTrue(options.read_only.strong) - - def test__make_txn_selector_w_read_timestamp(self): - from google.cloud._helpers import _pb_timestamp_to_datetime - - timestamp = self._makeTimestamp() - session = _Session() - snapshot = self._make_one(session, read_timestamp=timestamp) - selector = snapshot._make_txn_selector() - options = selector.single_use - self.assertEqual( - _pb_timestamp_to_datetime(options.read_only.read_timestamp), timestamp - ) - - def test__make_txn_selector_w_min_read_timestamp(self): - from google.cloud._helpers import _pb_timestamp_to_datetime + def test__build_transaction_options_strong(self): + snapshot = build_snapshot() + options = snapshot._build_transaction_options_pb() - timestamp = self._makeTimestamp() - session = _Session() - snapshot = self._make_one(session, min_read_timestamp=timestamp) - selector = snapshot._make_txn_selector() - options = selector.single_use self.assertEqual( - _pb_timestamp_to_datetime(options.read_only.min_read_timestamp), timestamp - ) - - def test__make_txn_selector_w_max_staleness(self): - duration = self._makeDuration(seconds=3, microseconds=123456) - session = _Session() - snapshot = self._make_one(session, max_staleness=duration) - selector = snapshot._make_txn_selector() - options = selector.single_use - self.assertEqual(options.read_only.max_staleness.seconds, 3) - self.assertEqual(options.read_only.max_staleness.nanos, 123456000) - - def 
test__make_txn_selector_w_exact_staleness(self): - duration = self._makeDuration(seconds=3, microseconds=123456) - session = _Session() - snapshot = self._make_one(session, exact_staleness=duration) - selector = snapshot._make_txn_selector() - options = selector.single_use - self.assertEqual(options.read_only.exact_staleness.seconds, 3) - self.assertEqual(options.read_only.exact_staleness.nanos, 123456000) - - def test__make_txn_selector_strong_w_multi_use(self): - session = _Session() - snapshot = self._make_one(session, multi_use=True) - selector = snapshot._make_txn_selector() - options = selector.begin - self.assertTrue(options.read_only.strong) - - def test__make_txn_selector_w_read_timestamp_w_multi_use(self): - from google.cloud._helpers import _pb_timestamp_to_datetime - - timestamp = self._makeTimestamp() - session = _Session() - snapshot = self._make_one(session, read_timestamp=timestamp, multi_use=True) - selector = snapshot._make_txn_selector() - options = selector.begin - self.assertEqual( - _pb_timestamp_to_datetime(options.read_only.read_timestamp), timestamp + options, + TransactionOptions( + read_only=TransactionOptions.ReadOnly( + strong=True, return_read_timestamp=True + ) + ), ) - def test__make_txn_selector_w_exact_staleness_w_multi_use(self): - duration = self._makeDuration(seconds=3, microseconds=123456) - session = _Session() - snapshot = self._make_one(session, exact_staleness=duration, multi_use=True) - selector = snapshot._make_txn_selector() - options = selector.begin - self.assertEqual(options.read_only.exact_staleness.seconds, 3) - self.assertEqual(options.read_only.exact_staleness.nanos, 123456000) - - def test_begin_wo_multi_use(self): - session = _Session() - snapshot = self._make_one(session) - with self.assertRaises(ValueError): - snapshot.begin() - - def test_begin_w_read_request_count_gt_0(self): - session = _Session() - snapshot = self._make_one(session, multi_use=True) - snapshot._read_request_count = 1 - with self.assertRaises(ValueError): - snapshot.begin() - - def test_begin_w_existing_txn_id(self): - session = _Session() - snapshot = self._make_one(session, multi_use=True) - snapshot._transaction_id = TXN_ID - with self.assertRaises(ValueError): - snapshot.begin() - - def test_begin_w_other_error(self): - database = _Database() - database.spanner_api = self._make_spanner_api() - database.spanner_api.begin_transaction.side_effect = RuntimeError() - timestamp = self._makeTimestamp() - session = _Session(database) - snapshot = self._make_one(session, read_timestamp=timestamp, multi_use=True) - - with self.assertRaises(RuntimeError): - snapshot.begin() + def test__build_transaction_options_w_read_timestamp(self): + snapshot = build_snapshot(read_timestamp=TIMESTAMP) + options = snapshot._build_transaction_options_pb() - def test_begin_ok_exact_staleness(self): - from google.protobuf.duration_pb2 import Duration - from google.cloud.spanner_v1.proto.transaction_pb2 import ( - Transaction as TransactionPB, - TransactionOptions, + self.assertEqual( + options, + TransactionOptions( + read_only=TransactionOptions.ReadOnly( + read_timestamp=TIMESTAMP, return_read_timestamp=True + ) + ), ) - transaction_pb = TransactionPB(id=TXN_ID) - database = _Database() - api = database.spanner_api = self._make_spanner_api() - api.begin_transaction.return_value = transaction_pb - duration = self._makeDuration(seconds=SECONDS, microseconds=MICROS) - session = _Session(database) - snapshot = self._make_one(session, exact_staleness=duration, multi_use=True) - - txn_id = 
snapshot.begin() + def test__build_transaction_options_w_min_read_timestamp(self): + snapshot = build_snapshot(min_read_timestamp=TIMESTAMP) + options = snapshot._build_transaction_options_pb() - self.assertEqual(txn_id, TXN_ID) - self.assertEqual(snapshot._transaction_id, TXN_ID) - - expected_duration = Duration(seconds=SECONDS, nanos=MICROS * 1000) - expected_txn_options = TransactionOptions( - read_only=TransactionOptions.ReadOnly(exact_staleness=expected_duration) + self.assertEqual( + options, + TransactionOptions( + read_only=TransactionOptions.ReadOnly( + min_read_timestamp=TIMESTAMP, return_read_timestamp=True + ) + ), ) - api.begin_transaction.assert_called_once_with( - session.name, - expected_txn_options, - metadata=[("google-cloud-resource-prefix", database.name)], - ) + def test__build_transaction_options_w_max_staleness(self): + snapshot = build_snapshot(max_staleness=DURATION) + options = snapshot._build_transaction_options_pb() - def test_begin_ok_exact_strong(self): - from google.cloud.spanner_v1.proto.transaction_pb2 import ( - Transaction as TransactionPB, - TransactionOptions, + self.assertEqual( + options, + TransactionOptions( + read_only=TransactionOptions.ReadOnly( + max_staleness=DURATION, return_read_timestamp=True + ) + ), ) - transaction_pb = TransactionPB(id=TXN_ID) - database = _Database() - api = database.spanner_api = self._make_spanner_api() - api.begin_transaction.return_value = transaction_pb - session = _Session(database) - snapshot = self._make_one(session, multi_use=True) - - txn_id = snapshot.begin() - - self.assertEqual(txn_id, TXN_ID) - self.assertEqual(snapshot._transaction_id, TXN_ID) + def test__build_transaction_options_w_exact_staleness(self): + snapshot = build_snapshot(exact_staleness=DURATION) + options = snapshot._build_transaction_options_pb() - expected_txn_options = TransactionOptions( - read_only=TransactionOptions.ReadOnly(strong=True) - ) - - api.begin_transaction.assert_called_once_with( - session.name, - expected_txn_options, - metadata=[("google-cloud-resource-prefix", database.name)], + self.assertEqual( + options, + TransactionOptions( + read_only=TransactionOptions.ReadOnly( + exact_staleness=DURATION, return_read_timestamp=True + ) + ), ) class _Client(object): + NTH_CLIENT = AtomicCounter() + def __init__(self): - from google.cloud.spanner_v1.proto.spanner_pb2 import ExecuteSqlRequest + from google.cloud.spanner_v1 import ExecuteSqlRequest self._query_options = ExecuteSqlRequest.QueryOptions(optimizer_version="1") + self._nth_client_id = _Client.NTH_CLIENT.increment() + self._nth_request = AtomicCounter() + + @property + def _next_nth_request(self): + return self._nth_request.increment() class _Instance(object): @@ -1006,9 +1902,41 @@ def __init__(self): class _Database(object): - def __init__(self): + def __init__(self, directed_read_options=None): self.name = "testing" + self._nth_request = 0 self._instance = _Instance() + self._route_to_leader_enabled = True + self._directed_read_options = directed_read_options + + @property + def observability_options(self): + return dict(db_name=self.name) + + @property + def _next_nth_request(self): + self._nth_request += 1 + return self._nth_request + + @property + def _nth_client_id(self): + return 1 + + def metadata_with_request_id( + self, nth_request, nth_attempt, prior_metadata=[], span=None + ): + return _metadata_with_request_id( + self._nth_client_id, + self._channel_id, + nth_request, + nth_attempt, + prior_metadata, + span, + ) + + @property + def _channel_id(self): + return 1 
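The ``_Database`` double above mirrors how the real database composes the ``x-goog-spanner-request-id`` metadata value that these tests assert on. Reading the f-strings the tests build by hand, the six dot-separated fields appear to be a fixed leading ``1``, the process-wide random ID, the client ordinal, the channel ID, the request ordinal, and the attempt number; a sketch under that assumption:

.. code-block:: python

    from google.cloud.spanner_v1.request_id_header import REQ_RAND_PROCESS_ID

    def request_id_value(client_id, channel_id, nth_request, attempt):
        # Illustrative helper; the tests inline this f-string directly.
        return (
            f"1.{REQ_RAND_PROCESS_ID}.{client_id}."
            f"{channel_id}.{nth_request}.{attempt}"
        )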
class _Session(object): @@ -1016,23 +1944,70 @@ def __init__(self, database=None, name=TestSnapshot.SESSION_NAME): self._database = database self.name = name + @property + def session_id(self): + return self.name + class _MockIterator(object): def __init__(self, *values, **kw): self._iter_values = iter(values) self._fail_after = kw.pop("fail_after", False) + self._error = kw.pop("error", Exception) def __iter__(self): return self def __next__(self): - from google.api_core.exceptions import ServiceUnavailable - try: return next(self._iter_values) except StopIteration: if self._fail_after: - raise ServiceUnavailable("testing") + raise self._error raise next = __next__ + + +def _build_snapshot_derived(session=None, multi_use=False, read_only=True) -> _Derived: + """Builds and returns an instance of a minimally- + implemented _Derived class for testing.""" + + session = session or build_session() + if session.session_id is None: + session._session_id = "session-id" + + derived = _Derived(session=session) + derived._multi_use = multi_use + derived._read_only = read_only + + return derived + + +def _build_span_attributes(database: Database, attempt: int = 1) -> Mapping[str, str]: + """Builds the attributes for spans using the given database and extra attributes.""" + + return enrich_with_otel_scope( + { + "db.type": "spanner", + "db.url": "spanner.googleapis.com", + "db.instance": database.name, + "net.host.name": "spanner.googleapis.com", + "gcp.client.service": "spanner", + "gcp.client.version": LIB_VERSION, + "gcp.client.repo": "googleapis/python-spanner", + "x_goog_spanner_request_id": _build_request_id(database, attempt), + } + ) + + +def _build_request_id(database: Database, attempt: int) -> str: + """Builds a request ID for an Spanner Client API request with the given database and attempt number.""" + + client = database._instance._client + return build_request_id( + client_id=client._nth_client_id, + channel_id=database._channel_id, + nth_request=client._nth_request.value, + attempt=attempt, + ) diff --git a/tests/unit/test_spanner.py b/tests/unit/test_spanner.py new file mode 100644 index 0000000000..e35b817858 --- /dev/null +++ b/tests/unit/test_spanner.py @@ -0,0 +1,1376 @@ +# Copyright 2022 Google LLC All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
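The new module's batch-update tests pass a mixed statement list (``dml_statements``, defined below): bare SQL strings alongside ``(sql, params, param_types)`` tuples. A hedged sketch of how ``_batch_update_expected_request`` later maps each entry onto an ``ExecuteBatchDmlRequest.Statement``; the conversion function is illustrative:

.. code-block:: python

    from google.protobuf.struct_pb2 import Struct
    from google.cloud.spanner_v1 import ExecuteBatchDmlRequest
    from google.cloud.spanner_v1._helpers import _make_value_pb

    def to_statement(entry):
        # A bare SQL string becomes a Statement with only ``sql``.
        if isinstance(entry, str):
            return ExecuteBatchDmlRequest.Statement(sql=entry)
        # A (sql, params, param_types) tuple also carries the
        # marshalled parameter values and their declared types.
        sql, params, param_types = entry
        fields = {key: _make_value_pb(value) for key, value in params.items()}
        return ExecuteBatchDmlRequest.Statement(
            sql=sql, params=Struct(fields=fields), param_types=param_types
        )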
+ + +import threading +from google.protobuf.struct_pb2 import Struct +from google.cloud.spanner_v1 import ( + PartialResultSet, + ResultSetMetadata, + ResultSetStats, + ResultSet, + RequestOptions, + Type, + TypeCode, + ExecuteSqlRequest, + ReadRequest, + StructType, + TransactionOptions, + TransactionSelector, + DirectedReadOptions, + ExecuteBatchDmlRequest, + ExecuteBatchDmlResponse, + param_types, + DefaultTransactionOptions, +) +from google.cloud.spanner_v1.types import transaction as transaction_type +from google.cloud.spanner_v1.keyset import KeySet + +from google.cloud.spanner_v1._helpers import ( + AtomicCounter, + _make_value_pb, + _merge_query_options, + _metadata_with_request_id, +) +from google.cloud.spanner_v1.request_id_header import REQ_RAND_PROCESS_ID +import mock + +from google.api_core import gapic_v1 + +from tests._helpers import OpenTelemetryBase + +TABLE_NAME = "citizens" +COLUMNS = ["email", "first_name", "last_name", "age"] +VALUES = [ + ["phred@exammple.com", "Phred", "Phlyntstone", 32], + ["bharney@example.com", "Bharney", "Rhubble", 31], +] +DML_QUERY = """\ +INSERT INTO citizens(first_name, last_name, age) +VALUES ("Phred", "Phlyntstone", 32) +""" +DML_QUERY_WITH_PARAM = """ +INSERT INTO citizens(first_name, last_name, age) +VALUES ("Phred", "Phlyntstone", @age) +""" +SQL_QUERY = """\ +SELECT first_name, last_name, age FROM citizens ORDER BY age""" +SQL_QUERY_WITH_PARAM = """ +SELECT first_name, last_name, email FROM citizens WHERE age <= @max_age""" +PARAMS = {"age": 30} +PARAM_TYPES = {"age": Type(code=TypeCode.INT64)} +KEYS = [["bharney@example.com"], ["phred@example.com"]] +KEYSET = KeySet(keys=KEYS) +INDEX = "email-address-index" +LIMIT = 20 +MODE = 2 +RETRY = gapic_v1.method.DEFAULT +TIMEOUT = gapic_v1.method.DEFAULT +DIRECTED_READ_OPTIONS = { + "include_replicas": { + "replica_selections": [ + { + "location": "us-west1", + "type_": DirectedReadOptions.ReplicaSelection.Type.READ_ONLY, + }, + ], + "auto_failover_disabled": True, + }, +} +insert_dml = "INSERT INTO table(pkey, desc) VALUES (%pkey, %desc)" +insert_params = {"pkey": 12345, "desc": "DESCRIPTION"} +insert_param_types = {"pkey": param_types.INT64, "desc": param_types.STRING} +update_dml = 'UPDATE table SET desc = desc + "-amended"' +delete_dml = "DELETE FROM table WHERE desc IS NULL" + +dml_statements = [ + (insert_dml, insert_params, insert_param_types), + update_dml, + delete_dml, +] + + +class TestTransaction(OpenTelemetryBase): + PROJECT_ID = "project-id" + INSTANCE_ID = "instance-id" + INSTANCE_NAME = "projects/" + PROJECT_ID + "/instances/" + INSTANCE_ID + DATABASE_ID = "database-id" + DATABASE_NAME = INSTANCE_NAME + "/databases/" + DATABASE_ID + SESSION_ID = "session-id" + SESSION_NAME = DATABASE_NAME + "/sessions/" + SESSION_ID + TRANSACTION_ID = b"DEADBEEF" + TRANSACTION_TAG = "transaction-tag" + + BASE_ATTRIBUTES = { + "db.type": "spanner", + "db.url": "spanner.googleapis.com", + "db.instance": "testing", + "net.host.name": "spanner.googleapis.com", + } + + def _getTargetClass(self): + from google.cloud.spanner_v1.transaction import Transaction + + return Transaction + + def _make_one(self, session, *args, **kwargs): + transaction = self._getTargetClass()(session, *args, **kwargs) + session._transaction = transaction + return transaction + + def _make_spanner_api(self): + from google.cloud.spanner_v1 import SpannerClient + + return mock.create_autospec(SpannerClient, instance=True) + + def _execute_update_helper( + self, + transaction, + api, + count=0, + query_options=None, + 
exclude_txn_from_change_streams=False, + isolation_level=TransactionOptions.IsolationLevel.ISOLATION_LEVEL_UNSPECIFIED, + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.READ_LOCK_MODE_UNSPECIFIED, + ): + stats_pb = ResultSetStats(row_count_exact=1) + + transaction_pb = transaction_type.Transaction(id=self.TRANSACTION_ID) + metadata_pb = ResultSetMetadata(transaction=transaction_pb) + api.execute_sql.return_value = ResultSet(stats=stats_pb, metadata=metadata_pb) + + transaction.transaction_tag = self.TRANSACTION_TAG + transaction.exclude_txn_from_change_streams = exclude_txn_from_change_streams + transaction.isolation_level = isolation_level + transaction.read_lock_mode = read_lock_mode + transaction._execute_sql_request_count = count + + row_count = transaction.execute_update( + DML_QUERY_WITH_PARAM, + PARAMS, + PARAM_TYPES, + query_mode=MODE, + query_options=query_options, + request_options=RequestOptions(), + retry=RETRY, + timeout=TIMEOUT, + ) + self.assertEqual(row_count, count + 1) + + def _execute_update_expected_request( + self, + database, + query_options=None, + begin=True, + count=0, + exclude_txn_from_change_streams=False, + isolation_level=TransactionOptions.IsolationLevel.ISOLATION_LEVEL_UNSPECIFIED, + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.READ_LOCK_MODE_UNSPECIFIED, + ): + if begin is True: + expected_transaction = TransactionSelector( + begin=TransactionOptions( + read_write=TransactionOptions.ReadWrite( + read_lock_mode=read_lock_mode + ), + exclude_txn_from_change_streams=exclude_txn_from_change_streams, + isolation_level=isolation_level, + ) + ) + else: + expected_transaction = TransactionSelector(id=self.TRANSACTION_ID) + + expected_params = Struct( + fields={key: _make_value_pb(value) for (key, value) in PARAMS.items()} + ) + + expected_query_options = database._instance._client._query_options + if query_options: + expected_query_options = _merge_query_options( + expected_query_options, query_options + ) + expected_request_options = RequestOptions() + expected_request_options.transaction_tag = self.TRANSACTION_TAG + + expected_request = ExecuteSqlRequest( + session=self.SESSION_NAME, + sql=DML_QUERY_WITH_PARAM, + transaction=expected_transaction, + params=expected_params, + param_types=PARAM_TYPES, + query_mode=MODE, + query_options=expected_query_options, + request_options=expected_request_options, + seqno=count, + ) + + return expected_request + + def _execute_sql_helper( + self, + transaction, + api, + count=0, + partition=None, + sql_count=0, + query_options=None, + directed_read_options=None, + ): + VALUES = [["bharney", "rhubbyl", 31], ["phred", "phlyntstone", 32]] + VALUE_PBS = [[_make_value_pb(item) for item in row] for row in VALUES] + struct_type_pb = StructType( + fields=[ + StructType.Field(name="first_name", type_=Type(code=TypeCode.STRING)), + StructType.Field(name="last_name", type_=Type(code=TypeCode.STRING)), + StructType.Field(name="age", type_=Type(code=TypeCode.INT64)), + ] + ) + transaction_pb = transaction_type.Transaction(id=self.TRANSACTION_ID) + metadata_pb = ResultSetMetadata( + row_type=struct_type_pb, transaction=transaction_pb + ) + stats_pb = ResultSetStats( + query_stats=Struct(fields={"rows_returned": _make_value_pb(2)}) + ) + result_sets = [ + PartialResultSet(metadata=metadata_pb), + PartialResultSet(stats=stats_pb), + ] + for i in range(len(result_sets)): + result_sets[i].values.extend(VALUE_PBS[i]) + iterator = _MockIterator(*result_sets) + api.execute_streaming_sql.return_value = iterator + 
transaction._execute_sql_request_count = sql_count + transaction._read_request_count = count + + result_set = transaction.execute_sql( + SQL_QUERY_WITH_PARAM, + PARAMS, + PARAM_TYPES, + query_mode=MODE, + query_options=query_options, + request_options=RequestOptions(), + partition=partition, + retry=RETRY, + timeout=TIMEOUT, + directed_read_options=directed_read_options, + ) + + self.assertEqual(transaction._read_request_count, count + 1) + + self.assertEqual(list(result_set), VALUES) + self.assertEqual(result_set.metadata, metadata_pb) + self.assertEqual(result_set.stats, stats_pb) + self.assertEqual(transaction._execute_sql_request_count, sql_count + 1) + + def _execute_sql_expected_request( + self, + database, + partition=None, + query_options=None, + begin=True, + sql_count=0, + transaction_tag=False, + directed_read_options=None, + ): + if begin is True: + expected_transaction = TransactionSelector( + begin=TransactionOptions(read_write=TransactionOptions.ReadWrite()) + ) + else: + expected_transaction = TransactionSelector(id=self.TRANSACTION_ID) + + expected_params = Struct( + fields={key: _make_value_pb(value) for (key, value) in PARAMS.items()} + ) + + expected_query_options = database._instance._client._query_options + if query_options: + expected_query_options = _merge_query_options( + expected_query_options, query_options + ) + + expected_request_options = RequestOptions() + + if transaction_tag is True: + expected_request_options.transaction_tag = self.TRANSACTION_TAG + else: + expected_request_options.transaction_tag = None + + expected_request = ExecuteSqlRequest( + session=self.SESSION_NAME, + sql=SQL_QUERY_WITH_PARAM, + transaction=expected_transaction, + params=expected_params, + param_types=PARAM_TYPES, + query_mode=MODE, + query_options=expected_query_options, + request_options=expected_request_options, + partition_token=partition, + seqno=sql_count, + directed_read_options=directed_read_options, + ) + + return expected_request + + def _read_helper( + self, + transaction, + api, + count=0, + partition=None, + directed_read_options=None, + ): + VALUES = [["bharney", 31], ["phred", 32]] + VALUE_PBS = [[_make_value_pb(item) for item in row] for row in VALUES] + struct_type_pb = StructType( + fields=[ + StructType.Field(name="name", type_=Type(code=TypeCode.STRING)), + StructType.Field(name="age", type_=Type(code=TypeCode.INT64)), + ] + ) + + transaction_pb = transaction_type.Transaction(id=self.TRANSACTION_ID) + metadata_pb = ResultSetMetadata( + row_type=struct_type_pb, transaction=transaction_pb + ) + + stats_pb = ResultSetStats( + query_stats=Struct(fields={"rows_returned": _make_value_pb(2)}) + ) + result_sets = [ + PartialResultSet(metadata=metadata_pb), + PartialResultSet(stats=stats_pb), + ] + for i in range(len(result_sets)): + result_sets[i].values.extend(VALUE_PBS[i]) + + api.streaming_read.return_value = _MockIterator(*result_sets) + transaction._read_request_count = count + + if partition is not None: # 'limit' and 'partition' incompatible + result_set = transaction.read( + TABLE_NAME, + COLUMNS, + KEYSET, + index=INDEX, + partition=partition, + retry=RETRY, + timeout=TIMEOUT, + request_options=RequestOptions(), + directed_read_options=directed_read_options, + ) + else: + result_set = transaction.read( + TABLE_NAME, + COLUMNS, + KEYSET, + index=INDEX, + limit=LIMIT, + retry=RETRY, + timeout=TIMEOUT, + request_options=RequestOptions(), + directed_read_options=directed_read_options, + ) + + self.assertEqual(transaction._read_request_count, count + 1) + + 
self.assertEqual(list(result_set), VALUES) + self.assertEqual(result_set.metadata, metadata_pb) + self.assertEqual(result_set.stats, stats_pb) + + def _read_helper_expected_request( + self, + partition=None, + begin=True, + count=0, + transaction_tag=False, + directed_read_options=None, + ): + if begin is True: + expected_transaction = TransactionSelector( + begin=TransactionOptions(read_write=TransactionOptions.ReadWrite()) + ) + else: + expected_transaction = TransactionSelector(id=self.TRANSACTION_ID) + + if partition is not None: + expected_limit = 0 + else: + expected_limit = LIMIT + + # Transaction tag is ignored for read request. + expected_request_options = RequestOptions() + + if transaction_tag is True: + expected_request_options.transaction_tag = self.TRANSACTION_TAG + else: + expected_request_options.transaction_tag = None + + expected_request = ReadRequest( + session=self.SESSION_NAME, + table=TABLE_NAME, + columns=COLUMNS, + key_set=KEYSET._to_pb(), + transaction=expected_transaction, + index=INDEX, + limit=expected_limit, + partition_token=partition, + request_options=expected_request_options, + directed_read_options=directed_read_options, + ) + + return expected_request + + def _batch_update_helper( + self, + transaction, + database, + api, + error_after=None, + count=0, + ): + from google.rpc.status_pb2 import Status + + stats_pbs = [ + ResultSetStats(row_count_exact=1), + ResultSetStats(row_count_exact=2), + ResultSetStats(row_count_exact=3), + ] + if error_after is not None: + stats_pbs = stats_pbs[:error_after] + expected_status = Status(code=400) + else: + expected_status = Status(code=200) + expected_row_counts = [stats.row_count_exact for stats in stats_pbs] + transaction_pb = transaction_type.Transaction(id=self.TRANSACTION_ID) + metadata_pb = ResultSetMetadata(transaction=transaction_pb) + result_sets_pb = [ + ResultSet(stats=stats_pb, metadata=metadata_pb) for stats_pb in stats_pbs + ] + + response = ExecuteBatchDmlResponse( + status=expected_status, + result_sets=result_sets_pb, + ) + + api.execute_batch_dml.return_value = response + transaction.transaction_tag = self.TRANSACTION_TAG + transaction._execute_sql_request_count = count + + status, row_counts = transaction.batch_update( + dml_statements, request_options=RequestOptions() + ) + + self.assertEqual(status, expected_status) + self.assertEqual(row_counts, expected_row_counts) + self.assertEqual(transaction._execute_sql_request_count, count + 1) + + def _batch_update_expected_request(self, begin=True, count=0): + if begin is True: + expected_transaction = TransactionSelector( + begin=TransactionOptions(read_write=TransactionOptions.ReadWrite()) + ) + else: + expected_transaction = TransactionSelector(id=self.TRANSACTION_ID) + + expected_insert_params = Struct( + fields={ + key: _make_value_pb(value) for (key, value) in insert_params.items() + } + ) + expected_statements = [ + ExecuteBatchDmlRequest.Statement( + sql=insert_dml, + params=expected_insert_params, + param_types=insert_param_types, + ), + ExecuteBatchDmlRequest.Statement(sql=update_dml), + ExecuteBatchDmlRequest.Statement(sql=delete_dml), + ] + + expected_request_options = RequestOptions() + expected_request_options.transaction_tag = self.TRANSACTION_TAG + + expected_request = ExecuteBatchDmlRequest( + session=self.SESSION_NAME, + transaction=expected_transaction, + statements=expected_statements, + seqno=count, + request_options=expected_request_options, + ) + + return expected_request + + def 
test_transaction_should_include_begin_with_first_update(self): + database = _Database() + session = _Session(database) + api = database.spanner_api = self._make_spanner_api() + transaction = self._make_one(session) + self._execute_update_helper(transaction=transaction, api=api) + + api.execute_sql.assert_called_once_with( + request=self._execute_update_expected_request(database=database), + retry=RETRY, + timeout=TIMEOUT, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.1.1.1", + ), + ], + ) + + def test_transaction_should_include_begin_with_first_query(self): + database = _Database() + session = _Session(database) + api = database.spanner_api = self._make_spanner_api() + transaction = self._make_one(session) + self._execute_sql_helper(transaction=transaction, api=api) + + api.execute_streaming_sql.assert_called_once_with( + request=self._execute_sql_expected_request(database=database), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + timeout=TIMEOUT, + retry=RETRY, + ) + + def test_transaction_should_include_begin_with_first_read(self): + database = _Database() + session = _Session(database) + api = database.spanner_api = self._make_spanner_api() + transaction = self._make_one(session) + self._read_helper(transaction=transaction, api=api) + + api.streaming_read.assert_called_once_with( + request=self._read_helper_expected_request(), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + retry=RETRY, + timeout=TIMEOUT, + ) + + def test_transaction_should_include_begin_with_first_batch_update(self): + database = _Database() + session = _Session(database) + api = database.spanner_api = self._make_spanner_api() + transaction = self._make_one(session) + self._batch_update_helper(transaction=transaction, database=database, api=api) + api.execute_batch_dml.assert_called_once_with( + request=self._batch_update_expected_request(), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + retry=RETRY, + timeout=TIMEOUT, + ) + + def test_transaction_should_include_begin_w_exclude_txn_from_change_streams_with_first_update( + self, + ): + database = _Database() + session = _Session(database) + api = database.spanner_api = self._make_spanner_api() + transaction = self._make_one(session) + self._execute_update_helper( + transaction=transaction, api=api, exclude_txn_from_change_streams=True + ) + + api.execute_sql.assert_called_once_with( + request=self._execute_update_expected_request( + database=database, exclude_txn_from_change_streams=True + ), + retry=RETRY, + timeout=TIMEOUT, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_transaction_should_include_begin_w_isolation_level_with_first_update( + self, + 
): + database = _Database() + session = _Session(database) + api = database.spanner_api = self._make_spanner_api() + transaction = self._make_one(session) + self._execute_update_helper( + transaction=transaction, + api=api, + isolation_level=TransactionOptions.IsolationLevel.REPEATABLE_READ, + ) + + api.execute_sql.assert_called_once_with( + request=self._execute_update_expected_request( + database=database, + isolation_level=TransactionOptions.IsolationLevel.REPEATABLE_READ, + ), + retry=RETRY, + timeout=TIMEOUT, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_transaction_should_include_begin_w_read_lock_mode_with_first_update( + self, + ): + database = _Database() + session = _Session(database) + api = database.spanner_api = self._make_spanner_api() + transaction = self._make_one(session) + self._execute_update_helper( + transaction=transaction, + api=api, + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.OPTIMISTIC, + ) + + api.execute_sql.assert_called_once_with( + request=self._execute_update_expected_request( + database=database, + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.OPTIMISTIC, + ), + retry=RETRY, + timeout=TIMEOUT, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_transaction_should_include_begin_w_isolation_level_and_read_lock_mode_with_first_update( + self, + ): + database = _Database() + session = _Session(database) + api = database.spanner_api = self._make_spanner_api() + transaction = self._make_one(session) + self._execute_update_helper( + transaction=transaction, + api=api, + isolation_level=TransactionOptions.IsolationLevel.REPEATABLE_READ, + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC, + ) + + api.execute_sql.assert_called_once_with( + request=self._execute_update_expected_request( + database=database, + isolation_level=TransactionOptions.IsolationLevel.REPEATABLE_READ, + read_lock_mode=TransactionOptions.ReadWrite.ReadLockMode.PESSIMISTIC, + ), + retry=RETRY, + timeout=TIMEOUT, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + def test_transaction_should_use_transaction_id_if_error_with_first_batch_update( + self, + ): + database = _Database() + session = _Session(database) + api = database.spanner_api = self._make_spanner_api() + transaction = self._make_one(session) + self._batch_update_helper( + transaction=transaction, database=database, api=api, error_after=2 + ) + api.execute_batch_dml.assert_called_once_with( + request=self._batch_update_expected_request(begin=True), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + retry=RETRY, + timeout=TIMEOUT, + ) + self._execute_update_helper(transaction=transaction, api=api) + api.execute_sql.assert_called_once_with( + request=self._execute_update_expected_request( + database=database, 
begin=False + ), + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], + ) + + def test_transaction_should_use_transaction_id_returned_by_first_query(self): + database = _Database() + session = _Session(database) + api = database.spanner_api = self._make_spanner_api() + transaction = self._make_one(session) + self._execute_sql_helper(transaction=transaction, api=api) + api.execute_streaming_sql.assert_called_once_with( + request=self._execute_sql_expected_request(database=database), + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + self._execute_update_helper(transaction=transaction, api=api) + api.execute_sql.assert_called_once_with( + request=self._execute_update_expected_request( + database=database, begin=False + ), + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], + ) + + def test_transaction_should_use_transaction_id_returned_by_first_update(self): + database = _Database() + session = _Session(database) + api = database.spanner_api = self._make_spanner_api() + transaction = self._make_one(session) + self._execute_update_helper(transaction=transaction, api=api) + api.execute_sql.assert_called_once_with( + request=self._execute_update_expected_request(database=database), + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + self._execute_sql_helper(transaction=transaction, api=api) + api.execute_streaming_sql.assert_called_once_with( + request=self._execute_sql_expected_request( + database=database, begin=False, transaction_tag=True + ), + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], + ) + + def test_transaction_execute_sql_w_directed_read_options(self): + database = _Database() + session = _Session(database) + api = database.spanner_api = self._make_spanner_api() + transaction = self._make_one(session) + + self._execute_sql_helper( + transaction=transaction, + api=api, + directed_read_options=DIRECTED_READ_OPTIONS, + ) + api.execute_streaming_sql.assert_called_once_with( + request=self._execute_sql_expected_request( + database=database, directed_read_options=DIRECTED_READ_OPTIONS + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], 
+ retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + ) + + def test_transaction_streaming_read_w_directed_read_options(self): + database = _Database() + session = _Session(database) + api = database.spanner_api = self._make_spanner_api() + transaction = self._make_one(session) + + self._read_helper( + transaction=transaction, + api=api, + directed_read_options=DIRECTED_READ_OPTIONS, + ) + api.streaming_read.assert_called_once_with( + request=self._read_helper_expected_request( + directed_read_options=DIRECTED_READ_OPTIONS + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + retry=RETRY, + timeout=TIMEOUT, + ) + + def test_transaction_should_use_transaction_id_returned_by_first_read(self): + database = _Database() + session = _Session(database) + api = database.spanner_api = self._make_spanner_api() + transaction = self._make_one(session) + self._read_helper(transaction=transaction, api=api) + api.streaming_read.assert_called_once_with( + request=self._read_helper_expected_request(), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + retry=RETRY, + timeout=TIMEOUT, + ) + + self._batch_update_helper(transaction=transaction, database=database, api=api) + api.execute_batch_dml.assert_called_once_with( + request=self._batch_update_expected_request(begin=False), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], + retry=RETRY, + timeout=TIMEOUT, + ) + + def test_transaction_should_use_transaction_id_returned_by_first_batch_update(self): + database = _Database() + api = database.spanner_api = self._make_spanner_api() + session = _Session(database) + transaction = self._make_one(session) + self._batch_update_helper(transaction=transaction, database=database, api=api) + api.execute_batch_dml.assert_called_once_with( + request=self._batch_update_expected_request(), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + retry=RETRY, + timeout=TIMEOUT, + ) + self._read_helper(transaction=transaction, api=api) + api.streaming_read.assert_called_once_with( + request=self._read_helper_expected_request( + begin=False, transaction_tag=True + ), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], + retry=RETRY, + timeout=TIMEOUT, + ) + + def test_transaction_for_concurrent_statement_should_begin_one_transaction_with_execute_update( + self, + ): + database = _Database() + api = database.spanner_api = self._make_spanner_api() + session = _Session(database) + transaction = self._make_one(session) + threads = [] + threads.append( + threading.Thread( + target=self._execute_update_helper, + kwargs={"transaction": transaction, "api": api}, + ) + ) + threads.append( + 
threading.Thread( + target=self._execute_update_helper, + kwargs={"transaction": transaction, "api": api}, + ) + ) + for thread in threads: + thread.start() + + for thread in threads: + thread.join() + + self._batch_update_helper(transaction=transaction, database=database, api=api) + + api.execute_sql.assert_any_call( + request=self._execute_update_expected_request(database), + retry=RETRY, + timeout=TIMEOUT, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + ) + + api.execute_sql.assert_any_call( + request=self._execute_update_expected_request(database, begin=False), + retry=RETRY, + timeout=TIMEOUT, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], + ) + + api.execute_batch_dml.assert_any_call( + request=self._batch_update_expected_request(begin=False), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.3.1", + ), + ], + retry=RETRY, + timeout=TIMEOUT, + ) + + self.assertEqual(api.execute_sql.call_count, 2) + self.assertEqual(api.execute_batch_dml.call_count, 1) + + def test_transaction_for_concurrent_statement_should_begin_one_transaction_with_batch_update( + self, + ): + database = _Database() + api = database.spanner_api = self._make_spanner_api() + session = _Session(database) + transaction = self._make_one(session) + threads = [] + threads.append( + threading.Thread( + target=self._batch_update_helper, + kwargs={"transaction": transaction, "database": database, "api": api}, + ) + ) + threads.append( + threading.Thread( + target=self._batch_update_helper, + kwargs={"transaction": transaction, "database": database, "api": api}, + ) + ) + for thread in threads: + thread.start() + + for thread in threads: + thread.join() + + self._execute_update_helper(transaction=transaction, api=api) + self.assertEqual(api.execute_sql.call_count, 1) + + api.execute_sql.assert_any_call( + request=self._execute_update_expected_request(database, begin=False), + retry=RETRY, + timeout=TIMEOUT, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.3.1", + ), + ], + ) + + self.assertEqual(api.execute_batch_dml.call_count, 2) + self.assertEqual( + api.execute_batch_dml.call_args_list, + [ + mock.call( + request=self._batch_update_expected_request(), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + retry=RETRY, + timeout=TIMEOUT, + ), + mock.call( + request=self._batch_update_expected_request(begin=False), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], + retry=RETRY, + timeout=TIMEOUT, + ), + ], + ) + + def 
test_transaction_for_concurrent_statement_should_begin_one_transaction_with_read( + self, + ): + database = _Database() + api = database.spanner_api = self._make_spanner_api() + session = _Session(database) + transaction = self._make_one(session) + threads = [] + threads.append( + threading.Thread( + target=self._read_helper, + kwargs={"transaction": transaction, "api": api}, + ) + ) + threads.append( + threading.Thread( + target=self._read_helper, + kwargs={"transaction": transaction, "api": api}, + ) + ) + for thread in threads: + thread.start() + + for thread in threads: + thread.join() + + self._execute_update_helper(transaction=transaction, api=api) + + begin_read_write_count = sum( + [1 for call in api.mock_calls if "read_write" in call.kwargs.__str__()] + ) + + self.assertEqual(begin_read_write_count, 1) + api.execute_sql.assert_any_call( + request=self._execute_update_expected_request(database, begin=False), + retry=RETRY, + timeout=TIMEOUT, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.1.3.1", + ), + ], + ) + + self.assertEqual( + api.streaming_read.call_args_list, + [ + mock.call( + request=self._read_helper_expected_request(), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + retry=RETRY, + timeout=TIMEOUT, + ), + mock.call( + request=self._read_helper_expected_request(begin=False), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], + retry=RETRY, + timeout=TIMEOUT, + ), + ], + ) + + self.assertEqual(api.execute_sql.call_count, 1) + self.assertEqual(api.streaming_read.call_count, 2) + + def test_transaction_for_concurrent_statement_should_begin_one_transaction_with_query( + self, + ): + database = _Database() + api = database.spanner_api = self._make_spanner_api() + session = _Session(database) + transaction = self._make_one(session) + threads = [] + threads.append( + threading.Thread( + target=self._execute_sql_helper, + kwargs={"transaction": transaction, "api": api}, + ) + ) + threads.append( + threading.Thread( + target=self._execute_sql_helper, + kwargs={"transaction": transaction, "api": api}, + ) + ) + for thread in threads: + thread.start() + + for thread in threads: + thread.join() + + self._execute_update_helper(transaction=transaction, api=api) + + begin_read_write_count = sum( + [1 for call in api.mock_calls if "read_write" in call.kwargs.__str__()] + ) + + self.assertEqual(begin_read_write_count, 1) + api.execute_sql.assert_any_call( + request=self._execute_update_expected_request(database, begin=False), + retry=RETRY, + timeout=TIMEOUT, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.1.3.1", + ), + ], + ) + + self.assertEqual( + api.execute_streaming_sql.call_args_list, + [ + mock.call( + request=self._execute_sql_expected_request(database), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + 
f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + retry=RETRY, + timeout=TIMEOUT, + ), + mock.call( + request=self._execute_sql_expected_request(database, begin=False), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.2.1", + ), + ], + retry=RETRY, + timeout=TIMEOUT, + ), + ], + ) + + self.assertEqual(api.execute_sql.call_count, 1) + self.assertEqual(api.execute_streaming_sql.call_count, 2) + + def test_transaction_should_execute_sql_with_route_to_leader_disabled(self): + database = _Database() + database._route_to_leader_enabled = False + session = _Session(database) + api = database.spanner_api = self._make_spanner_api() + transaction = self._make_one(session) + self._execute_sql_helper(transaction=transaction, api=api) + + api.execute_streaming_sql.assert_called_once_with( + request=self._execute_sql_expected_request(database=database), + metadata=[ + ("google-cloud-resource-prefix", database.name), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1", + ), + ], + timeout=TIMEOUT, + retry=RETRY, + ) + + +class _Client(object): + NTH_CLIENT = AtomicCounter() + + def __init__(self): + from google.cloud.spanner_v1 import ExecuteSqlRequest + + self._query_options = ExecuteSqlRequest.QueryOptions(optimizer_version="1") + self.directed_read_options = None + self.default_transaction_options = DefaultTransactionOptions() + self._nth_client_id = _Client.NTH_CLIENT.increment() + self._nth_request = AtomicCounter() + + @property + def _next_nth_request(self): + return self._nth_request.increment() + + +class _Instance(object): + def __init__(self): + self._client = _Client() + + +class _Database(object): + def __init__(self): + self.name = "testing" + self._instance = _Instance() + self._route_to_leader_enabled = True + self._directed_read_options = None + self.default_transaction_options = DefaultTransactionOptions() + + @property + def _next_nth_request(self): + return self._instance._client._next_nth_request + + @property + def _nth_client_id(self): + return self._instance._client._nth_client_id + + def metadata_with_request_id( + self, nth_request, nth_attempt, prior_metadata=[], span=None + ): + return _metadata_with_request_id( + self._nth_client_id, + self._channel_id, + nth_request, + nth_attempt, + prior_metadata, + span, + ) + + @property + def _channel_id(self): + return 1 + + +class _Session(object): + _transaction = None + + def __init__(self, database=None, name=TestTransaction.SESSION_NAME): + self._database = database + self.name = name + + @property + def session_id(self): + return self.name + + +class _MockIterator(object): + def __init__(self, *values, **kw): + self._iter_values = iter(values) + self._fail_after = kw.pop("fail_after", False) + self._error = kw.pop("error", Exception) + + def __iter__(self): + return self + + def __next__(self): + try: + return next(self._iter_values) + except StopIteration: + if self._fail_after: + raise self._error + raise + + next = __next__ diff --git a/tests/unit/test_spanner_metrics_tracer_factory.py b/tests/unit/test_spanner_metrics_tracer_factory.py new file mode 100644 index 0000000000..8ee4d53d3d --- /dev/null +++ b/tests/unit/test_spanner_metrics_tracer_factory.py @@ -0,0 +1,50 @@ +# -*- coding: utf-8 -*- +# Copyright 2025 Google LLC +# +# Licensed under the 
Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from google.cloud.spanner_v1.metrics.spanner_metrics_tracer_factory import ( + SpannerMetricsTracerFactory, +) + + +class TestSpannerMetricsTracerFactory: + def test_new_instance_creation(self): + factory1 = SpannerMetricsTracerFactory(enabled=True) + factory2 = SpannerMetricsTracerFactory(enabled=True) + assert factory1 is factory2 # Should return the same instance + + def test_generate_client_uid_format(self): + client_uid = SpannerMetricsTracerFactory._generate_client_uid() + assert isinstance(client_uid, str) + assert len(client_uid.split("@")) == 3 # Should contain uuid, pid, and hostname + + def test_generate_client_hash(self): + client_uid = "123e4567-e89b-12d3-a456-426614174000@1234@hostname" + client_hash = SpannerMetricsTracerFactory._generate_client_hash(client_uid) + assert isinstance(client_hash, str) + assert len(client_hash) == 6 # Should be a 6-digit hex string + + def test_get_instance_config(self): + instance_config = SpannerMetricsTracerFactory._get_instance_config() + assert instance_config == "unknown" # As per the current implementation + + def test_get_client_name(self): + client_name = SpannerMetricsTracerFactory._get_client_name() + assert isinstance(client_name, str) + assert "spanner-python" in client_name + + def test_get_location(self): + location = SpannerMetricsTracerFactory._get_location() + assert isinstance(location, str) + assert location # Simply asserting for non empty as this can change depending on the instance this test runs in. 
diff --git a/tests/unit/test_streamed.py b/tests/unit/test_streamed.py index 3f3a90108d..529bb0ef3f 100644 --- a/tests/unit/test_streamed.py +++ b/tests/unit/test_streamed.py @@ -31,7 +31,6 @@ def test_ctor_defaults(self): iterator = _MockCancellableIterator() streamed = self._make_one(iterator) self.assertIs(streamed._response_iterator, iterator) - self.assertIsNone(streamed._source) self.assertEqual(list(streamed), []) self.assertIsNone(streamed.metadata) self.assertIsNone(streamed.stats) @@ -41,7 +40,6 @@ def test_ctor_w_source(self): source = object() streamed = self._make_one(iterator, source=source) self.assertIs(streamed._response_iterator, iterator) - self.assertIs(streamed._source, source) self.assertEqual(list(streamed), []) self.assertIsNone(streamed.metadata) self.assertIsNone(streamed.stats) @@ -54,32 +52,34 @@ def test_fields_unset(self): @staticmethod def _make_scalar_field(name, type_): - from google.cloud.spanner_v1.proto.type_pb2 import StructType - from google.cloud.spanner_v1.proto.type_pb2 import Type + from google.cloud.spanner_v1 import StructType + from google.cloud.spanner_v1 import Type - return StructType.Field(name=name, type=Type(code=type_)) + return StructType.Field(name=name, type_=Type(code=type_)) @staticmethod def _make_array_field(name, element_type_code=None, element_type=None): - from google.cloud.spanner_v1.proto.type_pb2 import StructType - from google.cloud.spanner_v1.proto.type_pb2 import Type + from google.cloud.spanner_v1 import StructType + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode if element_type is None: element_type = Type(code=element_type_code) - array_type = Type(code="ARRAY", array_element_type=element_type) - return StructType.Field(name=name, type=array_type) + array_type = Type(code=TypeCode.ARRAY, array_element_type=element_type) + return StructType.Field(name=name, type_=array_type) @staticmethod def _make_struct_type(struct_type_fields): - from google.cloud.spanner_v1.proto.type_pb2 import StructType - from google.cloud.spanner_v1.proto.type_pb2 import Type + from google.cloud.spanner_v1 import StructType + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode fields = [ - StructType.Field(name=key, type=Type(code=value)) + StructType.Field(name=key, type_=Type(code=value)) for key, value in struct_type_fields ] struct_type = StructType(fields=fields) - return Type(code="STRUCT", struct_type=struct_type) + return Type(code=TypeCode.STRUCT, struct_type=struct_type) @staticmethod def _make_value(value): @@ -99,18 +99,19 @@ def _make_list_value(values=(), value_pbs=None): @staticmethod def _make_result_set_metadata(fields=(), transaction_id=None): - from google.cloud.spanner_v1.proto.result_set_pb2 import ResultSetMetadata + from google.cloud.spanner_v1 import ResultSetMetadata + from google.cloud.spanner_v1 import StructType - metadata = ResultSetMetadata() + metadata = ResultSetMetadata(row_type=StructType(fields=[])) for field in fields: - metadata.row_type.fields.add().CopyFrom(field) + metadata.row_type.fields.append(field) if transaction_id is not None: metadata.transaction.id = transaction_id return metadata @staticmethod def _make_result_set_stats(query_plan=None, **kw): - from google.cloud.spanner_v1.proto.result_set_pb2 import ResultSetStats + from google.cloud.spanner_v1 import ResultSetStats from google.protobuf.struct_pb2 import Struct from google.cloud.spanner_v1._helpers import _make_value_pb @@ -121,44 +122,99 @@ def 
_make_result_set_stats(query_plan=None, **kw): @staticmethod def _make_partial_result_set( - values, metadata=None, stats=None, chunked_value=False + values, metadata=None, stats=None, chunked_value=False, last=False ): - from google.cloud.spanner_v1.proto.result_set_pb2 import PartialResultSet + from google.cloud.spanner_v1 import PartialResultSet - return PartialResultSet( - values=values, metadata=metadata, stats=stats, chunked_value=chunked_value + results = PartialResultSet( + metadata=metadata, stats=stats, chunked_value=chunked_value, last=last ) + for v in values: + results.values.append(v) + return results def test_properties_set(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) FIELDS = [ - self._make_scalar_field("full_name", "STRING"), - self._make_scalar_field("age", "INT64"), + self._make_scalar_field("full_name", TypeCode.STRING), + self._make_scalar_field("age", TypeCode.INT64), ] metadata = streamed._metadata = self._make_result_set_metadata(FIELDS) stats = streamed._stats = self._make_result_set_stats() self.assertEqual(list(streamed.fields), FIELDS) - self.assertIs(streamed.metadata, metadata) + self.assertIs(streamed.metadata._pb, metadata) self.assertIs(streamed.stats, stats) def test__merge_chunk_bool(self): from google.cloud.spanner_v1.streamed import Unmergeable + from google.cloud.spanner_v1 import TypeCode iterator = _MockCancellableIterator() streamed = self._make_one(iterator) - FIELDS = [self._make_scalar_field("registered_voter", "BOOL")] + FIELDS = [self._make_scalar_field("registered_voter", TypeCode.BOOL)] streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = self._make_value(True) - chunk = self._make_value(False) + streamed._pending_chunk = True + chunk = False with self.assertRaises(Unmergeable): streamed._merge_chunk(chunk) + def test__PartialResultSetWithLastFlag(self): + from google.cloud.spanner_v1 import TypeCode + + fields = [ + self._make_scalar_field("ID", TypeCode.INT64), + self._make_scalar_field("NAME", TypeCode.STRING), + ] + for length in range(4, 6): + metadata = self._make_result_set_metadata(fields) + result_sets = [ + self._make_partial_result_set( + [self._make_value(0), "google_0"], metadata=metadata + ) + ] + for i in range(1, 5): + bares = [i] + values = [ + [self._make_value(bare), "google_" + str(bare)] for bare in bares + ] + result_sets.append( + self._make_partial_result_set( + *values, metadata=metadata, last=(i == length - 1) + ) + ) + + iterator = _MockCancellableIterator(*result_sets) + streamed = self._make_one(iterator) + count = 0 + for row in streamed: + self.assertEqual(row[0], count) + self.assertEqual(row[1], "google_" + str(count)) + count += 1 + self.assertEqual(count, length) + + def test__merge_chunk_numeric(self): + from google.cloud.spanner_v1 import TypeCode + + iterator = _MockCancellableIterator() + streamed = self._make_one(iterator) + FIELDS = [self._make_scalar_field("total", TypeCode.NUMERIC)] + streamed._metadata = self._make_result_set_metadata(FIELDS) + streamed._pending_chunk = self._make_value("1234.") + chunk = self._make_value("5678") + + merged = streamed._merge_chunk(chunk) + self.assertEqual(merged.string_value, "1234.5678") + def test__merge_chunk_int64(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) - FIELDS = [self._make_scalar_field("age", "INT64")] + FIELDS = [self._make_scalar_field("age", 
TypeCode.INT64)] streamed._metadata = self._make_result_set_metadata(FIELDS) streamed._pending_chunk = self._make_value(42) chunk = self._make_value(13) @@ -168,20 +224,24 @@ def test__merge_chunk_int64(self): self.assertIsNone(streamed._pending_chunk) def test__merge_chunk_float64_nan_string(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) - FIELDS = [self._make_scalar_field("weight", "FLOAT64")] + FIELDS = [self._make_scalar_field("weight", TypeCode.FLOAT64)] streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = self._make_value(u"Na") - chunk = self._make_value(u"N") + streamed._pending_chunk = self._make_value("Na") + chunk = self._make_value("N") merged = streamed._merge_chunk(chunk) - self.assertEqual(merged.string_value, u"NaN") + self.assertEqual(merged.string_value, "NaN") def test__merge_chunk_float64_w_empty(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) - FIELDS = [self._make_scalar_field("weight", "FLOAT64")] + FIELDS = [self._make_scalar_field("weight", TypeCode.FLOAT64)] streamed._metadata = self._make_result_set_metadata(FIELDS) streamed._pending_chunk = self._make_value(3.14159) chunk = self._make_value("") @@ -191,10 +251,11 @@ def test__merge_chunk_float64_w_empty(self): def test__merge_chunk_float64_w_float64(self): from google.cloud.spanner_v1.streamed import Unmergeable + from google.cloud.spanner_v1 import TypeCode iterator = _MockCancellableIterator() streamed = self._make_one(iterator) - FIELDS = [self._make_scalar_field("weight", "FLOAT64")] + FIELDS = [self._make_scalar_field("weight", TypeCode.FLOAT64)] streamed._metadata = self._make_result_set_metadata(FIELDS) streamed._pending_chunk = self._make_value(3.14159) chunk = self._make_value(2.71828) @@ -203,46 +264,92 @@ def test__merge_chunk_float64_w_float64(self): streamed._merge_chunk(chunk) def test__merge_chunk_string(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) - FIELDS = [self._make_scalar_field("name", "STRING")] + FIELDS = [self._make_scalar_field("name", TypeCode.STRING)] streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = self._make_value(u"phred") - chunk = self._make_value(u"wylma") + streamed._pending_chunk = self._make_value("phred") + chunk = self._make_value("wylma") merged = streamed._merge_chunk(chunk) - self.assertEqual(merged.string_value, u"phredwylma") + self.assertEqual(merged.string_value, "phredwylma") self.assertIsNone(streamed._pending_chunk) def test__merge_chunk_string_w_bytes(self): + from google.cloud.spanner_v1 import TypeCode + + iterator = _MockCancellableIterator() + streamed = self._make_one(iterator) + FIELDS = [self._make_scalar_field("image", TypeCode.BYTES)] + streamed._metadata = self._make_result_set_metadata(FIELDS) + streamed._pending_chunk = self._make_value( + "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAAAAAA" + "6fptVAAAACXBIWXMAAAsTAAALEwEAmpwYAAAA\n" + ) + chunk = self._make_value( + "B3RJTUUH4QQGFwsBTL3HMwAAABJpVFh0Q29tbWVudAAAAAAAU0FNUExF" + "MG3E+AAAAApJREFUCNdj\nYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\n" + ) + + merged = streamed._merge_chunk(chunk) + + self.assertEqual( + merged.string_value, + "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAAAAAA6fptVAAAACXBIWXMAAAsTAAAL" + "EwEAmpwYAAAA\nB3RJTUUH4QQGFwsBTL3HMwAAABJpVFh0Q29tbWVudAAAAAAAU0" + 
"FNUExFMG3E+AAAAApJREFUCNdj\nYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\n", + ) + self.assertIsNone(streamed._pending_chunk) + + def test__merge_chunk_proto(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) - FIELDS = [self._make_scalar_field("image", "BYTES")] + FIELDS = [self._make_scalar_field("proto", TypeCode.PROTO)] streamed._metadata = self._make_result_set_metadata(FIELDS) streamed._pending_chunk = self._make_value( - u"iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAAAAAA" - u"6fptVAAAACXBIWXMAAAsTAAALEwEAmpwYAAAA\n" + "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAAAAAA" + "6fptVAAAACXBIWXMAAAsTAAALEwEAmpwYAAAA\n" ) chunk = self._make_value( - u"B3RJTUUH4QQGFwsBTL3HMwAAABJpVFh0Q29tbWVudAAAAAAAU0FNUExF" - u"MG3E+AAAAApJREFUCNdj\nYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\n" + "B3RJTUUH4QQGFwsBTL3HMwAAABJpVFh0Q29tbWVudAAAAAAAU0FNUExF" + "MG3E+AAAAApJREFUCNdj\nYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\n" ) merged = streamed._merge_chunk(chunk) self.assertEqual( merged.string_value, - u"iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAAAAAA6fptVAAAACXBIWXMAAAsTAAAL" - u"EwEAmpwYAAAA\nB3RJTUUH4QQGFwsBTL3HMwAAABJpVFh0Q29tbWVudAAAAAAAU0" - u"FNUExFMG3E+AAAAApJREFUCNdj\nYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\n", + "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAAAAAA6fptVAAAACXBIWXMAAAsTAAAL" + "EwEAmpwYAAAA\nB3RJTUUH4QQGFwsBTL3HMwAAABJpVFh0Q29tbWVudAAAAAAAU0" + "FNUExFMG3E+AAAAApJREFUCNdj\nYAAAAAIAAeIhvDMAAAAASUVORK5CYII=\n", ) self.assertIsNone(streamed._pending_chunk) + def test__merge_chunk_enum(self): + from google.cloud.spanner_v1 import TypeCode + + iterator = _MockCancellableIterator() + streamed = self._make_one(iterator) + FIELDS = [self._make_scalar_field("age", TypeCode.ENUM)] + streamed._metadata = self._make_result_set_metadata(FIELDS) + streamed._pending_chunk = self._make_value(42) + chunk = self._make_value(13) + + merged = streamed._merge_chunk(chunk) + self.assertEqual(merged.string_value, "4213") + self.assertIsNone(streamed._pending_chunk) + def test__merge_chunk_array_of_bool(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) - FIELDS = [self._make_array_field("name", element_type_code="BOOL")] + FIELDS = [self._make_array_field("name", element_type_code=TypeCode.BOOL)] streamed._metadata = self._make_result_set_metadata(FIELDS) streamed._pending_chunk = self._make_list_value([True, True]) chunk = self._make_list_value([False, False, False]) @@ -254,9 +361,11 @@ def test__merge_chunk_array_of_bool(self): self.assertIsNone(streamed._pending_chunk) def test__merge_chunk_array_of_int(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) - FIELDS = [self._make_array_field("name", element_type_code="INT64")] + FIELDS = [self._make_array_field("name", element_type_code=TypeCode.INT64)] streamed._metadata = self._make_result_set_metadata(FIELDS) streamed._pending_chunk = self._make_list_value([0, 1, 2]) chunk = self._make_list_value([3, 4, 5]) @@ -268,6 +377,7 @@ def test__merge_chunk_array_of_int(self): self.assertIsNone(streamed._pending_chunk) def test__merge_chunk_array_of_float(self): + from google.cloud.spanner_v1 import TypeCode import math PI = math.pi @@ -276,7 +386,7 @@ def test__merge_chunk_array_of_float(self): LOG_10 = math.log(10) iterator = _MockCancellableIterator() streamed = self._make_one(iterator) - FIELDS = [self._make_array_field("name", element_type_code="FLOAT64")] + FIELDS = 
[self._make_array_field("name", element_type_code=TypeCode.FLOAT64)] streamed._metadata = self._make_result_set_metadata(FIELDS) streamed._pending_chunk = self._make_list_value([PI, SQRT_2]) chunk = self._make_list_value(["", EULER, LOG_10]) @@ -288,56 +398,79 @@ def test__merge_chunk_array_of_float(self): self.assertIsNone(streamed._pending_chunk) def test__merge_chunk_array_of_string_with_empty(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) - FIELDS = [self._make_array_field("name", element_type_code="STRING")] + FIELDS = [self._make_array_field("name", element_type_code=TypeCode.STRING)] streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = self._make_list_value([u"A", u"B", u"C"]) + streamed._pending_chunk = self._make_list_value(["A", "B", "C"]) chunk = self._make_list_value([]) merged = streamed._merge_chunk(chunk) - expected = self._make_list_value([u"A", u"B", u"C"]) + expected = self._make_list_value(["A", "B", "C"]) self.assertEqual(merged, expected) self.assertIsNone(streamed._pending_chunk) def test__merge_chunk_array_of_string(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) - FIELDS = [self._make_array_field("name", element_type_code="STRING")] + FIELDS = [self._make_array_field("name", element_type_code=TypeCode.STRING)] streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = self._make_list_value([u"A", u"B", u"C"]) - chunk = self._make_list_value([None, u"D", u"E"]) + streamed._pending_chunk = self._make_list_value(["A", "B", "C"]) + chunk = self._make_list_value(["D", "E"]) merged = streamed._merge_chunk(chunk) - expected = self._make_list_value([u"A", u"B", u"C", None, u"D", u"E"]) + expected = self._make_list_value(["A", "B", "CD", "E"]) self.assertEqual(merged, expected) self.assertIsNone(streamed._pending_chunk) def test__merge_chunk_array_of_string_with_null(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) - FIELDS = [self._make_array_field("name", element_type_code="STRING")] + FIELDS = [self._make_array_field("name", element_type_code=TypeCode.STRING)] streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = self._make_list_value([u"A", u"B", u"C"]) - chunk = self._make_list_value([u"D", u"E"]) + streamed._pending_chunk = self._make_list_value(["A", "B", "C"]) + chunk = self._make_list_value([None, "D", "E"]) merged = streamed._merge_chunk(chunk) - expected = self._make_list_value([u"A", u"B", u"CD", u"E"]) + expected = self._make_list_value(["A", "B", "C", None, "D", "E"]) + self.assertEqual(merged, expected) + self.assertIsNone(streamed._pending_chunk) + + def test__merge_chunk_array_of_string_with_null_pending(self): + from google.cloud.spanner_v1 import TypeCode + + iterator = _MockCancellableIterator() + streamed = self._make_one(iterator) + FIELDS = [self._make_array_field("name", element_type_code=TypeCode.STRING)] + streamed._metadata = self._make_result_set_metadata(FIELDS) + streamed._pending_chunk = self._make_list_value(["A", "B", "C", None]) + chunk = self._make_list_value(["D", "E"]) + merged = streamed._merge_chunk(chunk) + expected = self._make_list_value(["A", "B", "C", None, "D", "E"]) self.assertEqual(merged, expected) self.assertIsNone(streamed._pending_chunk) def test__merge_chunk_array_of_array_of_int(self): 
- from google.cloud.spanner_v1.proto.type_pb2 import StructType - from google.cloud.spanner_v1.proto.type_pb2 import Type + from google.cloud.spanner_v1 import StructType + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode - subarray_type = Type(code="ARRAY", array_element_type=Type(code="INT64")) - array_type = Type(code="ARRAY", array_element_type=subarray_type) + subarray_type = Type( + code=TypeCode.ARRAY, array_element_type=Type(code=TypeCode.INT64) + ) + array_type = Type(code=TypeCode.ARRAY, array_element_type=subarray_type) iterator = _MockCancellableIterator() streamed = self._make_one(iterator) - FIELDS = [StructType.Field(name="loloi", type=array_type)] + FIELDS = [StructType.Field(name="loloi", type_=array_type)] streamed._metadata = self._make_result_set_metadata(FIELDS) streamed._pending_chunk = self._make_list_value( value_pbs=[self._make_list_value([0, 1]), self._make_list_value([2])] @@ -359,25 +492,28 @@ def test__merge_chunk_array_of_array_of_int(self): self.assertIsNone(streamed._pending_chunk) def test__merge_chunk_array_of_array_of_string(self): - from google.cloud.spanner_v1.proto.type_pb2 import StructType - from google.cloud.spanner_v1.proto.type_pb2 import Type + from google.cloud.spanner_v1 import StructType + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode - subarray_type = Type(code="ARRAY", array_element_type=Type(code="STRING")) - array_type = Type(code="ARRAY", array_element_type=subarray_type) + subarray_type = Type( + code=TypeCode.ARRAY, array_element_type=Type(code=TypeCode.STRING) + ) + array_type = Type(code=TypeCode.ARRAY, array_element_type=subarray_type) iterator = _MockCancellableIterator() streamed = self._make_one(iterator) - FIELDS = [StructType.Field(name="lolos", type=array_type)] + FIELDS = [StructType.Field(name="lolos", type_=array_type)] streamed._metadata = self._make_result_set_metadata(FIELDS) streamed._pending_chunk = self._make_list_value( value_pbs=[ - self._make_list_value([u"A", u"B"]), - self._make_list_value([u"C"]), + self._make_list_value(["A", "B"]), + self._make_list_value(["C"]), ] ) chunk = self._make_list_value( value_pbs=[ - self._make_list_value([u"D"]), - self._make_list_value([u"E", u"F"]), + self._make_list_value(["D"]), + self._make_list_value(["E", "F"]), ] ) @@ -385,39 +521,47 @@ def test__merge_chunk_array_of_array_of_string(self): expected = self._make_list_value( value_pbs=[ - self._make_list_value([u"A", u"B"]), - self._make_list_value([u"CD"]), - self._make_list_value([u"E", u"F"]), + self._make_list_value(["A", "B"]), + self._make_list_value(["CD"]), + self._make_list_value(["E", "F"]), ] ) self.assertEqual(merged, expected) self.assertIsNone(streamed._pending_chunk) def test__merge_chunk_array_of_struct(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) - struct_type = self._make_struct_type([("name", "STRING"), ("age", "INT64")]) + struct_type = self._make_struct_type( + [("name", TypeCode.STRING), ("age", TypeCode.INT64)] + ) FIELDS = [self._make_array_field("test", element_type=struct_type)] streamed._metadata = self._make_result_set_metadata(FIELDS) - partial = self._make_list_value([u"Phred "]) + partial = self._make_list_value(["Phred "]) streamed._pending_chunk = self._make_list_value(value_pbs=[partial]) - rest = self._make_list_value([u"Phlyntstone", 31]) + rest = self._make_list_value(["Phlyntstone", 31]) chunk = 
self._make_list_value(value_pbs=[rest]) merged = streamed._merge_chunk(chunk) - struct = self._make_list_value([u"Phred Phlyntstone", 31]) + struct = self._make_list_value(["Phred Phlyntstone", 31]) expected = self._make_list_value(value_pbs=[struct]) self.assertEqual(merged, expected) self.assertIsNone(streamed._pending_chunk) def test__merge_chunk_array_of_struct_with_empty(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) - struct_type = self._make_struct_type([("name", "STRING"), ("age", "INT64")]) + struct_type = self._make_struct_type( + [("name", TypeCode.STRING), ("age", TypeCode.INT64)] + ) FIELDS = [self._make_array_field("test", element_type=struct_type)] streamed._metadata = self._make_result_set_metadata(FIELDS) - partial = self._make_list_value([u"Phred "]) + partial = self._make_list_value(["Phred "]) streamed._pending_chunk = self._make_list_value(value_pbs=[partial]) rest = self._make_list_value([]) chunk = self._make_list_value(value_pbs=[rest]) @@ -429,32 +573,60 @@ def test__merge_chunk_array_of_struct_with_empty(self): self.assertIsNone(streamed._pending_chunk) def test__merge_chunk_array_of_struct_unmergeable(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) struct_type = self._make_struct_type( - [("name", "STRING"), ("registered", "BOOL"), ("voted", "BOOL")] + [ + ("name", TypeCode.STRING), + ("registered", TypeCode.BOOL), + ("voted", TypeCode.BOOL), + ] ) FIELDS = [self._make_array_field("test", element_type=struct_type)] streamed._metadata = self._make_result_set_metadata(FIELDS) - partial = self._make_list_value([u"Phred Phlyntstone", True]) + partial = self._make_list_value(["Phred Phlyntstone", True]) streamed._pending_chunk = self._make_list_value(value_pbs=[partial]) rest = self._make_list_value([True]) chunk = self._make_list_value(value_pbs=[rest]) merged = streamed._merge_chunk(chunk) - struct = self._make_list_value([u"Phred Phlyntstone", True, True]) + struct = self._make_list_value(["Phred Phlyntstone", True, True]) + expected = self._make_list_value(value_pbs=[struct]) + self.assertEqual(merged, expected) + self.assertIsNone(streamed._pending_chunk) + + def test__merge_chunk_array_of_struct_unmergeable_split(self): + iterator = _MockCancellableIterator() + streamed = self._make_one(iterator) + struct_type = self._make_struct_type( + [("name", "STRING"), ("height", "FLOAT64"), ("eye_color", "STRING")] + ) + FIELDS = [self._make_array_field("test", element_type=struct_type)] + streamed._metadata = self._make_result_set_metadata(FIELDS) + partial = self._make_list_value(["Phred Phlyntstone", 1.65]) + streamed._pending_chunk = self._make_list_value(value_pbs=[partial]) + rest = self._make_list_value(["brown"]) + chunk = self._make_list_value(value_pbs=[rest]) + + merged = streamed._merge_chunk(chunk) + + struct = self._make_list_value(["Phred Phlyntstone", 1.65, "brown"]) expected = self._make_list_value(value_pbs=[struct]) self.assertEqual(merged, expected) self.assertIsNone(streamed._pending_chunk) def test_merge_values_empty_and_empty(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) FIELDS = [ - self._make_scalar_field("full_name", "STRING"), - self._make_scalar_field("age", "INT64"), - self._make_scalar_field("married", "BOOL"), + self._make_scalar_field("full_name", TypeCode.STRING), + 
self._make_scalar_field("age", TypeCode.INT64), + self._make_scalar_field("married", TypeCode.BOOL), ] streamed._metadata = self._make_result_set_metadata(FIELDS) streamed._current_row = [] @@ -463,15 +635,17 @@ def test_merge_values_empty_and_empty(self): self.assertEqual(streamed._current_row, []) def test_merge_values_empty_and_partial(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) FIELDS = [ - self._make_scalar_field("full_name", "STRING"), - self._make_scalar_field("age", "INT64"), - self._make_scalar_field("married", "BOOL"), + self._make_scalar_field("full_name", TypeCode.STRING), + self._make_scalar_field("age", TypeCode.INT64), + self._make_scalar_field("married", TypeCode.BOOL), ] streamed._metadata = self._make_result_set_metadata(FIELDS) - BARE = [u"Phred Phlyntstone", 42] + BARE = ["Phred Phlyntstone", 42] VALUES = [self._make_value(bare) for bare in BARE] streamed._current_row = [] streamed._merge_values(VALUES) @@ -479,15 +653,17 @@ def test_merge_values_empty_and_partial(self): self.assertEqual(streamed._current_row, BARE) def test_merge_values_empty_and_filled(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) FIELDS = [ - self._make_scalar_field("full_name", "STRING"), - self._make_scalar_field("age", "INT64"), - self._make_scalar_field("married", "BOOL"), + self._make_scalar_field("full_name", TypeCode.STRING), + self._make_scalar_field("age", TypeCode.INT64), + self._make_scalar_field("married", TypeCode.BOOL), ] streamed._metadata = self._make_result_set_metadata(FIELDS) - BARE = [u"Phred Phlyntstone", 42, True] + BARE = ["Phred Phlyntstone", 42, True] VALUES = [self._make_value(bare) for bare in BARE] streamed._current_row = [] streamed._merge_values(VALUES) @@ -495,22 +671,24 @@ def test_merge_values_empty_and_filled(self): self.assertEqual(streamed._current_row, []) def test_merge_values_empty_and_filled_plus(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) FIELDS = [ - self._make_scalar_field("full_name", "STRING"), - self._make_scalar_field("age", "INT64"), - self._make_scalar_field("married", "BOOL"), + self._make_scalar_field("full_name", TypeCode.STRING), + self._make_scalar_field("age", TypeCode.INT64), + self._make_scalar_field("married", TypeCode.BOOL), ] streamed._metadata = self._make_result_set_metadata(FIELDS) BARE = [ - u"Phred Phlyntstone", + "Phred Phlyntstone", 42, True, - u"Bharney Rhubble", + "Bharney Rhubble", 39, True, - u"Wylma Phlyntstone", + "Wylma Phlyntstone", ] VALUES = [self._make_value(bare) for bare in BARE] streamed._current_row = [] @@ -519,30 +697,34 @@ def test_merge_values_empty_and_filled_plus(self): self.assertEqual(streamed._current_row, BARE[6:]) def test_merge_values_partial_and_empty(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) FIELDS = [ - self._make_scalar_field("full_name", "STRING"), - self._make_scalar_field("age", "INT64"), - self._make_scalar_field("married", "BOOL"), + self._make_scalar_field("full_name", TypeCode.STRING), + self._make_scalar_field("age", TypeCode.INT64), + self._make_scalar_field("married", TypeCode.BOOL), ] streamed._metadata = self._make_result_set_metadata(FIELDS) - BEFORE = [u"Phred Phlyntstone"] + BEFORE = ["Phred Phlyntstone"] streamed._current_row[:] = BEFORE 
streamed._merge_values([]) self.assertEqual(list(streamed), []) self.assertEqual(streamed._current_row, BEFORE) def test_merge_values_partial_and_partial(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) FIELDS = [ - self._make_scalar_field("full_name", "STRING"), - self._make_scalar_field("age", "INT64"), - self._make_scalar_field("married", "BOOL"), + self._make_scalar_field("full_name", TypeCode.STRING), + self._make_scalar_field("age", TypeCode.INT64), + self._make_scalar_field("married", TypeCode.BOOL), ] streamed._metadata = self._make_result_set_metadata(FIELDS) - BEFORE = [u"Phred Phlyntstone"] + BEFORE = ["Phred Phlyntstone"] streamed._current_row[:] = BEFORE MERGED = [42] TO_MERGE = [self._make_value(item) for item in MERGED] @@ -551,15 +733,17 @@ def test_merge_values_partial_and_partial(self): self.assertEqual(streamed._current_row, BEFORE + MERGED) def test_merge_values_partial_and_filled(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) FIELDS = [ - self._make_scalar_field("full_name", "STRING"), - self._make_scalar_field("age", "INT64"), - self._make_scalar_field("married", "BOOL"), + self._make_scalar_field("full_name", TypeCode.STRING), + self._make_scalar_field("age", TypeCode.INT64), + self._make_scalar_field("married", TypeCode.BOOL), ] streamed._metadata = self._make_result_set_metadata(FIELDS) - BEFORE = [u"Phred Phlyntstone"] + BEFORE = ["Phred Phlyntstone"] streamed._current_row[:] = BEFORE MERGED = [42, True] TO_MERGE = [self._make_value(item) for item in MERGED] @@ -568,17 +752,19 @@ def test_merge_values_partial_and_filled(self): self.assertEqual(streamed._current_row, []) def test_merge_values_partial_and_filled_plus(self): + from google.cloud.spanner_v1 import TypeCode + iterator = _MockCancellableIterator() streamed = self._make_one(iterator) FIELDS = [ - self._make_scalar_field("full_name", "STRING"), - self._make_scalar_field("age", "INT64"), - self._make_scalar_field("married", "BOOL"), + self._make_scalar_field("full_name", TypeCode.STRING), + self._make_scalar_field("age", TypeCode.INT64), + self._make_scalar_field("married", TypeCode.BOOL), ] streamed._metadata = self._make_result_set_metadata(FIELDS) - BEFORE = [self._make_value(u"Phred Phlyntstone")] + BEFORE = [self._make_value("Phred Phlyntstone")] streamed._current_row[:] = BEFORE - MERGED = [42, True, u"Bharney Rhubble", 39, True, u"Wylma Phlyntstone"] + MERGED = [42, True, "Bharney Rhubble", 39, True, "Wylma Phlyntstone"] TO_MERGE = [self._make_value(item) for item in MERGED] VALUES = BEFORE + MERGED streamed._merge_values(TO_MERGE) @@ -634,14 +820,16 @@ def test_consume_next_empty(self): streamed._consume_next() def test_consume_next_first_set_partial(self): + from google.cloud.spanner_v1 import TypeCode + TXN_ID = b"DEADBEEF" FIELDS = [ - self._make_scalar_field("full_name", "STRING"), - self._make_scalar_field("age", "INT64"), - self._make_scalar_field("married", "BOOL"), + self._make_scalar_field("full_name", TypeCode.STRING), + self._make_scalar_field("age", TypeCode.INT64), + self._make_scalar_field("married", TypeCode.BOOL), ] metadata = self._make_result_set_metadata(FIELDS, transaction_id=TXN_ID) - BARE = [u"Phred Phlyntstone", 42] + BARE = ["Phred Phlyntstone", 42] VALUES = [self._make_value(bare) for bare in BARE] result_set = self._make_partial_result_set(VALUES, metadata=metadata) iterator = _MockCancellableIterator(result_set) 
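The ``_merge_values`` tests above all exercise the same row-assembly rule: streamed
values arrive without explicit row boundaries, so a row is emitted once one value per
metadata field has accumulated, and any remainder stays in ``_current_row`` until the
next chunk. A minimal sketch of that rule (illustrative only; the free-standing
``merge_values`` helper below is hypothetical, not the library's implementation):

.. code-block:: python

    def merge_values(current_row, complete_rows, values, n_fields):
        # Streamed values carry no row boundaries: append each one and emit
        # a row whenever one value per metadata field has accumulated.
        for value in values:
            current_row.append(value)
            if len(current_row) == n_fields:
                complete_rows.append(list(current_row))
                del current_row[:]  # reuse the same list, as _current_row does

    rows, current = [], []
    # Three fields per row (full_name, age, married); seven values arrive.
    merge_values(
        current,
        rows,
        ["Phred Phlyntstone", 42, True, "Bharney Rhubble", 39, True, "Wylma Phlyntstone"],
        3,
    )
    assert rows == [["Phred Phlyntstone", 42, True], ["Bharney Rhubble", 39, True]]
    assert current == ["Wylma Phlyntstone"]  # partial row waits for the next chunk

This is the invariant that ``test_merge_values_empty_and_filled_plus`` asserts via
``BARE[6:]``.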
@@ -651,17 +839,18 @@ def test_consume_next_first_set_partial(self): self.assertEqual(list(streamed), []) self.assertEqual(streamed._current_row, BARE) self.assertEqual(streamed.metadata, metadata) - self.assertEqual(source._transaction_id, TXN_ID) def test_consume_next_first_set_partial_existing_txn_id(self): + from google.cloud.spanner_v1 import TypeCode + TXN_ID = b"DEADBEEF" FIELDS = [ - self._make_scalar_field("full_name", "STRING"), - self._make_scalar_field("age", "INT64"), - self._make_scalar_field("married", "BOOL"), + self._make_scalar_field("full_name", TypeCode.STRING), + self._make_scalar_field("age", TypeCode.INT64), + self._make_scalar_field("married", TypeCode.BOOL), ] metadata = self._make_result_set_metadata(FIELDS, transaction_id=b"") - BARE = [u"Phred Phlyntstone", 42] + BARE = ["Phred Phlyntstone", 42] VALUES = [self._make_value(bare) for bare in BARE] result_set = self._make_partial_result_set(VALUES, metadata=metadata) iterator = _MockCancellableIterator(result_set) @@ -674,12 +863,14 @@ def test_consume_next_first_set_partial_existing_txn_id(self): self.assertEqual(source._transaction_id, TXN_ID) def test_consume_next_w_partial_result(self): + from google.cloud.spanner_v1 import TypeCode + FIELDS = [ - self._make_scalar_field("full_name", "STRING"), - self._make_scalar_field("age", "INT64"), - self._make_scalar_field("married", "BOOL"), + self._make_scalar_field("full_name", TypeCode.STRING), + self._make_scalar_field("age", TypeCode.INT64), + self._make_scalar_field("married", TypeCode.BOOL), ] - VALUES = [self._make_value(u"Phred ")] + VALUES = [self._make_value("Phred ")] result_set = self._make_partial_result_set(VALUES, chunked_value=True) iterator = _MockCancellableIterator(result_set) streamed = self._make_one(iterator) @@ -690,45 +881,49 @@ def test_consume_next_w_partial_result(self): self.assertEqual(streamed._pending_chunk, VALUES[0]) def test_consume_next_w_pending_chunk(self): + from google.cloud.spanner_v1 import TypeCode + FIELDS = [ - self._make_scalar_field("full_name", "STRING"), - self._make_scalar_field("age", "INT64"), - self._make_scalar_field("married", "BOOL"), + self._make_scalar_field("full_name", TypeCode.STRING), + self._make_scalar_field("age", TypeCode.INT64), + self._make_scalar_field("married", TypeCode.BOOL), ] BARE = [ - u"Phlyntstone", + "Phlyntstone", 42, True, - u"Bharney Rhubble", + "Bharney Rhubble", 39, True, - u"Wylma Phlyntstone", + "Wylma Phlyntstone", ] VALUES = [self._make_value(bare) for bare in BARE] result_set = self._make_partial_result_set(VALUES) iterator = _MockCancellableIterator(result_set) streamed = self._make_one(iterator) streamed._metadata = self._make_result_set_metadata(FIELDS) - streamed._pending_chunk = self._make_value(u"Phred ") + streamed._pending_chunk = self._make_value("Phred ") streamed._consume_next() self.assertEqual( list(streamed), - [[u"Phred Phlyntstone", BARE[1], BARE[2]], [BARE[3], BARE[4], BARE[5]]], + [["Phred Phlyntstone", BARE[1], BARE[2]], [BARE[3], BARE[4], BARE[5]]], ) self.assertEqual(streamed._current_row, [BARE[6]]) self.assertIsNone(streamed._pending_chunk) def test_consume_next_last_set(self): + from google.cloud.spanner_v1 import TypeCode + FIELDS = [ - self._make_scalar_field("full_name", "STRING"), - self._make_scalar_field("age", "INT64"), - self._make_scalar_field("married", "BOOL"), + self._make_scalar_field("full_name", TypeCode.STRING), + self._make_scalar_field("age", TypeCode.INT64), + self._make_scalar_field("married", TypeCode.BOOL), ] metadata = 
self._make_result_set_metadata(FIELDS) stats = self._make_result_set_stats( rows_returned="1", elapsed_time="1.23 secs", cpu_time="0.98 secs" ) - BARE = [u"Phred Phlyntstone", 42, True] + BARE = ["Phred Phlyntstone", 42, True] VALUES = [self._make_value(bare) for bare in BARE] result_set = self._make_partial_result_set(VALUES, stats=stats) iterator = _MockCancellableIterator(result_set) @@ -746,14 +941,19 @@ def test___iter___empty(self): self.assertEqual(found, []) def test___iter___one_result_set_partial(self): + from google.cloud.spanner_v1 import TypeCode + from google.protobuf.struct_pb2 import Value + FIELDS = [ - self._make_scalar_field("full_name", "STRING"), - self._make_scalar_field("age", "INT64"), - self._make_scalar_field("married", "BOOL"), + self._make_scalar_field("full_name", TypeCode.STRING), + self._make_scalar_field("age", TypeCode.INT64), + self._make_scalar_field("married", TypeCode.BOOL), ] metadata = self._make_result_set_metadata(FIELDS) - BARE = [u"Phred Phlyntstone", 42] + BARE = ["Phred Phlyntstone", 42] VALUES = [self._make_value(bare) for bare in BARE] + for val in VALUES: + self.assertIsInstance(val, Value) result_set = self._make_partial_result_set(VALUES, metadata=metadata) iterator = _MockCancellableIterator(result_set) streamed = self._make_one(iterator) @@ -764,20 +964,22 @@ def test___iter___one_result_set_partial(self): self.assertEqual(streamed.metadata, metadata) def test___iter___multiple_result_sets_filled(self): + from google.cloud.spanner_v1 import TypeCode + FIELDS = [ - self._make_scalar_field("full_name", "STRING"), - self._make_scalar_field("age", "INT64"), - self._make_scalar_field("married", "BOOL"), + self._make_scalar_field("full_name", TypeCode.STRING), + self._make_scalar_field("age", TypeCode.INT64), + self._make_scalar_field("married", TypeCode.BOOL), ] metadata = self._make_result_set_metadata(FIELDS) BARE = [ - u"Phred Phlyntstone", + "Phred Phlyntstone", 42, True, - u"Bharney Rhubble", + "Bharney Rhubble", 39, True, - u"Wylma Phlyntstone", + "Wylma Phlyntstone", 41, True, ] @@ -800,21 +1002,23 @@ def test___iter___multiple_result_sets_filled(self): self.assertIsNone(streamed._pending_chunk) def test___iter___w_existing_rows_read(self): + from google.cloud.spanner_v1 import TypeCode + FIELDS = [ - self._make_scalar_field("full_name", "STRING"), - self._make_scalar_field("age", "INT64"), - self._make_scalar_field("married", "BOOL"), + self._make_scalar_field("full_name", TypeCode.STRING), + self._make_scalar_field("age", TypeCode.INT64), + self._make_scalar_field("married", TypeCode.BOOL), ] metadata = self._make_result_set_metadata(FIELDS) - ALREADY = [[u"Pebbylz Phlyntstone", 4, False], [u"Dino Rhubble", 4, False]] + ALREADY = [["Pebbylz Phlyntstone", 4, False], ["Dino Rhubble", 4, False]] BARE = [ - u"Phred Phlyntstone", + "Phred Phlyntstone", 42, True, - u"Bharney Rhubble", + "Bharney Rhubble", 39, True, - u"Wylma Phlyntstone", + "Wylma Phlyntstone", 41, True, ] @@ -840,7 +1044,6 @@ def test___iter___w_existing_rows_read(self): class _MockCancellableIterator(object): - cancel_calls = 0 def __init__(self, *values): @@ -854,7 +1057,6 @@ def __next__(self): # pragma: NO COVER Py3k class TestStreamedResultSet_JSON_acceptance_tests(unittest.TestCase): - _json_tests = None def _getTargetClass(self): @@ -873,7 +1075,7 @@ def _load_json_test(self, test_name): filename = os.path.join(dirname, "streaming-read-acceptance-test.json") raw = _parse_streaming_read_acceptance_tests(filename) tests = self.__class__._json_tests = {} - for (name, 
partial_result_sets, results) in raw: + for name, partial_result_sets, results in raw: tests[name] = partial_result_sets, results return self.__class__._json_tests[test_name] @@ -959,14 +1161,13 @@ def test_multiple_row_chunks_non_chunks_interleaved(self): def _generate_partial_result_sets(prs_text_pbs): - from google.protobuf.json_format import Parse - from google.cloud.spanner_v1.proto.result_set_pb2 import PartialResultSet + from google.cloud.spanner_v1 import PartialResultSet partial_result_sets = [] for prs_text_pb in prs_text_pbs: - prs = PartialResultSet() - partial_result_sets.append(Parse(prs_text_pb, prs)) + prs = PartialResultSet.from_json(prs_text_pb) + partial_result_sets.append(prs) return partial_result_sets @@ -981,11 +1182,11 @@ def _normalize_int_array(cell): def _normalize_float(cell): - if cell == u"Infinity": + if cell == "Infinity": return float("inf") - if cell == u"-Infinity": + if cell == "-Infinity": return float("-inf") - if cell == u"NaN": + if cell == "NaN": return float("nan") if cell is not None: return float(cell) @@ -993,23 +1194,23 @@ def _normalize_float(cell): def _normalize_results(rows_data, fields): """Helper for _parse_streaming_read_acceptance_tests""" - from google.cloud.spanner_v1.proto import type_pb2 + from google.cloud.spanner_v1 import TypeCode normalized = [] for row_data in rows_data: row = [] assert len(row_data) == len(fields) for cell, field in zip(row_data, fields): - if field.type.code == type_pb2.INT64: + if field.type_.code == TypeCode.INT64: cell = int(cell) - if field.type.code == type_pb2.FLOAT64: + if field.type_.code == TypeCode.FLOAT64: cell = _normalize_float(cell) - elif field.type.code == type_pb2.BYTES: + elif field.type_.code == TypeCode.BYTES: cell = cell.encode("utf8") - elif field.type.code == type_pb2.ARRAY: - if field.type.array_element_type.code == type_pb2.INT64: + elif field.type_.code == TypeCode.ARRAY: + if field.type_.array_element_type.code == TypeCode.INT64: cell = _normalize_int_array(cell) - elif field.type.array_element_type.code == type_pb2.FLOAT64: + elif field.type_.array_element_type.code == TypeCode.FLOAT64: cell = [_normalize_float(subcell) for subcell in cell] row.append(cell) normalized.append(row) diff --git a/tests/unit/test_table.py b/tests/unit/test_table.py new file mode 100644 index 0000000000..3b0cb949aa --- /dev/null +++ b/tests/unit/test_table.py @@ -0,0 +1,131 @@ +# Copyright 2021 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
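+
+# Unit tests for the google.cloud.spanner_v1.table.Table helper.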
+ +import unittest + +from google.cloud.exceptions import NotFound +import mock + +from google.cloud.spanner_v1.types import ( + StructType, + Type, + TypeCode, +) + + +class _BaseTest(unittest.TestCase): + TABLE_ID = "test_table" + TABLE_SCHEMA = "" + + def _make_one(self, *args, **kwargs): + return self._get_target_class()(*args, **kwargs) + + +class TestTable(_BaseTest): + def _get_target_class(self): + from google.cloud.spanner_v1.table import Table + + return Table + + def test_ctor(self): + from google.cloud.spanner_v1.database import Database + + db = mock.create_autospec(Database, instance=True) + table = self._make_one(self.TABLE_ID, db) + self.assertEqual(table.table_id, self.TABLE_ID) + + def test_exists_executes_query(self): + from google.cloud.spanner_v1.database import Database, SnapshotCheckout + from google.cloud.spanner_v1.snapshot import Snapshot + from google.cloud.spanner_v1.table import _EXISTS_TEMPLATE + + db = mock.create_autospec(Database, instance=True) + checkout = mock.create_autospec(SnapshotCheckout, instance=True) + snapshot = mock.create_autospec(Snapshot, instance=True) + db.snapshot.return_value = checkout + checkout.__enter__.return_value = snapshot + snapshot.execute_sql.return_value = [[False]] + table = self._make_one(self.TABLE_ID, db, schema_name=self.TABLE_SCHEMA) + exists = table.exists() + self.assertFalse(exists) + snapshot.execute_sql.assert_called_with( + _EXISTS_TEMPLATE.format( + "WHERE TABLE_SCHEMA = @schema_name AND TABLE_NAME = @table_id" + ), + params={"schema_name": self.TABLE_SCHEMA, "table_id": self.TABLE_ID}, + param_types={ + "schema_name": Type(code=TypeCode.STRING), + "table_id": Type(code=TypeCode.STRING), + }, + ) + + def test_schema_executes_query(self): + from google.cloud.spanner_v1.database import Database, SnapshotCheckout + from google.cloud.spanner_v1.snapshot import Snapshot + from google.cloud.spanner_v1.table import _GET_SCHEMA_TEMPLATE + + db = mock.create_autospec(Database, instance=True) + db.default_schema_name = "" + checkout = mock.create_autospec(SnapshotCheckout, instance=True) + snapshot = mock.create_autospec(Snapshot, instance=True) + db.snapshot.return_value = checkout + checkout.__enter__.return_value = snapshot + table = self._make_one(self.TABLE_ID, db, schema_name=self.TABLE_SCHEMA) + schema = table.schema + self.assertIsInstance(schema, list) + expected_query = _GET_SCHEMA_TEMPLATE.format("`{}`".format(self.TABLE_ID)) + snapshot.execute_sql.assert_called_with(expected_query) + + def test_schema_returns_cache(self): + from google.cloud.spanner_v1.database import Database + + db = mock.create_autospec(Database, instance=True) + table = self._make_one(self.TABLE_ID, db) + table._schema = [StructType.Field(name="col1")] + schema = table.schema + self.assertEqual(schema, [StructType.Field(name="col1")]) + + def test_reload_raises_notfound(self): + from google.cloud.spanner_v1.database import Database, SnapshotCheckout + from google.cloud.spanner_v1.snapshot import Snapshot + + db = mock.create_autospec(Database, instance=True) + checkout = mock.create_autospec(SnapshotCheckout, instance=True) + snapshot = mock.create_autospec(Snapshot, instance=True) + db.snapshot.return_value = checkout + checkout.__enter__.return_value = snapshot + snapshot.execute_sql.return_value = [[False]] + table = self._make_one(self.TABLE_ID, db) + with self.assertRaises(NotFound): + table.reload() + + def test_reload_executes_queries(self): + from google.cloud.spanner_v1.database import Database, SnapshotCheckout + from 
google.cloud.spanner_v1.snapshot import Snapshot + from google.cloud.spanner_v1.streamed import StreamedResultSet + + db = mock.create_autospec(Database, instance=True) + checkout = mock.create_autospec(SnapshotCheckout, instance=True) + snapshot = mock.create_autospec(Snapshot, instance=True) + results = mock.create_autospec(StreamedResultSet, instance=True) + db.snapshot.return_value = checkout + checkout.__enter__.return_value = snapshot + results.fields = [StructType.Field(name="col1")] + snapshot.execute_sql.side_effect = [ + [[True]], + results, + ] + table = self._make_one(self.TABLE_ID, db) + table.reload() + self.assertEqual(table.schema, [StructType.Field(name="col1")]) diff --git a/tests/unit/test_transaction.py b/tests/unit/test_transaction.py index 6ae24aedab..7a33372dae 100644 --- a/tests/unit/test_transaction.py +++ b/tests/unit/test_transaction.py @@ -11,19 +11,63 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. - - -import unittest +from threading import Lock +from typing import Mapping +from datetime import timedelta import mock +from google.cloud.spanner_v1 import ( + RequestOptions, + CommitRequest, + Mutation, + KeySet, + BeginTransactionRequest, + TransactionOptions, + ResultSetMetadata, +) +from google.cloud.spanner_v1 import DefaultTransactionOptions +from google.cloud.spanner_v1 import Type +from google.cloud.spanner_v1 import TypeCode +from google.api_core.retry import Retry +from google.api_core import gapic_v1 +from google.cloud.spanner_v1._helpers import ( + AtomicCounter, + _metadata_with_request_id, +) +from google.cloud.spanner_v1.batch import _make_write_pb +from google.cloud.spanner_v1.database import Database +from google.cloud.spanner_v1.transaction import Transaction +from google.cloud.spanner_v1.request_id_header import ( + REQ_RAND_PROCESS_ID, + build_request_id, +) +from tests._builders import ( + build_transaction, + build_precommit_token_pb, + build_session, + build_commit_response_pb, + build_transaction_pb, +) + +from tests._helpers import ( + HAS_OPENTELEMETRY_INSTALLED, + LIB_VERSION, + OpenTelemetryBase, + StatusCode, + enrich_with_otel_scope, +) + +KEYS = [[0], [1], [2]] +KEYSET = KeySet(keys=KEYS) +KEYSET_PB = KEYSET._to_pb() TABLE_NAME = "citizens" COLUMNS = ["email", "first_name", "last_name", "age"] -VALUES = [ - ["phred@exammple.com", "Phred", "Phlyntstone", 32], - ["bharney@example.com", "Bharney", "Rhubble", 31], -] +VALUE_1 = ["phred@exammple.com", "Phred", "Phlyntstone", 32] +VALUE_2 = ["bharney@example.com", "Bharney", "Rhubble", 31] +VALUES = [VALUE_1, VALUE_2] + DML_QUERY = """\ INSERT INTO citizens(first_name, last_name, age) VALUES ("Phred", "Phlyntstone", 32) @@ -33,11 +77,21 @@ VALUES ("Phred", "Phlyntstone", @age) """ PARAMS = {"age": 30} -PARAM_TYPES = {"age": "INT64"} +PARAM_TYPES = {"age": Type(code=TypeCode.INT64)} + +TRANSACTION_ID = b"transaction-id" +TRANSACTION_TAG = "transaction-tag" + +PRECOMMIT_TOKEN_PB_0 = build_precommit_token_pb(precommit_token=b"0", seq_num=0) +PRECOMMIT_TOKEN_PB_1 = build_precommit_token_pb(precommit_token=b"1", seq_num=1) +PRECOMMIT_TOKEN_PB_2 = build_precommit_token_pb(precommit_token=b"2", seq_num=2) +DELETE_MUTATION = Mutation(delete=Mutation.Delete(table=TABLE_NAME, key_set=KEYSET_PB)) +INSERT_MUTATION = Mutation(insert=_make_write_pb(TABLE_NAME, COLUMNS, VALUES)) +UPDATE_MUTATION = Mutation(update=_make_write_pb(TABLE_NAME, COLUMNS, VALUES)) -class 
TestTransaction(unittest.TestCase): +class TestTransaction(OpenTelemetryBase): PROJECT_ID = "project-id" INSTANCE_ID = "instance-id" INSTANCE_NAME = "projects/" + PROJECT_ID + "/instances/" + INSTANCE_ID @@ -45,7 +99,6 @@ class TestTransaction(unittest.TestCase): DATABASE_NAME = INSTANCE_NAME + "/databases/" + DATABASE_ID SESSION_ID = "session-id" SESSION_NAME = DATABASE_NAME + "/sessions/" + SESSION_ID - TRANSACTION_ID = b"DEADBEEF" def _getTargetClass(self): from google.cloud.spanner_v1.transaction import Transaction @@ -58,69 +111,33 @@ def _make_one(self, session, *args, **kwargs): return transaction def _make_spanner_api(self): - import google.cloud.spanner_v1.gapic.spanner_client + from google.cloud.spanner_v1 import SpannerClient - return mock.create_autospec( - google.cloud.spanner_v1.gapic.spanner_client.SpannerClient, instance=True - ) - - def test_ctor_session_w_existing_txn(self): - session = _Session() - session._transaction = object() - with self.assertRaises(ValueError): - self._make_one(session) + return mock.create_autospec(SpannerClient, instance=True) def test_ctor_defaults(self): - session = _Session() - transaction = self._make_one(session) - self.assertIs(transaction._session, session) - self.assertIsNone(transaction._transaction_id) - self.assertIsNone(transaction.committed) - self.assertFalse(transaction.rolled_back) - self.assertTrue(transaction._multi_use) - self.assertEqual(transaction._execute_sql_count, 0) - - def test__check_state_not_begun(self): - session = _Session() - transaction = self._make_one(session) - with self.assertRaises(ValueError): - transaction._check_state() + session = build_session() + transaction = Transaction(session=session) - def test__check_state_already_committed(self): - session = _Session() - transaction = self._make_one(session) - transaction._transaction_id = self.TRANSACTION_ID - transaction.committed = object() - with self.assertRaises(ValueError): - transaction._check_state() - - def test__check_state_already_rolled_back(self): - session = _Session() - transaction = self._make_one(session) - transaction._transaction_id = self.TRANSACTION_ID - transaction.rolled_back = True - with self.assertRaises(ValueError): - transaction._check_state() + # Attributes from _SessionWrapper + self.assertEqual(transaction._session, session) - def test__check_state_ok(self): - session = _Session() - transaction = self._make_one(session) - transaction._transaction_id = self.TRANSACTION_ID - transaction._check_state() # does not raise + # Attributes from _SnapshotBase + self.assertFalse(transaction._read_only) + self.assertTrue(transaction._multi_use) + self.assertEqual(transaction._execute_sql_request_count, 0) + self.assertEqual(transaction._read_request_count, 0) + self.assertIsNone(transaction._transaction_id) + self.assertIsNone(transaction._precommit_token) + self.assertIsInstance(transaction._lock, type(Lock())) - def test__make_txn_selector(self): - session = _Session() - transaction = self._make_one(session) - transaction._transaction_id = self.TRANSACTION_ID - selector = transaction._make_txn_selector() - self.assertEqual(selector.id, self.TRANSACTION_ID) + # Attributes from _BatchBase + self.assertEqual(transaction._mutations, []) + self.assertIsNone(transaction._precommit_token) + self.assertIsNone(transaction.committed) + self.assertIsNone(transaction.commit_stats) - def test_begin_already_begun(self): - session = _Session() - transaction = self._make_one(session) - transaction._transaction_id = self.TRANSACTION_ID - with 
self.assertRaises(ValueError): - transaction.begin() + self.assertFalse(transaction.rolled_back) def test_begin_already_rolled_back(self): session = _Session() @@ -129,6 +146,8 @@ def test_begin_already_rolled_back(self): with self.assertRaises(ValueError): transaction.begin() + self.assertNoSpans() + def test_begin_already_committed(self): session = _Session() transaction = self._make_one(session) @@ -136,68 +155,49 @@ def test_begin_already_committed(self): with self.assertRaises(ValueError): transaction.begin() - def test_begin_w_other_error(self): - database = _Database() - database.spanner_api = self._make_spanner_api() - database.spanner_api.begin_transaction.side_effect = RuntimeError() - session = _Session(database) - transaction = self._make_one(session) - - with self.assertRaises(RuntimeError): - transaction.begin() - - def test_begin_ok(self): - from google.cloud.spanner_v1.proto.transaction_pb2 import ( - Transaction as TransactionPB, - ) + self.assertNoSpans() - transaction_pb = TransactionPB(id=self.TRANSACTION_ID) + def test_rollback_not_begun(self): database = _Database() - api = database.spanner_api = _FauxSpannerAPI( - _begin_transaction_response=transaction_pb - ) + api = database.spanner_api = self._make_spanner_api() session = _Session(database) transaction = self._make_one(session) - txn_id = transaction.begin() + transaction.rollback() + self.assertTrue(transaction.rolled_back) - self.assertEqual(txn_id, self.TRANSACTION_ID) - self.assertEqual(transaction._transaction_id, self.TRANSACTION_ID) + # Since there was no transaction to be rolled back, rollback rpc is not called. + api.rollback.assert_not_called() - session_id, txn_options, metadata = api._begun - self.assertEqual(session_id, session.name) - self.assertTrue(txn_options.HasField("read_write")) - self.assertEqual(metadata, [("google-cloud-resource-prefix", database.name)]) - - def test_rollback_not_begun(self): - session = _Session() - transaction = self._make_one(session) - with self.assertRaises(ValueError): - transaction.rollback() + self.assertNoSpans() def test_rollback_already_committed(self): session = _Session() transaction = self._make_one(session) - transaction._transaction_id = self.TRANSACTION_ID + transaction._transaction_id = TRANSACTION_ID transaction.committed = object() with self.assertRaises(ValueError): transaction.rollback() + self.assertNoSpans() + def test_rollback_already_rolled_back(self): session = _Session() transaction = self._make_one(session) - transaction._transaction_id = self.TRANSACTION_ID + transaction._transaction_id = TRANSACTION_ID transaction.rolled_back = True with self.assertRaises(ValueError): transaction.rollback() + self.assertNoSpans() + def test_rollback_w_other_error(self): database = _Database() database.spanner_api = self._make_spanner_api() database.spanner_api.rollback.side_effect = RuntimeError("other error") session = _Session(database) transaction = self._make_one(session) - transaction._transaction_id = self.TRANSACTION_ID + transaction._transaction_id = TRANSACTION_ID transaction.insert(TABLE_NAME, COLUMNS, VALUES) with self.assertRaises(RuntimeError): @@ -205,6 +205,15 @@ def test_rollback_w_other_error(self): self.assertFalse(transaction.rolled_back) + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + self.assertSpanAttributes( + "CloudSpanner.Transaction.rollback", + status=StatusCode.ERROR, + attributes=self._build_span_attributes( + database, x_goog_spanner_request_id=req_id + ), + ) + def 
test_rollback_ok(self): from google.protobuf.empty_pb2 import Empty @@ -213,48 +222,137 @@ def test_rollback_ok(self): api = database.spanner_api = _FauxSpannerAPI(_rollback_response=empty_pb) session = _Session(database) transaction = self._make_one(session) - transaction._transaction_id = self.TRANSACTION_ID + transaction._transaction_id = TRANSACTION_ID transaction.replace(TABLE_NAME, COLUMNS, VALUES) transaction.rollback() self.assertTrue(transaction.rolled_back) - self.assertIsNone(session._transaction) session_id, txn_id, metadata = api._rolled_back self.assertEqual(session_id, session.name) - self.assertEqual(txn_id, self.TRANSACTION_ID) - self.assertEqual(metadata, [("google-cloud-resource-prefix", database.name)]) + self.assertEqual(txn_id, TRANSACTION_ID) + req_id = f"1.{REQ_RAND_PROCESS_ID}.{database._nth_client_id}.{database._channel_id}.1.1" + self.assertEqual( + metadata, + [ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + req_id, + ), + ], + ) + + self.assertSpanAttributes( + "CloudSpanner.Transaction.rollback", + attributes=self._build_span_attributes( + database, x_goog_spanner_request_id=req_id + ), + ) def test_commit_not_begun(self): - session = _Session() + database = _Database() + database.spanner_api = self._make_spanner_api() + session = _Session(database) transaction = self._make_one(session) with self.assertRaises(ValueError): transaction.commit() + if not HAS_OPENTELEMETRY_INSTALLED: + return + + span_list = self.get_finished_spans() + got_span_names = [span.name for span in span_list] + want_span_names = ["CloudSpanner.Transaction.commit"] + self.assertEqual(got_span_names, want_span_names) + + got_span_events_statuses = self.finished_spans_events_statuses() + want_span_events_statuses = [ + ( + "exception", + { + "exception.type": "ValueError", + "exception.message": "Transaction has not begun.", + "exception.stacktrace": "EPHEMERAL", + "exception.escaped": "False", + }, + ) + ] + self.assertEqual(got_span_events_statuses, want_span_events_statuses) + def test_commit_already_committed(self): - session = _Session() + database = _Database() + database.spanner_api = self._make_spanner_api() + session = _Session(database) transaction = self._make_one(session) - transaction._transaction_id = self.TRANSACTION_ID + transaction._transaction_id = TRANSACTION_ID transaction.committed = object() with self.assertRaises(ValueError): transaction.commit() + if not HAS_OPENTELEMETRY_INSTALLED: + return + + span_list = self.get_finished_spans() + got_span_names = [span.name for span in span_list] + want_span_names = ["CloudSpanner.Transaction.commit"] + self.assertEqual(got_span_names, want_span_names) + + got_span_events_statuses = self.finished_spans_events_statuses() + want_span_events_statuses = [ + ( + "exception", + { + "exception.type": "ValueError", + "exception.message": "Transaction already committed.", + "exception.stacktrace": "EPHEMERAL", + "exception.escaped": "False", + }, + ) + ] + self.assertEqual(got_span_events_statuses, want_span_events_statuses) + def test_commit_already_rolled_back(self): - session = _Session() + database = _Database() + database.spanner_api = self._make_spanner_api() + session = _Session(database) transaction = self._make_one(session) - transaction._transaction_id = self.TRANSACTION_ID + transaction._transaction_id = TRANSACTION_ID transaction.rolled_back = True with self.assertRaises(ValueError): transaction.commit() + if not 
HAS_OPENTELEMETRY_INSTALLED: + return + + span_list = self.get_finished_spans() + got_span_names = [span.name for span in span_list] + want_span_names = ["CloudSpanner.Transaction.commit"] + self.assertEqual(got_span_names, want_span_names) + + got_span_events_statuses = self.finished_spans_events_statuses() + want_span_events_statuses = [ + ( + "exception", + { + "exception.type": "ValueError", + "exception.message": "Transaction already rolled back.", + "exception.stacktrace": "EPHEMERAL", + "exception.escaped": "False", + }, + ) + ] + self.assertEqual(got_span_events_statuses, want_span_events_statuses) + def test_commit_w_other_error(self): database = _Database() database.spanner_api = self._make_spanner_api() database.spanner_api.commit.side_effect = RuntimeError() session = _Session(database) transaction = self._make_one(session) - transaction._transaction_id = self.TRANSACTION_ID + transaction._transaction_id = TRANSACTION_ID transaction.replace(TABLE_NAME, COLUMNS, VALUES) with self.assertRaises(RuntimeError): @@ -262,57 +360,290 @@ def test_commit_w_other_error(self): self.assertIsNone(transaction.committed) - def _commit_helper(self, mutate=True): - import datetime - from google.cloud.spanner_v1.proto.spanner_pb2 import CommitResponse - from google.cloud.spanner_v1.keyset import KeySet - from google.cloud._helpers import UTC - from google.cloud._helpers import _datetime_to_pb_timestamp - - now = datetime.datetime.utcnow().replace(tzinfo=UTC) - now_pb = _datetime_to_pb_timestamp(now) - keys = [[0], [1], [2]] - keyset = KeySet(keys=keys) - response = CommitResponse(commit_timestamp=now_pb) - database = _Database() - api = database.spanner_api = _FauxSpannerAPI(_commit_response=response) - session = _Session(database) - transaction = self._make_one(session) - transaction._transaction_id = self.TRANSACTION_ID + req_id = f"1.{REQ_RAND_PROCESS_ID}.{_Client.NTH_CLIENT.value}.1.1.1" + self.assertSpanAttributes( + "CloudSpanner.Transaction.commit", + status=StatusCode.ERROR, + attributes=self._build_span_attributes( + database, + x_goog_spanner_request_id=req_id, + num_mutations=1, + ), + ) - if mutate: - transaction.delete(TABLE_NAME, keyset) + def _commit_helper( + self, + mutations=None, + return_commit_stats=False, + request_options=None, + max_commit_delay_in=None, + retry_for_precommit_token=None, + is_multiplexed=False, + expected_begin_mutation=None, + ): + from google.cloud.spanner_v1 import CommitRequest - transaction.commit() + # [A] Build transaction + # --------------------- - self.assertEqual(transaction.committed, now) - self.assertIsNone(session._transaction) + session = build_session(is_multiplexed=is_multiplexed) + transaction = build_transaction(session=session) - session_id, mutations, txn_id, metadata = api._committed - self.assertEqual(session_id, session.name) - self.assertEqual(txn_id, self.TRANSACTION_ID) - self.assertEqual(mutations, transaction._mutations) - self.assertEqual(metadata, [("google-cloud-resource-prefix", database.name)]) + database = session._database + api = database.spanner_api - def test_commit_no_mutations(self): - self._commit_helper(mutate=False) + transaction.transaction_tag = TRANSACTION_TAG - def test_commit_w_mutations(self): - self._commit_helper(mutate=True) + if mutations is not None: + transaction._mutations = mutations - def test__make_params_pb_w_params_wo_param_types(self): - session = _Session() - transaction = self._make_one(session) + # [B] Build responses + # ------------------- - with self.assertRaises(ValueError): - 
transaction._make_params_pb(PARAMS, None) + # Mock begin API call. + begin_precommit_token_pb = PRECOMMIT_TOKEN_PB_0 + begin_transaction = api.begin_transaction + begin_transaction.return_value = build_transaction_pb( + id=TRANSACTION_ID, precommit_token=begin_precommit_token_pb + ) - def test__make_params_pb_wo_params_w_param_types(self): - session = _Session() - transaction = self._make_one(session) + # Mock commit API call. + retry_precommit_token = PRECOMMIT_TOKEN_PB_1 + commit_response_pb = build_commit_response_pb( + precommit_token=retry_precommit_token if retry_for_precommit_token else None + ) + if return_commit_stats: + commit_response_pb.commit_stats.mutation_count = 4 + + commit = api.commit + commit.return_value = commit_response_pb + + # [C] Begin transaction, add mutations, and execute commit + # -------------------------------------------------------- + + # Transaction must be begun unless it is mutations-only. + if mutations is None: + transaction._transaction_id = TRANSACTION_ID + + commit_timestamp = transaction.commit( + return_commit_stats=return_commit_stats, + request_options=request_options, + max_commit_delay=max_commit_delay_in, + ) + + # [D] Verify results + # ------------------ + + # Verify transaction state. + self.assertEqual(transaction.committed, commit_timestamp) + + if return_commit_stats: + self.assertEqual(transaction.commit_stats.mutation_count, 4) + + nth_request_counter = AtomicCounter() + base_metadata = [ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ] + + # Verify begin API call. + if mutations is not None: + self.assertEqual(transaction._transaction_id, TRANSACTION_ID) + + expected_begin_transaction_request = BeginTransactionRequest( + session=session.name, + options=TransactionOptions(read_write=TransactionOptions.ReadWrite()), + mutation_key=expected_begin_mutation, + ) + + expected_begin_metadata = base_metadata.copy() + expected_begin_metadata.append( + ( + "x-goog-spanner-request-id", + self._build_request_id( + database, nth_request=nth_request_counter.increment() + ), + ) + ) + + begin_transaction.assert_called_once_with( + request=expected_begin_transaction_request, + metadata=expected_begin_metadata, + ) + + # Verify commit API call(s). 
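+        # One commit RPC is expected in the normal case; when the first commit
+        # response carries a precommit token, the commit is retried and a
+        # second call is expected.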
+ self.assertEqual(commit.call_count, 1 if not retry_for_precommit_token else 2) + + if request_options is None: + expected_request_options = RequestOptions(transaction_tag=TRANSACTION_TAG) + elif type(request_options) is dict: + expected_request_options = RequestOptions(request_options) + expected_request_options.transaction_tag = TRANSACTION_TAG + expected_request_options.request_tag = None + else: + expected_request_options = request_options + expected_request_options.transaction_tag = TRANSACTION_TAG + expected_request_options.request_tag = None + + common_expected_commit_response_args = { + "session": session.name, + "transaction_id": TRANSACTION_ID, + "return_commit_stats": return_commit_stats, + "max_commit_delay": max_commit_delay_in, + "request_options": expected_request_options, + } + + # Only include precommit_token if the session is multiplexed and token exists + commit_request_args = { + "mutations": transaction._mutations, + **common_expected_commit_response_args, + } + if session.is_multiplexed and transaction._precommit_token is not None: + commit_request_args["precommit_token"] = transaction._precommit_token + + expected_commit_request = CommitRequest(**commit_request_args) + + expected_commit_metadata = base_metadata.copy() + expected_commit_metadata.append( + ( + "x-goog-spanner-request-id", + self._build_request_id( + database, nth_request=nth_request_counter.increment() + ), + ) + ) + commit.assert_any_call( + request=expected_commit_request, + metadata=expected_commit_metadata, + ) + + if retry_for_precommit_token: + expected_retry_request = CommitRequest( + precommit_token=retry_precommit_token, + **common_expected_commit_response_args, + ) + expected_retry_metadata = base_metadata.copy() + expected_retry_metadata.append( + ( + "x-goog-spanner-request-id", + self._build_request_id( + database, nth_request=nth_request_counter.increment() + ), + ) + ) + commit.assert_any_call( + request=expected_retry_request, + metadata=expected_retry_metadata, + ) + + if not HAS_OPENTELEMETRY_INSTALLED: + return + + # Verify span names. + expected_names = ["CloudSpanner.Transaction.commit"] + if mutations is not None: + expected_names.append("CloudSpanner.Transaction.begin") + + actual_names = [span.name for span in self.get_finished_spans()] + self.assertEqual(actual_names, expected_names) + + # Verify span events statuses. + expected_statuses = [("Starting Commit", {})] + if retry_for_precommit_token: + expected_statuses.append( + ("Transaction Commit Attempt Failed. 
Retrying", {}) + ) + expected_statuses.append(("Commit Done", {})) + + actual_statuses = self.finished_spans_events_statuses() + self.assertEqual(actual_statuses, expected_statuses) + + def test_commit_mutations_only_not_multiplexed(self): + self._commit_helper(mutations=[DELETE_MUTATION], is_multiplexed=False) + + def test_commit_mutations_only_multiplexed_w_non_insert_mutation(self): + self._commit_helper( + mutations=[DELETE_MUTATION], + is_multiplexed=True, + expected_begin_mutation=DELETE_MUTATION, + ) + + def test_commit_mutations_only_multiplexed_w_insert_mutation(self): + self._commit_helper( + mutations=[INSERT_MUTATION], + is_multiplexed=True, + expected_begin_mutation=INSERT_MUTATION, + ) + def test_commit_mutations_only_multiplexed_w_non_insert_and_insert_mutations(self): + self._commit_helper( + mutations=[INSERT_MUTATION, DELETE_MUTATION], + is_multiplexed=True, + expected_begin_mutation=DELETE_MUTATION, + ) + + def test_commit_mutations_only_multiplexed_w_multiple_insert_mutations(self): + insert_1 = Mutation(insert=_make_write_pb(TABLE_NAME, COLUMNS, [VALUE_1])) + insert_2 = Mutation( + insert=_make_write_pb(TABLE_NAME, COLUMNS, [VALUE_1, VALUE_2]) + ) + + self._commit_helper( + mutations=[insert_1, insert_2], + is_multiplexed=True, + expected_begin_mutation=insert_2, + ) + + def test_commit_mutations_only_multiplexed_w_multiple_non_insert_mutations(self): + mutations = [UPDATE_MUTATION, DELETE_MUTATION] + self._commit_helper( + mutations=mutations, + is_multiplexed=True, + expected_begin_mutation=mutations[0], + ) + + def test_commit_w_return_commit_stats(self): + self._commit_helper(return_commit_stats=True) + + def test_commit_w_max_commit_delay(self): + self._commit_helper(max_commit_delay_in=timedelta(milliseconds=100)) + + def test_commit_w_request_tag_success(self): + request_options = RequestOptions(request_tag="tag-1") + self._commit_helper(request_options=request_options) + + def test_commit_w_transaction_tag_ignored_success(self): + request_options = RequestOptions(transaction_tag="tag-1-1") + self._commit_helper(request_options=request_options) + + def test_commit_w_request_and_transaction_tag_success(self): + request_options = RequestOptions(request_tag="tag-1", transaction_tag="tag-1-1") + self._commit_helper(request_options=request_options) + + def test_commit_w_request_and_transaction_tag_dictionary_success(self): + request_options = {"request_tag": "tag-1", "transaction_tag": "tag-1-1"} + self._commit_helper(request_options=request_options) + + def test_commit_w_incorrect_tag_dictionary_error(self): + request_options = {"incorrect_tag": "tag-1-1"} with self.assertRaises(ValueError): - transaction._make_params_pb(None, PARAM_TYPES) + self._commit_helper(request_options=request_options) + + def test_commit_w_retry_for_precommit_token(self): + self._commit_helper(retry_for_precommit_token=True) + + def test_commit_w_retry_for_precommit_token_then_error(self): + transaction = build_transaction() + + commit = transaction._session._database.spanner_api.commit + commit.side_effect = [ + build_commit_response_pb(precommit_token=PRECOMMIT_TOKEN_PB_0), + RuntimeError(), + ] + + transaction.begin() + with self.assertRaises(RuntimeError): + transaction.commit() def test__make_params_pb_w_params_w_param_types(self): from google.protobuf.struct_pb2 import Struct @@ -334,43 +665,62 @@ def test_execute_update_other_error(self): database.spanner_api.execute_sql.side_effect = RuntimeError() session = _Session(database) transaction = self._make_one(session) - 
transaction._transaction_id = self.TRANSACTION_ID + transaction._transaction_id = TRANSACTION_ID with self.assertRaises(RuntimeError): transaction.execute_update(DML_QUERY) - def test_execute_update_w_params_wo_param_types(self): - database = _Database() - database.spanner_api = self._make_spanner_api() - session = _Session(database) - session = _Session() - transaction = self._make_one(session) - transaction._transaction_id = self.TRANSACTION_ID - - with self.assertRaises(ValueError): - transaction.execute_update(DML_QUERY_WITH_PARAM, PARAMS) - - def _execute_update_helper(self, count=0, query_options=None): + def _execute_update_helper( + self, + count=0, + query_options=None, + request_options=None, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + begin=True, + use_multiplexed=False, + ): from google.protobuf.struct_pb2 import Struct - from google.cloud.spanner_v1.proto.result_set_pb2 import ( + from google.cloud.spanner_v1 import ( ResultSet, ResultSetStats, ) - from google.cloud.spanner_v1.proto.transaction_pb2 import TransactionSelector + from google.cloud.spanner_v1 import TransactionSelector from google.cloud.spanner_v1._helpers import ( _make_value_pb, _merge_query_options, ) + from google.cloud.spanner_v1 import ExecuteSqlRequest MODE = 2 # PROFILE - stats_pb = ResultSetStats(row_count_exact=1) database = _Database() api = database.spanner_api = self._make_spanner_api() - api.execute_sql.return_value = ResultSet(stats=stats_pb) + + # If the transaction had not already begun, the first result set will include + # metadata with information about the transaction. Precommit tokens will be + # included in the result sets if the transaction is on a multiplexed session. + transaction_pb = None if begin else build_transaction_pb(id=TRANSACTION_ID) + metadata_pb = ResultSetMetadata(transaction=transaction_pb) + precommit_token_pb = PRECOMMIT_TOKEN_PB_0 if use_multiplexed else None + + api.execute_sql.return_value = ResultSet( + stats=ResultSetStats(row_count_exact=1), + metadata=metadata_pb, + precommit_token=precommit_token_pb, + ) + session = _Session(database) transaction = self._make_one(session) - transaction._transaction_id = self.TRANSACTION_ID - transaction._execute_sql_count = count + transaction.transaction_tag = TRANSACTION_TAG + transaction._execute_sql_request_count = count + + if begin: + transaction._transaction_id = TRANSACTION_ID + + if request_options is None: + request_options = RequestOptions() + elif type(request_options) is dict: + request_options = RequestOptions(request_options) row_count = transaction.execute_update( DML_QUERY_WITH_PARAM, @@ -378,11 +728,21 @@ def _execute_update_helper(self, count=0, query_options=None): PARAM_TYPES, query_mode=MODE, query_options=query_options, + request_options=request_options, + retry=retry, + timeout=timeout, ) self.assertEqual(row_count, 1) - expected_transaction = TransactionSelector(id=self.TRANSACTION_ID) + expected_transaction = ( + TransactionSelector(id=transaction._transaction_id) + if begin + else TransactionSelector( + begin=TransactionOptions(read_write=TransactionOptions.ReadWrite()) + ) + ) + expected_params = Struct( fields={key: _make_value_pb(value) for (key, value) in PARAMS.items()} ) @@ -392,70 +752,156 @@ def _execute_update_helper(self, count=0, query_options=None): expected_query_options = _merge_query_options( expected_query_options, query_options ) + expected_request_options = request_options + expected_request_options.transaction_tag = TRANSACTION_TAG - 
api.execute_sql.assert_called_once_with( - self.SESSION_NAME, - DML_QUERY_WITH_PARAM, + expected_request = ExecuteSqlRequest( + session=self.SESSION_NAME, + sql=DML_QUERY_WITH_PARAM, transaction=expected_transaction, params=expected_params, param_types=PARAM_TYPES, query_mode=MODE, query_options=expected_query_options, + request_options=request_options, seqno=count, - metadata=[("google-cloud-resource-prefix", database.name)], ) + api.execute_sql.assert_called_once_with( + request=expected_request, + retry=retry, + timeout=timeout, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{_Client.NTH_CLIENT.value}.1.1.1", + ), + ], + ) + + self.assertSpanAttributes( + "CloudSpanner.Transaction.execute_update", + attributes=self._build_span_attributes( + database, **{"db.statement": DML_QUERY_WITH_PARAM} + ), + ) + + self.assertEqual(transaction._transaction_id, TRANSACTION_ID) + self.assertEqual(transaction._execute_sql_request_count, count + 1) - self.assertEqual(transaction._execute_sql_count, count + 1) + if use_multiplexed: + self.assertEqual(transaction._precommit_token, PRECOMMIT_TOKEN_PB_0) def test_execute_update_new_transaction(self): self._execute_update_helper() + def test_execute_update_w_request_tag_success(self): + request_options = RequestOptions( + request_tag="tag-1", + ) + self._execute_update_helper(request_options=request_options) + + def test_execute_update_w_transaction_tag_success(self): + request_options = RequestOptions( + transaction_tag="tag-1-1", + ) + self._execute_update_helper(request_options=request_options) + + def test_execute_update_w_request_and_transaction_tag_success(self): + request_options = RequestOptions( + request_tag="tag-1", + transaction_tag="tag-1-1", + ) + self._execute_update_helper(request_options=request_options) + + def test_execute_update_w_request_and_transaction_tag_dictionary_success(self): + request_options = {"request_tag": "tag-1", "transaction_tag": "tag-1-1"} + self._execute_update_helper(request_options=request_options) + + def test_execute_update_w_incorrect_tag_dictionary_error(self): + request_options = {"incorrect_tag": "tag-1-1"} + with self.assertRaises(ValueError): + self._execute_update_helper(request_options=request_options) + def test_execute_update_w_count(self): self._execute_update_helper(count=1) + def test_execute_update_w_timeout_param(self): + self._execute_update_helper(timeout=2.0) + + def test_execute_update_w_retry_param(self): + self._execute_update_helper(retry=Retry(deadline=60)) + + def test_execute_update_w_timeout_and_retry_params(self): + self._execute_update_helper(retry=Retry(deadline=60), timeout=2.0) + def test_execute_update_error(self): database = _Database() database.spanner_api = self._make_spanner_api() database.spanner_api.execute_sql.side_effect = RuntimeError() session = _Session(database) transaction = self._make_one(session) - transaction._transaction_id = self.TRANSACTION_ID + transaction._transaction_id = TRANSACTION_ID with self.assertRaises(RuntimeError): transaction.execute_update(DML_QUERY) - self.assertEqual(transaction._execute_sql_count, 1) + self.assertEqual(transaction._execute_sql_request_count, 1) def test_execute_update_w_query_options(self): - from google.cloud.spanner_v1.proto.spanner_pb2 import ExecuteSqlRequest + from google.cloud.spanner_v1 import ExecuteSqlRequest self._execute_update_helper( 
query_options=ExecuteSqlRequest.QueryOptions(optimizer_version="3") ) + def test_execute_update_wo_begin(self): + self._execute_update_helper(begin=False) + + def test_execute_update_w_precommit_token(self): + self._execute_update_helper(use_multiplexed=True) + + def test_execute_update_w_request_options(self): + self._execute_update_helper( + request_options=RequestOptions( + priority=RequestOptions.Priority.PRIORITY_MEDIUM + ) + ) + def test_batch_update_other_error(self): database = _Database() database.spanner_api = self._make_spanner_api() database.spanner_api.execute_batch_dml.side_effect = RuntimeError() session = _Session(database) transaction = self._make_one(session) - transaction._transaction_id = self.TRANSACTION_ID + transaction._transaction_id = TRANSACTION_ID with self.assertRaises(RuntimeError): transaction.batch_update(statements=[DML_QUERY]) - def _batch_update_helper(self, error_after=None, count=0): + def _batch_update_helper( + self, + error_after=None, + count=0, + request_options=None, + retry=gapic_v1.method.DEFAULT, + timeout=gapic_v1.method.DEFAULT, + begin=True, + use_multiplexed=False, + ): from google.rpc.status_pb2 import Status from google.protobuf.struct_pb2 import Struct - from google.cloud.spanner_v1.proto.result_set_pb2 import ResultSet - from google.cloud.spanner_v1.proto.result_set_pb2 import ResultSetStats - from google.cloud.spanner_v1.proto.spanner_pb2 import ExecuteBatchDmlResponse - from google.cloud.spanner_v1.proto.transaction_pb2 import TransactionSelector + from google.cloud.spanner_v1 import param_types + from google.cloud.spanner_v1 import ResultSet + from google.cloud.spanner_v1 import ExecuteBatchDmlRequest + from google.cloud.spanner_v1 import ExecuteBatchDmlResponse + from google.cloud.spanner_v1 import TransactionSelector from google.cloud.spanner_v1._helpers import _make_value_pb insert_dml = "INSERT INTO table(pkey, desc) VALUES (%pkey, %desc)" insert_params = {"pkey": 12345, "desc": "DESCRIPTION"} - insert_param_types = {"pkey": "INT64", "desc": "STRING"} + insert_param_types = {"pkey": param_types.INT64, "desc": param_types.STRING} update_dml = 'UPDATE table SET desc = desc + "-amended"' delete_dml = "DELETE FROM table WHERE desc IS NULL" @@ -465,78 +911,178 @@ def _batch_update_helper(self, error_after=None, count=0): delete_dml, ] - stats_pbs = [ - ResultSetStats(row_count_exact=1), - ResultSetStats(row_count_exact=2), - ResultSetStats(row_count_exact=3), + # These precommit tokens are intentionally returned with sequence numbers out + # of order to test that the transaction saves the precommit token with the + # highest sequence number. + precommit_tokens = [ + PRECOMMIT_TOKEN_PB_2, + PRECOMMIT_TOKEN_PB_0, + PRECOMMIT_TOKEN_PB_1, ] - if error_after is not None: - stats_pbs = stats_pbs[:error_after] - expected_status = Status(code=400) - else: - expected_status = Status(code=200) - expected_row_counts = [stats.row_count_exact for stats in stats_pbs] - response = ExecuteBatchDmlResponse( - status=expected_status, - result_sets=[ResultSet(stats=stats_pb) for stats_pb in stats_pbs], - ) + expected_status = Status(code=200) if error_after is None else Status(code=400) + + result_sets = [] + for i in range(len(precommit_tokens)): + if error_after is not None and i == error_after: + break + + result_set_args = {"stats": {"row_count_exact": i}} + + # If the transaction had not already begun, the first result + # set will include metadata with information about the transaction. 
+ if not begin and i == 0: + result_set_args["metadata"] = {"transaction": {"id": TRANSACTION_ID}} + + # Precommit tokens will be included in the result + # sets if the transaction is on a multiplexed session. + if use_multiplexed: + result_set_args["precommit_token"] = precommit_tokens[i] + + result_sets.append(ResultSet(**result_set_args)) + database = _Database() api = database.spanner_api = self._make_spanner_api() - api.execute_batch_dml.return_value = response + api.execute_batch_dml.return_value = ExecuteBatchDmlResponse( + status=expected_status, + result_sets=result_sets, + ) + session = _Session(database) transaction = self._make_one(session) - transaction._transaction_id = self.TRANSACTION_ID - transaction._execute_sql_count = count - - status, row_counts = transaction.batch_update(dml_statements) + transaction.transaction_tag = TRANSACTION_TAG + transaction._execute_sql_request_count = count + + if begin: + transaction._transaction_id = TRANSACTION_ID + + if request_options is None: + request_options = RequestOptions() + elif type(request_options) is dict: + request_options = RequestOptions(request_options) + + status, row_counts = transaction.batch_update( + dml_statements, + request_options=request_options, + retry=retry, + timeout=timeout, + ) self.assertEqual(status, expected_status) - self.assertEqual(row_counts, expected_row_counts) + self.assertEqual( + row_counts, [result_set.stats.row_count_exact for result_set in result_sets] + ) + + expected_transaction = ( + TransactionSelector(id=transaction._transaction_id) + if begin + else TransactionSelector( + begin=TransactionOptions(read_write=TransactionOptions.ReadWrite()) + ) + ) - expected_transaction = TransactionSelector(id=self.TRANSACTION_ID) expected_insert_params = Struct( fields={ key: _make_value_pb(value) for (key, value) in insert_params.items() } ) expected_statements = [ - { - "sql": insert_dml, - "params": expected_insert_params, - "param_types": insert_param_types, - }, - {"sql": update_dml}, - {"sql": delete_dml}, + ExecuteBatchDmlRequest.Statement( + sql=insert_dml, + params=expected_insert_params, + param_types=insert_param_types, + ), + ExecuteBatchDmlRequest.Statement(sql=update_dml), + ExecuteBatchDmlRequest.Statement(sql=delete_dml), ] + expected_request_options = request_options + expected_request_options.transaction_tag = TRANSACTION_TAG - api.execute_batch_dml.assert_called_once_with( + expected_request = ExecuteBatchDmlRequest( session=self.SESSION_NAME, transaction=expected_transaction, statements=expected_statements, seqno=count, - metadata=[("google-cloud-resource-prefix", database.name)], + request_options=expected_request_options, + ) + api.execute_batch_dml.assert_called_once_with( + request=expected_request, + metadata=[ + ("google-cloud-resource-prefix", database.name), + ("x-goog-spanner-route-to-leader", "true"), + ( + "x-goog-spanner-request-id", + f"1.{REQ_RAND_PROCESS_ID}.{_Client.NTH_CLIENT.value}.1.1.1", + ), + ], + retry=retry, + timeout=timeout, ) - self.assertEqual(transaction._execute_sql_count, count + 1) + self.assertEqual(transaction._execute_sql_request_count, count + 1) + self.assertEqual(transaction._transaction_id, TRANSACTION_ID) + + if use_multiplexed: + self.assertEqual(transaction._precommit_token, PRECOMMIT_TOKEN_PB_2) + + def test_batch_update_wo_begin(self): + self._batch_update_helper(begin=False) def test_batch_update_wo_errors(self): - self._batch_update_helper() + self._batch_update_helper( + request_options=RequestOptions( + 
priority=RequestOptions.Priority.PRIORITY_MEDIUM + ), + ) + + def test_batch_update_w_request_tag_success(self): + request_options = RequestOptions( + request_tag="tag-1", + ) + self._batch_update_helper(request_options=request_options) + + def test_batch_update_w_transaction_tag_success(self): + request_options = RequestOptions( + transaction_tag="tag-1-1", + ) + self._batch_update_helper(request_options=request_options) + + def test_batch_update_w_request_and_transaction_tag_success(self): + request_options = RequestOptions( + request_tag="tag-1", + transaction_tag="tag-1-1", + ) + self._batch_update_helper(request_options=request_options) + + def test_batch_update_w_request_and_transaction_tag_dictionary_success(self): + request_options = {"request_tag": "tag-1", "transaction_tag": "tag-1-1"} + self._batch_update_helper(request_options=request_options) + + def test_batch_update_w_incorrect_tag_dictionary_error(self): + request_options = {"incorrect_tag": "tag-1-1"} + with self.assertRaises(ValueError): + self._batch_update_helper(request_options=request_options) def test_batch_update_w_errors(self): self._batch_update_helper(error_after=2, count=1) def test_batch_update_error(self): + from google.cloud.spanner_v1 import Type + from google.cloud.spanner_v1 import TypeCode + database = _Database() api = database.spanner_api = self._make_spanner_api() api.execute_batch_dml.side_effect = RuntimeError() session = _Session(database) transaction = self._make_one(session) - transaction._transaction_id = self.TRANSACTION_ID + transaction._transaction_id = TRANSACTION_ID insert_dml = "INSERT INTO table(pkey, desc) VALUES (%pkey, %desc)" insert_params = {"pkey": 12345, "desc": "DESCRIPTION"} - insert_param_types = {"pkey": "INT64", "desc": "STRING"} + insert_param_types = { + "pkey": Type(code=TypeCode.INT64), + "desc": Type(code=TypeCode.STRING), + } update_dml = 'UPDATE table SET desc = desc + "-amended"' delete_dml = "DELETE FROM table WHERE desc IS NULL" @@ -549,49 +1095,52 @@ def test_batch_update_error(self): with self.assertRaises(RuntimeError): transaction.batch_update(dml_statements) - self.assertEqual(transaction._execute_sql_count, 1) + self.assertEqual(transaction._execute_sql_request_count, 1) - def test_context_mgr_success(self): - import datetime - from google.cloud.spanner_v1.proto.spanner_pb2 import CommitResponse - from google.cloud.spanner_v1.proto.transaction_pb2 import ( - Transaction as TransactionPB, - ) - from google.cloud._helpers import UTC - from google.cloud._helpers import _datetime_to_pb_timestamp + def test_batch_update_w_timeout_param(self): + self._batch_update_helper(timeout=2.0) - transaction_pb = TransactionPB(id=self.TRANSACTION_ID) - database = _Database() - now = datetime.datetime.utcnow().replace(tzinfo=UTC) - now_pb = _datetime_to_pb_timestamp(now) - response = CommitResponse(commit_timestamp=now_pb) - database = _Database() - api = database.spanner_api = _FauxSpannerAPI( - _begin_transaction_response=transaction_pb, _commit_response=response - ) - session = _Session(database) - transaction = self._make_one(session) + def test_batch_update_w_retry_param(self): + self._batch_update_helper(retry=gapic_v1.method.DEFAULT) + + def test_batch_update_w_timeout_and_retry_params(self): + self._batch_update_helper(retry=gapic_v1.method.DEFAULT, timeout=2.0) + + def test_batch_update_w_precommit_token(self): + self._batch_update_helper(use_multiplexed=True) + + def test_context_mgr_success(self): + transaction = build_transaction() + session = transaction._session + 
database = session._database
+        commit = database.spanner_api.commit
 
         with transaction:
             transaction.insert(TABLE_NAME, COLUMNS, VALUES)
 
-        self.assertEqual(transaction.committed, now)
-
-        session_id, mutations, txn_id, metadata = api._committed
-        self.assertEqual(session_id, self.SESSION_NAME)
-        self.assertEqual(txn_id, self.TRANSACTION_ID)
-        self.assertEqual(mutations, transaction._mutations)
-        self.assertEqual(metadata, [("google-cloud-resource-prefix", database.name)])
+        self.assertEqual(transaction.committed, commit.return_value.commit_timestamp)
+
+        commit.assert_called_once_with(
+            request=CommitRequest(
+                session=session.name,
+                transaction_id=transaction._transaction_id,
+                request_options=RequestOptions(),
+                mutations=transaction._mutations,
+            ),
+            metadata=[
+                ("google-cloud-resource-prefix", database.name),
+                ("x-goog-spanner-route-to-leader", "true"),
+                ("x-goog-spanner-request-id", self._build_request_id(database)),
+            ],
+        )
 
     def test_context_mgr_failure(self):
         from google.protobuf.empty_pb2 import Empty
 
         empty_pb = Empty()
-        from google.cloud.spanner_v1.proto.transaction_pb2 import (
-            Transaction as TransactionPB,
-        )
+        from google.cloud.spanner_v1 import Transaction as TransactionPB
 
-        transaction_pb = TransactionPB(id=self.TRANSACTION_ID)
+        transaction_pb = TransactionPB(id=TRANSACTION_ID)
         database = _Database()
         api = database.spanner_api = _FauxSpannerAPI(
             _begin_transaction_response=transaction_pb, _rollback_response=empty_pb
@@ -605,22 +1154,65 @@ def test_context_mgr_failure(self):
                 raise Exception("bail out")
 
         self.assertEqual(transaction.committed, None)
+        # The rollback RPC is not called, because there is no transaction ID to
+        # roll back; the rolled_back flag is still set to True.
         self.assertTrue(transaction.rolled_back)
         self.assertEqual(len(transaction._mutations), 1)
 
-        self.assertEqual(api._committed, None)
 
-        session_id, txn_id, metadata = api._rolled_back
-        self.assertEqual(session_id, session.name)
-        self.assertEqual(txn_id, self.TRANSACTION_ID)
-        self.assertEqual(metadata, [("google-cloud-resource-prefix", database.name)])
+    @staticmethod
+    def _build_span_attributes(
+        database: Database, **extra_attributes
+    ) -> Mapping[str, str]:
+        """Builds the span attributes for the given database, merging in any extra attributes."""
+
+        attributes = enrich_with_otel_scope(
+            {
+                "db.type": "spanner",
+                "db.url": "spanner.googleapis.com",
+                "db.instance": database.name,
+                "net.host.name": "spanner.googleapis.com",
+                "gcp.client.service": "spanner",
+                "gcp.client.version": LIB_VERSION,
+                "gcp.client.repo": "googleapis/python-spanner",
+            }
+        )
+
+        if extra_attributes:
+            attributes.update(extra_attributes)
+
+        return attributes
+
+    @staticmethod
+    def _build_request_id(
+        database: Database, nth_request: int = None, attempt: int = 1
+    ) -> str:
+        """Builds a request ID for a Spanner Client API request with the given database, request number, and attempt number."""
+
+        client = database._instance._client
+        nth_request = nth_request or client._nth_request.value
+
+        return build_request_id(
+            client_id=client._nth_client_id,
+            channel_id=database._channel_id,
+            nth_request=nth_request,
+            attempt=attempt,
+        )
 
 
 class _Client(object):
+    NTH_CLIENT = AtomicCounter()
+
     def __init__(self):
-        from google.cloud.spanner_v1.proto.spanner_pb2 import ExecuteSqlRequest
+        from google.cloud.spanner_v1 import ExecuteSqlRequest
 
         self._query_options = ExecuteSqlRequest.QueryOptions(optimizer_version="1")
+        self.directed_read_options = None
+        self._nth_client_id = _Client.NTH_CLIENT.increment()
+        self._nth_request = 
AtomicCounter() + + @property + def _next_nth_request(self): + return self._nth_request.increment() class _Instance(object): @@ -632,40 +1224,78 @@ class _Database(object): def __init__(self): self.name = "testing" self._instance = _Instance() + self._route_to_leader_enabled = True + self._directed_read_options = None + self.default_transaction_options = DefaultTransactionOptions() + @property + def _next_nth_request(self): + return self._instance._client._next_nth_request -class _Session(object): + @property + def _nth_client_id(self): + return self._instance._client._nth_client_id + def metadata_with_request_id( + self, nth_request, nth_attempt, prior_metadata=[], span=None + ): + return _metadata_with_request_id( + self._nth_client_id, + self._channel_id, + nth_request, + nth_attempt, + prior_metadata, + span, + ) + + @property + def _channel_id(self): + return 1 + + +class _Session(object): _transaction = None def __init__(self, database=None, name=TestTransaction.SESSION_NAME): self._database = database self.name = name + @property + def session_id(self): + return self.name -class _FauxSpannerAPI(object): +class _FauxSpannerAPI(object): _committed = None def __init__(self, **kwargs): self.__dict__.update(**kwargs) - def begin_transaction(self, session, options_, metadata=None): - self._begun = (session, options_, metadata) + def begin_transaction(self, session=None, options=None, metadata=None): + self._begun = (session, options, metadata) return self._begin_transaction_response - def rollback(self, session, transaction_id, metadata=None): + def rollback(self, session=None, transaction_id=None, metadata=None): self._rolled_back = (session, transaction_id, metadata) return self._rollback_response def commit( self, - session, - mutations, - transaction_id="", - single_use_transaction=None, + request=None, metadata=None, ): - assert single_use_transaction is None - self._committed = (session, mutations, transaction_id, metadata) + assert not request.single_use_transaction + + max_commit_delay = None + if type(request).pb(request).HasField("max_commit_delay"): + max_commit_delay = request.max_commit_delay + + self._committed = ( + request.session, + request.mutations, + request.transaction_id, + request.request_options, + max_commit_delay, + metadata, + ) return self._commit_response diff --git a/tests/unit/testdata/singer.proto b/tests/unit/testdata/singer.proto new file mode 100644 index 0000000000..1a995614a7 --- /dev/null +++ b/tests/unit/testdata/singer.proto @@ -0,0 +1,17 @@ +syntax = "proto3"; + +package examples.spanner.music; + +message SingerInfo { + optional int64 singer_id = 1; + optional string birth_date = 2; + optional string nationality = 3; + optional Genre genre = 4; +} + +enum Genre { + POP = 0; + JAZZ = 1; + FOLK = 2; + ROCK = 3; +} diff --git a/tests/unit/testdata/singer_pb2.py b/tests/unit/testdata/singer_pb2.py new file mode 100644 index 0000000000..51b049865c --- /dev/null +++ b/tests/unit/testdata/singer_pb2.py @@ -0,0 +1,29 @@ +# -*- coding: utf-8 -*- +# Generated by the protocol buffer compiler. DO NOT EDIT! 
+# source: singer.proto +# Protobuf Python Version: 4.25.1 +"""Generated protocol buffer code.""" +from google.protobuf import descriptor as _descriptor +from google.protobuf import descriptor_pool as _descriptor_pool +from google.protobuf import symbol_database as _symbol_database +from google.protobuf.internal import builder as _builder + +# @@protoc_insertion_point(imports) + +_sym_db = _symbol_database.Default() + + +DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile( + b'\n\x0csinger.proto\x12\x16\x65xamples.spanner.music"\xc1\x01\n\nSingerInfo\x12\x16\n\tsinger_id\x18\x01 \x01(\x03H\x00\x88\x01\x01\x12\x17\n\nbirth_date\x18\x02 \x01(\tH\x01\x88\x01\x01\x12\x18\n\x0bnationality\x18\x03 \x01(\tH\x02\x88\x01\x01\x12\x31\n\x05genre\x18\x04 \x01(\x0e\x32\x1d.examples.spanner.music.GenreH\x03\x88\x01\x01\x42\x0c\n\n_singer_idB\r\n\x0b_birth_dateB\x0e\n\x0c_nationalityB\x08\n\x06_genre*.\n\x05Genre\x12\x07\n\x03POP\x10\x00\x12\x08\n\x04JAZZ\x10\x01\x12\x08\n\x04\x46OLK\x10\x02\x12\x08\n\x04ROCK\x10\x03\x62\x06proto3' +) + +_globals = globals() +_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals) +_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, "singer_pb2", _globals) +if _descriptor._USE_C_DESCRIPTORS == False: + DESCRIPTOR._options = None + _globals["_GENRE"]._serialized_start = 236 + _globals["_GENRE"]._serialized_end = 282 + _globals["_SINGERINFO"]._serialized_start = 41 + _globals["_SINGERINFO"]._serialized_end = 234 +# @@protoc_insertion_point(module_scope)
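
A note on the precommit-token behavior exercised by ``_batch_update_helper``
above: the tests deliberately return tokens with sequence numbers out of order
and assert that the transaction ends up holding the token with the highest
sequence number. The sketch below is a minimal illustration of that selection
rule only — it is not the library's implementation, and ``PrecommitToken`` and
``track_precommit_token`` are hypothetical stand-ins, not google-cloud-spanner
APIs.

.. code-block:: python

    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class PrecommitToken:
        # Stand-in for the precommit-token message carried on each result set.
        seq_num: int
        precommit_token: bytes


    def track_precommit_token(
        current: Optional[PrecommitToken], incoming: Optional[PrecommitToken]
    ) -> Optional[PrecommitToken]:
        """Keep whichever token has the higher sequence number."""
        if incoming is None:
            return current  # result set carried no token; nothing to update
        if current is None or incoming.seq_num > current.seq_num:
            return incoming
        return current


    # Tokens arrive out of order (2, 0, 1), mirroring the helper above.
    latest = None
    for seq in (2, 0, 1):
        latest = track_precommit_token(
            latest, PrecommitToken(seq_num=seq, precommit_token=b"token")
        )
    assert latest.seq_num == 2  # the highest sequence number wins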